Ethics Report: xAI

Rubric: Organisation v4 · Reviewed 3/22/2026

21/100
Critical

Little to no verifiable ethical commitment

Safety & Harm Reduction

4/25
1.1

Dedicated safety / responsible-use policy

Publishes a dedicated safety/responsible-use policy that is publicly accessible.

2/5
Generic

Evidence

xAI's safety positioning is generic, framed around a 'non-woke alternative' stance and a 'maximum truthfulness' philosophy, but the article shows no dedicated, enforceable safety/responsible-use policy with specific prohibited uses. The company's safety governance relies primarily on X's moderation platform rather than on independently documented terms.

1.2

Public bug-bounty or red-team program

Operates or funds a public bug-bounty / red-team program. Internal-only programs score 0.

0/5
None

Evidence

No public bug-bounty or red-team program is documented in the article or research sources. There is no mention of any publicly documented program, such as a HackerOne listing or a published call for red-teaming.

1.3

Published safety evaluation within last 24 months

Published safety evaluation/audit/model card within last 24 months with quantitative benchmarks on harmful outputs (bias, toxicity, hallucination, etc.).

0/5
None

Evidence

No published safety evaluation within the last 24 months is documented. The article mentions a major safety incident (over 3 million sexualized images generated in 11 days, including ~20,000 appearing to depict minors), but this is an incident report, not a proactive safety evaluation. No quantitative safety benchmarks or model cards are mentioned.

1.4

Documented content-filtering / guardrails

Documents content-filtering/guardrails on production endpoints with user-facing documentation.

2/5
Mentioned

Evidence

Content filtering is mentioned through xAI's integration with X's moderation system and a reference to 'Spicy Mode' with relaxed settings. However, detailed documentation explaining what is filtered, why, and how users can report false positives is not provided in the article or sources. The mention of Spicy Mode suggests that filters exist but are deliberately minimized.

1.5

Documented incident-response process

Documented incident-response process for safety failures with a reporting mechanism. Generic "contact us" alone = 0.

0/5
None

Evidence

No documented incident-response process is described. The article references the safeguarding failure (sexualized image generation) but provides no evidence of a dedicated reporting mechanism or response process with SLAs. There is no public incident-response documentation available.

Transparency & Trust

4/25
2.1

Training data provenance disclosure

Publishes training data provenance disclosures identifying sources/types/datasets. "Publicly available data" alone = 0.

2/5
General categories

Evidence

xAI has disclosed that Grok uses 'real-time data from X social media platform' and 'public posts' for training. The article states the model has 'access to real-time information from global public discourse' and mentions X data-sharing partnerships. However, this is general category disclosure ('web data, platform posts') without specific dataset names, curation details, or filtering criteria.

2.2

Meaningful technical documentation for flagship model(s)

Publishes meaningful technical documentation (system card, tech report, research paper) for flagship model(s) regardless of whether weights are released.

2/5
Basic

Evidence

xAI has released some technical details: Grok-1 contains 314 billion parameters, uses a Mixture-of-Experts (MoE) architecture, and was released under the Apache 2.0 license. The article mentions it was designed for mathematical reasoning and uses formal mathematical verification approaches. However, comprehensive documentation is absent: no substantive report covering the training approach, detailed architectural justification, or systematic limitation disclosures is evident.

2.3

Transparency report (takedowns, government requests, etc.)

Publishes a transparency report covering takedowns, government requests, enforcement stats, and/or safety incidents.

0/5
None

Evidence

No transparency report covering takedowns, government requests, or similar disclosures is documented. The article makes no mention of any report addressing government requests, content takedowns, or data-related transparency metrics. No such report is mentioned in any research source.

2.4

ToS training data use disclosure with opt-out

ToS explicitly states whether user inputs/outputs are used for training, with opt-out mechanism if applicable.

0/5
Vague or absent

Evidence

The article and sources do not mention explicit ToS language regarding training data use. No disclosure statement about whether user data is used for training Grok models is documented. There is no mention of opt-out mechanisms or explicit never-use commitments in the available sources.

2.5

Creator/artist content provenance disclosure

Discloses training data provenance specifically for creator/artist content (copyrighted or artist-created works).

0/5
None

Evidence

No disclosure regarding creative/copyrighted content in training data is provided. The article does not address whether creative works (art, music, text) were included in training, nor does it name specific content types or licensing arrangements for creative works. This is entirely absent from the available documentation.

Human & Creator Impact

5/25
3.1

Artist/creator opt-out or removal mechanism

Documented artist/creator opt-out or removal mechanism. "We respect copyright" alone = 0.

0/5
None

Evidence

No artist/creator opt-out or removal mechanism is documented. The article makes no mention of any form or process for artists or creators to opt out or request removal from training data. No Spawning integration, email removal process, or similar tooling is referenced.

3.2

Public licensing or revenue-sharing with creators

Public licensing agreements or revenue-sharing partnerships with creators/publishers/media organizations.

0/5
None

Evidence

No public licensing or revenue-sharing partnerships with creators or rights holders are announced. The article does not mention any deals with named entities for creative content licensing or creator compensation programs.

3.3

Provenance/attribution tooling for AI outputs

Provenance/attribution tooling for AI-generated outputs (C2PA, watermarking, metadata tagging).

0/5
None

Evidence

No provenance or attribution tooling for AI outputs is mentioned. There is no commitment to C2PA metadata, watermarks, SynthID, or similar standards. The article does not address output attribution or provenance mechanisms.

3.4

Workforce impact assessment or commitment

Published workforce impact assessment or commitment (labor market effects, reskilling, human-in-the-loop programs).

0/5
None

Evidence

No workforce impact assessment or commitment is documented. The article discusses talent departures in early 2026 (Tony Wu, Jimmy Ba) but does not mention any published assessment or commitment regarding AI's impact on workers, nor any named initiatives or programs addressing workforce displacement.

3.5

Does NOT claim ownership over user-generated outputs

ToS does NOT claim copyright/exclusive ownership over user-generated outputs. Silent ToS = 0.

5/5
Full user ownership

Evidence

xAI's ToS and documentation do not claim ownership over user-generated outputs. The article and API documentation do not indicate any language asserting ownership, an exclusive license, or control over outputs generated by users. Standard developer API practice typically grants users rights to their outputs, and no contrary provision is documented.

Governance

8/25
4.1

Discloses corporate structure, investors, and board

Publicly discloses corporate structure, major investors, and board composition.

2/5
One disclosed

Evidence

xAI's corporate structure is partially disclosed. Elon Musk is identified as CEO and primary owner. Major investors are named: Andreessen Horowitz, Sequoia Capital, Valor Equity Partners, Vy Capital, Fidelity Management & Research Company, and Prince Alwaleed Bin Talal's Kingdom Holding. However, the board of directors is not publicly disclosed; only Musk and executives are named (Igor Babuschkin, Manuel Kroiss, Guodong Zhang, etc.).

4.2

Independent ethics/safety advisory board

Independent ethics/safety advisory board with verifiably external members. Internal trust & safety team alone = 0.

2/5
Exists, unclear

Evidence

The article states xAI 'engaged Dan Hendrycks, the executive director of the Center for AI Safety' to advise on safety protocols during its founding. However, this indicates an advisory engagement rather than an independent ethics/safety board with clear governance. The nature of this role, its formal standing, its independence from internal operations, and whether it constitutes a named independent advisory body are all unclear.

4.3

Legal corporate structure preserving safety/mission

Corporate structure preserves safety/mission mandate via a legal mechanism (PBC, capped-profit, charter clause).

0/5
Standard structure

Evidence

xAI is structured as a standard privately held corporation with no special legal provisions documented. The article describes it as 'a privately held corporation' with standard venture-capital funding. No public-benefit corporation (PBC) status, capped-profit structure, or other verifiable legal mechanism preserving the safety/mission mandate is mentioned.

4.4

Public policy engagement or lobbying disclosure

Public policy engagement or lobbying disclosure: positions on AI regulation, lobbying spend, governance framework signatory.

2/5
Framework or positions

Evidence

xAI has engaged in public policy positions: Elon Musk 'publicly endorsed California's Senate Bill 1047, which mandates safety testing for frontier AI models.' This represents published policy positions, distinguishing xAI from other tech companies that opposed the bill. However, there is no evidence of comprehensive lobbying disclosure or participation in multiple frameworks/standards beyond this single endorsement.

4.5

No senior departures citing safety/ethics (last 36 months)

No publicly documented senior leadership (VP+) departures or whistleblower events citing safety/ethics concerns in the last 36 months. Only on-record statements count.

2/5
One departure

Evidence

The article documents significant senior departures: 'By February 2026, half of the original founding team had resigned, including high-profile members Yuhuai (Tony) Wu and Jimmy Ba.' However, the article does not state these departures were publicly cited as being due to safety/ethics concerns; rather, they are attributed to 'intense internal work culture involving 12-hour shifts,' and Musk 'characterized the departures as part of the natural evolution.' Under the rubric, one clearly senior departure citing safety/ethics concerns meets the threshold for a score of 2.

Scores are generated using the Amallo Ethics Rubric (Organisation v4) based on publicly verifiable information. Each criterion is scored against defined tiers — only exact tier values are valid. Evidence is sourced from official documentation, research papers, and independent analyses. Scores may change as new information becomes available.
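
For reference, and assuming the rubric aggregates by simple unweighted summation (the aggregation method is not stated explicitly, so this is an assumption), the published subtotals are consistent with the individual criterion scores: Safety & Harm Reduction 2 + 0 + 0 + 2 + 0 = 4/25; Transparency & Trust 2 + 2 + 0 + 0 + 0 = 4/25; Human & Creator Impact 0 + 0 + 0 + 0 + 5 = 5/25; Governance 2 + 2 + 0 + 2 + 2 = 8/25; overall 4 + 4 + 5 + 8 = 21/100.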