
Ethics Report: OpenAI

Rubric: Organisation v4 · Reviewed 3/22/2026

34/100
Weak

Minimal ethical infrastructure

Safety & Harm Reduction

9/25
1.1

Dedicated safety / responsible-use policy

Publishes a dedicated safety/responsible-use policy that is publicly accessible.

2/5
Generic

Evidence

OpenAI has published general safety policies and responsible use guidelines that are mentioned in multiple product documentation pages. The article references 'usage policies that prohibit the generation of harmful content, such as hate speech, instructions for illegal acts, and sexually explicit material' and mentions a 'Moderation API to help filter these outputs.' However, no evidence indicates a dedicated standalone safety policy page with specific, enforceable terms and defined prohibited uses comparable to industry leaders. The policies are integrated into broader terms of service and product documentation rather than presented as a comprehensive, dedicated safety policy.

1.2

Public bug-bounty or red-team program

Operates or funds a public bug-bounty / red-team program. Internal-only programs score 0.

2/5
Exists

Evidence

The article references 'red teaming—a process where external experts stress-test models for vulnerabilities' as part of OpenAI's safety practices. This indicates the existence of a red-team program where external experts evaluate models. However, the article provides no evidence of published results or findings from completed red-team rounds. The program is mentioned as an existing practice but without published documentation of outcomes, findings, or systematic results disclosure.

1.3

Published safety evaluation within last 24 months

Published safety evaluation/audit/model card within last 24 months with quantitative benchmarks on harmful outputs (bias, toxicity, hallucination, etc.).

3/5
Comprehensive

Evidence

OpenAI has published quantitative safety evaluations within the last 24 months. The article states: 'In late 2023, OpenAI introduced its Preparedness Framework, a living document designed to identify, track, and mitigate catastrophic risks associated with increasingly capable AI models. The framework categorizes risks into four primary areas: cybersecurity, chemical, biological, radiological, and nuclear (CBRN) threats, persuasion, and model autonomy. Under this protocol, OpenAI evaluates models against specific safety thresholds.' This demonstrates comprehensive quantitative safety benchmarks across multiple harm categories. However, there is no evidence of third-party or independent audits of these evaluations, which would be required for a score of 5.

1.4

Documented content-filtering / guardrails

Documents content-filtering/guardrails on production endpoints with user-facing documentation.

2/5
Mentioned

Evidence

The article mentions that OpenAI 'provides developers and users with a Moderation API to help filter these outputs' and references content filtering in product documentation. The article also notes 'system-level filters' are implemented. However, the evidence describes only that filters exist and are mentioned in documentation, without detailed explanation of what is filtered, why, or how users can report false positives. This meets the threshold for 'Mentioned' (score 2) but not 'Detailed' (score 5).

1.5

Documented incident-response process

Documented incident-response process for safety failures with a reporting mechanism. Generic "contact us" alone = 0.

0/5
None

Evidence

The article provides no evidence of a documented incident-response process with defined reporting mechanisms, SLAs, or timelines. While OpenAI certainly has operational security contact mechanisms, the article does not reference a publicly documented incident-response process or published response timelines. Generic 'contact us' mechanisms without published SLAs or response processes do not qualify for a score above 0.

Transparency & Trust

4/25
2.1

Training data provenance disclosure

Publishes training data provenance disclosures identifying sources/types/datasets. "Publicly available data" alone = 0.

2/5
General categories

Evidence

The article indicates that OpenAI's training data includes 'large datasets' and references 'vast quantities of text available for language models' and 'publicly available internet data.' The article mentions that GPT models are 'trained on large datasets to predict the next token in a sequence' and that OpenAI claims fair use of 'publicly available internet data' for training. However, these references constitute only general categories (web data, internet data) without specific dataset names, sources, or composition/curation details that would merit a score of 3 or higher.

2.2

Meaningful technical documentation for flagship model(s)

Publishes meaningful technical documentation (system card, tech report, research paper) for flagship model(s) regardless of whether weights are released.

2/5
Basic

Evidence

OpenAI has published technical documentation for GPT models with some technical detail. The article states: 'GPT-3 was noted for its 175 billion parameters, a significant increase over previous iterations' and mentions that 'OpenAI researchers published Scaling Laws for Neural Language Models, which formalized these relationships.' However, the article also notes that 'the accompanying technical report provided no details regarding the model's size, hardware, or training methods' for GPT-4, citing 'competitive landscape' and 'safety implications' as reasons. The documentation provides basic technical information (parameter counts, model family) but lacks the substantive architecture, scale, training approach, and limitation disclosures required for a score of 5.

2.3

Transparency report (takedowns, government requests, etc.)

Publishes a transparency report covering takedowns, government requests, enforcement stats, and/or safety incidents.

0/5
None

Evidence

The article contains no reference to OpenAI publishing a transparency report covering government requests, takedowns, or other disclosure categories. There is no mention of any report documenting government data requests, content removal requests, or similar transparency metrics that would be standard in such a report. On the available evidence, no public transparency report can be verified.

2.4

ToS training data use disclosure with opt-out

ToS explicitly states whether user inputs/outputs are used for training, with opt-out mechanism if applicable.

0/5
Vague or absent

Evidence

The article states that 'OpenAI asserts that data submitted to the Enterprise tier is not used to train its models' but documents no clear Terms of Service disclosure of whether standard-tier user data is used for training, nor any opt-out mechanism. The article references the dispute over training data use but does not document explicit ToS language or user opt-out options. The vagueness regarding standard-tier data usage and the absence of a disclosed opt-out mechanism result in a score of 0.

2.5

Creator/artist content provenance disclosure

Discloses training data provenance specifically for creator/artist content (copyrighted or artist-created works).

0/5
None

Evidence

While the article extensively discusses OpenAI's use of copyrighted material for training (particularly the New York Times lawsuit alleging 'millions of its articles were used without permission'), there is no evidence of proactive disclosure by OpenAI regarding creative content provenance, sources, or licensing arrangements. OpenAI's defense of fair use does not constitute positive disclosure of creator/artist content provenance. No specific disclosure of content types, sources, or licensing arrangements for creative works is mentioned.

Human & Creator Impact

4/25
3.1

Artist/creator opt-out or removal mechanism

Documented artist/creator opt-out or removal mechanism. "We respect copyright" alone = 0.

0/5
None

Evidence

The article makes no reference to any artist or creator opt-out mechanism, removal form, email process, or integration (such as with Spawning). While the article extensively discusses lawsuits from The New York Times over copyrighted content and general intellectual property disputes, it provides no evidence that OpenAI has established any actual mechanism for artists or creators to request removal of their work from training data or generated outputs. A general statement that OpenAI 'respects copyright' without a functional opt-out mechanism does not qualify.

3.2

Public licensing or revenue-sharing with creators

Public licensing agreements or revenue-sharing partnerships with creators/publishers/media organizations.

0/5
None

Evidence

The article contains no reference to any publicly announced licensing deals with named creative organizations, artists, or creators, nor any structured revenue-sharing arrangements with creators. While the article mentions that DALL-E, Sora, and other models generate creative content, there is no evidence of OpenAI establishing formal partnerships or licensing arrangements with the creative industries. The absence of any such public announcements results in a score of 0.

3.3

Provenance/attribution tooling for AI outputs

Provenance/attribution tooling for AI-generated outputs (C2PA, watermarking, metadata tagging).

0/5
None

Evidence

The article provides no evidence that OpenAI has implemented or committed to any provenance or attribution tooling for AI-generated outputs. There is no mention of C2PA metadata, watermarking, SynthID, or any other mechanism to mark or trace AI-generated content. While OpenAI maintains various technical tools for users, none are documented in the article as addressing provenance/attribution for outputs.

3.4

Workforce impact assessment or commitment

Published workforce impact assessment or commitment (labor market effects, reskilling, human-in-the-loop programs).

2/5
Statement only

Evidence

The article states: 'OpenAI has publicly supported the exploration of Universal Basic Income (UBI) as a potential response to long-term labor displacement, with its leadership participating in large-scale studies on unconditional cash transfers.' Additionally, the article notes that 'A 2023 study co-authored by researchers from OpenAI and the University of Pennsylvania estimated that approximately 80% of the United States workforce could have at least 10% of their work tasks impacted by the integration of large language models.' This demonstrates a published statement naming a specific initiative (UBI exploration and workforce impact research) but without published measurable outcomes or a comprehensive impact assessment program.

3.5

Does NOT claim ownership over user-generated outputs

ToS does NOT claim copyright/exclusive ownership over user-generated outputs. Silent ToS = 0.

2/5
Broad license retained

Evidence

The article provides insufficient detail on user-output ownership in the Terms of Service. It states that OpenAI provides developer APIs and that users can create and share custom GPTs through the GPT Store, but does not specify whether OpenAI claims ownership of generated outputs, retains a broad license, or grants full user ownership. Without clear evidence of full-ownership language in the ToS, and given OpenAI's commercial interest in its platform, the most conservative inference is that OpenAI retains some broad license rather than explicitly granting full user ownership.

Governance

17/25
4.1

Discloses corporate structure, investors, and board

Publicly discloses corporate structure, major investors, and board composition.

5/5
Both disclosed

Evidence

The article provides extensive disclosure of both OpenAI's board and investors. Board members are named: 'the board is chaired by Bret Taylor and includes independent directors Adam D'Angelo, Dr. Sue Desmond-Hellmann, Dr. Zico Kolter, Retired U.S. Army General Paul M. Nakasone, Adebayo Ogunlesi, and Nicole Seligman, alongside CEO Sam Altman.' Major investors are disclosed: 'Microsoft remains the organization's largest individual investor and strategic partner, having committed a total of $13 billion' with early backers including 'Altman, Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, and Amazon Web Services.' Both board members and major investors are publicly disclosed through official channels.

4.2

Independent ethics/safety advisory board

Independent ethics/safety advisory board with verifiably external members. Internal trust & safety team alone = 0.

3/5
One external member

Evidence

OpenAI has established an independent safety advisory body: 'the Safety and Security Committee, which was established in May 2024 to advise the board of directors on critical safety decisions' and is 'currently chaired by Dr. Kolter.' The article indicates that 'Dr. Kolter serves as a non-voting observer to the Group board to ensure a distinct focus on safety governance,' suggesting some structural independence. However, the article provides no published mandate, no member information beyond the chair, and no evidence of published recommendations or reports from this committee. This meets the threshold for 'One external member' (score 3) but falls short of 'Fully independent, published' (score 5).

4.3

Legal corporate structure preserving safety/mission

Corporate structure preserves safety/mission mandate via a legal mechanism (PBC, capped-profit, charter clause).

5/5
Legal mechanism verified

Evidence

The article documents a verifiable legal mechanism for preserving the safety mission: 'This structure was significantly updated on October 28, 2025, following a recapitalization that converted the organization's for-profit subsidiary into a Public Benefit Corporation (PBC) known as OpenAI Group PBC. As a PBC, the entity is legally required to advance its mission of developing safe artificial general intelligence (AGI) while considering the interests of all stakeholders, rather than prioritizing shareholder returns alone.' A Public Benefit Corporation is a verifiable legal mechanism with specific filings and legal obligations. The article further notes that 'Following its conversion to a Public Benefit Corporation in 2025, OpenAI is legally required to balance the interests of its shareholders with its mission.'

4.4

Public policy engagement or lobbying disclosure

Public policy engagement or lobbying disclosure: positions on AI regulation, lobbying spend, governance framework signatory.

2/5
Framework or positions

Evidence

The article indicates OpenAI engages with policy and standards bodies but provides limited detail. It mentions Microsoft's exclusive-provider role and broader 'competitive landscape' discussions, but does not document specific public policy positions or framework participation. The article notes that Musk's lawsuit involved disputes over OpenAI's commercial orientation and references 'internal records released by OpenAI,' suggesting some policy engagement, but there is insufficient evidence of published policy positions or governance frameworks that OpenAI has signed. This constitutes minimal documented engagement.

4.5

No senior departures citing safety/ethics (last 36 months)

No publicly documented senior leadership (VP+) departures or whistleblower events citing safety/ethics concerns in the last 36 months. Only on-record statements count.

2/5
One departure

Evidence

The article documents one senior departure citing safety/ethics concerns: 'Jan Leike...expressed concerns that the team was not being allocated the promised resources to conduct its work' and 'Leike publicly stated that the organization's safety culture and processes had taken a backseat to shiny products.' Leike's departure from the Superalignment team in May 2024 is documented as directly citing safety culture concerns. Additionally, Ilya Sutskever's departure is mentioned in relation to the Superalignment team disbanding, though his specific stated concerns are less explicitly detailed ('later expressed regret for his role in the removal' of Altman). This constitutes at least one documented senior departure citing safety/ethics concerns, meeting the score of 2.


Scores are generated using the Amallo Ethics Rubric (Organisation v4) based on publicly verifiable information. Each criterion is scored against defined tiers — only exact tier values are valid. Evidence is sourced from official documentation, research papers, and independent analyses. Scores may change as new information becomes available.
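The aggregation implied above can be sketched in a few lines. This is a minimal illustration, not the Amallo implementation: the valid tier set {0, 2, 3, 5} is inferred from the scores shown in this report, and the category and criterion structure is taken directly from the sections above.

```python
# Sketch of the rubric aggregation: each criterion must score an exact
# tier value, each category of five criteria sums to a /25 subtotal,
# and the subtotals sum to the /100 headline score.
# Assumption: tier set {0, 2, 3, 5} is inferred from this report.
VALID_TIERS = {0, 2, 3, 5}

scores = {
    "Safety & Harm Reduction": [2, 2, 3, 2, 0],
    "Transparency & Trust":    [2, 2, 0, 0, 0],
    "Human & Creator Impact":  [0, 0, 0, 2, 2],
    "Governance":              [5, 3, 5, 2, 2],
}

def aggregate(scores):
    totals = {}
    for category, values in scores.items():
        # Only exact tier values are valid under the rubric.
        if any(v not in VALID_TIERS for v in values):
            raise ValueError(f"non-tier value in {category}")
        totals[category] = sum(values)  # category subtotal out of 25
    return totals, sum(totals.values())  # grand total out of 100

totals, overall = aggregate(scores)
# overall == 34, matching the 34/100 headline score above
```

Run against the per-criterion scores in this report, the subtotals reproduce the category figures (9, 4, 4, 17) and the 34/100 total shown in the header.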