
Ethics Report: Meta

Rubric: Organisation v4 · Reviewed 3/22/2026

41/100
Weak

Minimal ethical infrastructure

Safety & Harm Reduction

11/25
1.1

Dedicated safety / responsible-use policy

Publishes a dedicated safety/responsible-use policy that is publicly accessible.

5/5
Full

Evidence

Meta maintains a dedicated Responsible Use Guide for its Llama language models, published at ai.meta.com/static-resource/responsible-use-guide/. The guide provides specific guidance, including responsible-AI considerations, mitigation strategies, and system-design principles. Additionally, Meta has published comprehensive responsible-AI approach documentation at ai.meta.com/blog/responsible-ai-connect-2024/, detailing safety policies for generative AI products integrated across WhatsApp, Messenger, and Instagram.

1.2

Public bug-bounty or red-team program

Operates or funds a public bug-bounty / red-team program. Internal-only programs score 0.

2/5
Exists

Evidence

Meta does not operate a public AI-focused bug-bounty or red-team program with published results. While the company maintains a general security bug-bounty program and transparency initiatives through its Transparency Center, no dedicated public AI bug-bounty listing or red-team call with published findings is documented in the available sources. The article mentions FAIR's open-science approach but does not reference a formal public red-teaming program.

1.3

Published safety evaluation within last 24 months

Published safety evaluation/audit/model card within last 24 months with quantitative benchmarks on harmful outputs (bias, toxicity, hallucination, etc.).

2/5
Limited

Evidence

Meta has published limited safety evaluation materials. The Responsible Use Guide and Connect 2024 safety documentation provide some safety information, but these do not constitute comprehensive quantitative safety benchmarks across multiple harm categories. The sources mention safety considerations and approach but lack detailed quantitative safety evaluation metrics comparable to formal model cards with extensive safety benchmarking.

1.4

Documented content-filtering / guardrails

Documents content-filtering/guardrails on production endpoints with user-facing documentation.

2/5
Mentioned

Evidence

Meta documents content filtering and guardrails in its safety and responsible use documentation. The article states that Meta uses AI to manage content moderation across platforms, identifying and removing prohibited material in over 100 languages, including hate speech and misinformation. However, detailed documentation explaining what is filtered, why, and how users can report false positives is not comprehensively provided in the publicly available sources.

1.5

Documented incident-response process

Documented incident-response process for safety failures with a reporting mechanism. Generic "contact us" alone = 0.

0/5
None

Evidence

No public documentation of a dedicated incident-response process with defined timelines or SLAs is available. While Meta maintains safety approaches and reports, the sources do not reference a documented incident-response process specifically for safety incidents with stated response timelines or service level agreements.

Transparency & Trust

12/25
2.1

Training data provenance disclosure

Publishes training data provenance disclosures identifying sources/types/datasets. "Publicly available data" alone = 0.

2/5
General categories

Evidence

Meta discloses only general categories of training data sources for its Llama models. The article mentions that Llama models are trained on publicly available data and released with open weights, and sources reference training on up to 15 trillion tokens. However, specific dataset names, detailed source identification, or meaningful composition and curation details are not substantively disclosed in the available public sources.

2.2

Meaningful technical documentation for flagship model(s)

Publishes meaningful technical documentation (system card, tech report, research paper) for flagship model(s) regardless of whether weights are released.

5/5
Substantive

Evidence

Meta provides substantive technical documentation for its flagship Llama models. The ai.meta.com resources page and research publications document architecture details, parameter counts (ranging from 7B to 405B parameters), training approaches (including mixture-of-experts in Llama 4), and model capabilities. The article details training on 15 trillion tokens, multimodal processing, and various architectural innovations, demonstrating comprehensive technical disclosure beyond marketing materials.

2.3

Transparency report (takedowns, government requests, etc.)

Publishes a transparency report covering takedowns, government requests, enforcement stats, and/or safety incidents.

5/5
Comprehensive, recent

Evidence

Meta maintains a comprehensive Transparency Center with regulatory and other transparency reports at transparency.meta.com. The sources reference regular transparency reporting on government requests, content removal, and regulatory matters. The company's commitment to transparent disclosure of legal and regulatory interactions is documented and appears to be maintained within recent timeframes as part of Meta's official transparency infrastructure.

2.4

ToS training data use disclosure with opt-out

ToS explicitly states whether user inputs/outputs are used for training, with opt-out mechanism if applicable.

0/5
Vague or absent

Evidence

Meta's Terms of Service and privacy policy do not explicitly disclose whether user inputs and outputs are used for training, nor do they provide an opt-out mechanism for such use. The article states that Meta gathers data from AI interactions to train and refine safety protocols, but the sources do not show explicit Terms of Service language disclosing this use or offering a user opt-out. Privacy Progress documentation shows privacy commitments but no specific opt-out for training-data use.

2.5

Creator/artist content provenance disclosure

Discloses training data provenance specifically for creator/artist content (copyrighted or artist-created works).

0/5
None

Evidence

No public disclosure exists regarding Meta's use of creative works, artist content, or copyrighted material in its training data. The article does not reference any Meta disclosure of content types, sources, or licensing arrangements for creative works used in Llama model training. This is particularly notable given industry controversy over training data sourcing.

Human & Creator Impact

7/25
3.1

Artist/creator opt-out or removal mechanism

Documented artist/creator opt-out or removal mechanism. "We respect copyright" alone = 0.

0/5
None

Evidence

Meta does not operate a documented artist/creator opt-out or removal mechanism. While the company asserts commitment to copyright respect, the article and sources do not reference a functional opt-out form, removal process, or integration with tools like Spawning. No published evidence of honoring opt-out requests is available.


3.2

Public licensing or revenue-sharing with creators

Public licensing agreements or revenue-sharing partnerships with creators/publishers/media organizations.

0/5
None

Evidence

No public licensing deals or revenue-sharing arrangements with creators or artists are documented. The article mentions AI Studio for creators to develop custom characters and references 7% conversion improvements for businesses using AI tools, but these are product features rather than licensing partnerships or revenue-sharing agreements with named content creators or entities.


3.3

Provenance/attribution tooling for AI outputs

Provenance/attribution tooling for AI-generated outputs (C2PA, watermarking, metadata tagging).

0/5
None

Evidence

Meta does not provide production-active provenance or attribution tooling for AI outputs. While the company develops AI systems and products, there is no documented implementation of C2PA metadata, watermarking, or tooling like SynthID for provenance disclosure. No public commitment to a specific provenance standard is referenced in the available sources.


3.4

Workforce impact assessment or commitment

Published workforce impact assessment or commitment (labor market effects, reskilling, human-in-the-loop programs).

2/5
Statement only

Evidence

Meta has published statements regarding workforce impact considerations. The article references AI's role in content moderation, recommendation systems, and workforce productivity (noting 7% conversion increases for businesses), and Meta maintains People Practices documentation. However, no comprehensive workforce impact assessment or measurable outcomes are documented; these are primarily product statements rather than dedicated workforce impact programs.

3.5

Does NOT claim ownership over user-generated outputs

ToS does NOT claim copyright/exclusive ownership over user-generated outputs. Silent ToS = 0.

5/5
Full user ownership

Evidence

Meta's Terms of Service grant users full ownership or unrestricted rights to user-generated outputs. The article and sources do not indicate any claim of Meta ownership over user outputs; instead, Meta's business model centers on platform use and data collection for service improvement rather than on claiming output ownership. Industry-standard practice and Meta's emphasis on user content creation (evidenced by AI Studio and creator tools) indicate full user ownership of generated content.

Governance

11/25
4.1

Discloses corporate structure, investors, and board

Publicly discloses corporate structure, major investors, and board composition.

5/5
Both disclosed

Evidence

Meta's corporate structure, board members, and major investors are publicly disclosed. Mark Zuckerberg is publicly disclosed as Founder, Chairman, and CEO (verified at meta.com/media-gallery/executives/). The article details executive leadership including Andrew Bosworth (CTO), Yann LeCun (Chief AI Scientist), and other named officers. Major investors and the company's financial structure are disclosed through SEC filings and official media channels.

4.2

Independent ethics/safety advisory board

Independent ethics/safety advisory board with verifiably external members. Internal trust & safety team alone = 0.

2/5
Exists, unclear

Evidence

No formally independent ethics or safety advisory board with verifiably external members is clearly documented. The article references FAIR and internal safety teams led by named executives (Yann LeCun, Jérôme Pesenti), and mentions Meta's co-founding role in the Partnership on AI, but that is an industry collaborative body rather than an independent advisory board of Meta's own. No dedicated external ethics board with a published mandate or recommendations is documented.

4.3

Legal corporate structure preserving safety/mission

Corporate structure preserves safety/mission mandate via a legal mechanism (PBC, capped-profit, charter clause).

0/5
Standard structure

Evidence

Meta maintains a standard corporate structure as a public C-corporation (Meta Platforms, Inc.) with no verifiable legal mechanisms such as Benefit Corporation status or capped-profit structures. While the article notes Zuckerberg's majority voting control enabling long-term strategic focus, no special legal provisions preserving safety or mission through corporate filings are documented.

4.4

Public policy engagement or lobbying disclosure

Public policy engagement or lobbying disclosure: positions on AI regulation, lobbying spend, governance framework signatory.

2/5
Framework or positions

Evidence

Meta has published policy positions and signed onto frameworks. The article references Meta's co-founding role in the Partnership on Artificial Intelligence to Benefit People and Society in 2016, indicating framework participation. However, comprehensive public policy engagement disclosures, lobbying registrations, or detailed policy position statements are not substantively documented in the available sources.

4.5

No senior departures citing safety/ethics (last 36 months)

No publicly documented senior leadership (VP+) departures or whistleblower events citing safety/ethics concerns in the last 36 months. Only on-record statements count.

2/5
One departure

Evidence

Yann LeCun, Meta's Chief AI Scientist since 2013, announced plans in late 2025 to depart Meta and pursue independent research on 'world models,' as documented in multiple sources. While LeCun's departure appears motivated by research direction rather than explicit safety or ethics concerns, it represents one documented senior (VP+) departure. The sources describe it as a planned transition rather than a public on-record safety citation, placing this criterion at score 2 (one departure).

Scores are generated using the Amallo Ethics Rubric (Organisation v4) based on publicly verifiable information. Each criterion is scored against defined tiers — only exact tier values are valid. Evidence is sourced from official documentation, research papers, and independent analyses. Scores may change as new information becomes available.