Ethics Report: Meta
Rubric: Organisation v4 · Reviewed 3/22/2026
Minimal ethical infrastructure
Safety & Harm Reduction
11/25
Dedicated safety / responsible-use policy
Publishes a dedicated safety/responsible-use policy that is publicly accessible.
Evidence
Meta maintains a dedicated Responsible Use Guide for its Llama language models, published at ai.meta.com/static-resource/responsible-use-guide/. The guide provides specific, enforceable terms including responsible AI considerations, mitigation strategies, and system design principles. Additionally, Meta has published comprehensive responsible AI approach documentation at ai.meta.com/blog/responsible-ai-connect-2024/, detailing safety policies for generative AI products integrated across WhatsApp, Messenger, and Instagram.
Public bug-bounty or red-team program
Operates or funds a public bug-bounty / red-team program. Internal-only programs score 0.
Evidence
Meta does not operate a public bug-bounty or red-team program with published results. While the company maintains security practices and transparency initiatives through its Transparency Center, no dedicated public bug-bounty listing or red-team call with published findings is documented in the available sources. The article mentions FAIR's open science approach but does not reference a formal public red-teaming program.
Published safety evaluation within last 24 months
Published safety evaluation/audit/model card within last 24 months with quantitative benchmarks on harmful outputs (bias, toxicity, hallucination, etc.).
Evidence
Meta has published limited safety evaluation materials. The Responsible Use Guide and Connect 2024 safety documentation provide some safety information, but these do not constitute comprehensive quantitative safety benchmarks across multiple harm categories. The sources mention safety considerations and approach but lack detailed quantitative safety evaluation metrics comparable to formal model cards with extensive safety benchmarking.
Documented content-filtering / guardrails
Documents content-filtering/guardrails on production endpoints with user-facing documentation.
Evidence
Meta documents content filtering and guardrails in its safety and responsible use documentation. The article states that Meta uses AI to manage content moderation across platforms, identifying and removing prohibited material in over 100 languages, including hate speech and misinformation. However, detailed documentation explaining what is filtered, why, and how users can report false positives is not comprehensively provided in the publicly available sources.
Documented incident-response process
Documented incident-response process for safety failures with a reporting mechanism. Generic "contact us" alone = 0.
Evidence
No public documentation of a dedicated incident-response process with defined timelines or SLAs is available. While Meta maintains safety approaches and reports, the sources do not reference a documented incident-response process specifically for safety incidents with stated response timelines or service level agreements.
Transparency & Trust
12/25
Training data provenance disclosure
Publishes training data provenance disclosures identifying sources/types/datasets. "Publicly available data" alone = 0.
Evidence
Meta discloses general categories of training data sources for its Llama models. The article mentions that Llama models are trained on publicly available data and utilize open-weight approaches, and sources reference training on up to 15 trillion tokens. However, specific dataset names, detailed source identification, or meaningful composition/curation details are not substantively disclosed in the available public sources.
Meaningful technical documentation for flagship model(s)
Publishes meaningful technical documentation (system card, tech report, research paper) for flagship model(s) regardless of whether weights are released.
Evidence
Meta provides substantive technical documentation for its flagship Llama models. The ai.meta.com resources page and research publications document architecture details, parameter counts (ranging from 7B to 405B parameters), training approaches (including mixture-of-experts in Llama 4), and model capabilities. The article details training on 15 trillion tokens, multimodal processing, and various architectural innovations, demonstrating comprehensive technical disclosure beyond marketing materials.
Transparency report (takedowns, government requests, etc.)
Publishes a transparency report covering takedowns, government requests, enforcement stats, and/or safety incidents.
Evidence
Meta maintains a comprehensive Transparency Center with regulatory and other transparency reports at transparency.meta.com. The sources reference regular transparency reporting on government requests, content removal, and regulatory matters. The company's commitment to transparent disclosure of legal and regulatory interactions is documented and appears to be maintained within recent timeframes as part of Meta's official transparency infrastructure.
ToS training data use disclosure with opt-out
ToS explicitly states whether user inputs/outputs are used for training, with opt-out mechanism if applicable.
Evidence
Meta's Terms of Service and privacy policy do not explicitly disclose whether user inputs and outputs are used for training, nor do they provide an opt-out mechanism. The article states that Meta gathers data from AI interactions to train and refine safety protocols, but the sources show no Terms of Service language disclosing this use or offering users a way to opt out. Privacy Progress documentation demonstrates general privacy commitments but no specific opt-out for training data use.
Creator/artist content provenance disclosure
Discloses training data provenance specifically for creator/artist content (copyrighted or artist-created works).
Evidence
No public disclosure exists regarding Meta's use of creative works, artist content, or copyrighted material in its training data. The article does not reference any Meta disclosure of content types, sources, or licensing arrangements for creative works used in Llama model training. This is particularly notable given industry controversy over training data sourcing.
Human & Creator Impact
7/25
Artist/creator opt-out or removal mechanism
Documented artist/creator opt-out or removal mechanism. "We respect copyright" alone = 0.
Evidence
Meta does not operate a documented artist/creator opt-out or removal mechanism. While the company asserts commitment to copyright respect, the article and sources do not reference a functional opt-out form, removal process, or integration with tools like Spawning. No published evidence of honoring opt-out requests is available.
Public licensing or revenue-sharing with creators
Public licensing agreements or revenue-sharing partnerships with creators/publishers/media organizations.
Evidence
No public licensing deals or revenue-sharing arrangements with creators or artists are documented. The article mentions AI Studio for creators to develop custom characters and references 7% conversion improvements for businesses using AI tools, but these are product features rather than licensing partnerships or revenue-sharing agreements with named content creators or entities.
Provenance/attribution tooling for AI outputs
Provenance/attribution tooling for AI-generated outputs (C2PA, watermarking, metadata tagging).
Evidence
Meta does not provide production-active provenance or attribution tooling for AI outputs. While the company develops AI systems and products, there is no documented implementation of C2PA metadata, watermarking, or tooling like SynthID for provenance disclosure. No public commitment to a specific provenance standard is referenced in the available sources.
Workforce impact assessment or commitment
Published workforce impact assessment or commitment (labor market effects, reskilling, human-in-the-loop programs).
Evidence
Meta has published statements regarding workforce impact considerations. The article references AI's role in content moderation, recommendation systems, and workforce productivity (noting 7% conversion increases for businesses), and Meta maintains People Practices documentation. However, no comprehensive workforce impact assessment or measurable outcomes are documented; these are primarily product statements rather than dedicated workforce impact programs.
Does NOT claim ownership over user-generated outputs
ToS does NOT claim copyright/exclusive ownership over user-generated outputs. Silent ToS = 0.
Evidence
Meta's Terms of Service do not claim copyright or exclusive ownership over user-generated outputs. The article and sources contain no indication that Meta asserts ownership of user outputs; the company's business model centers on platform use and data collection for service improvement rather than on claiming output ownership. Industry-standard practice and Meta's emphasis on user content creation (evidenced by AI Studio and creator tools) indicate that generated content remains with the user.
Governance
11/25
Discloses corporate structure, investors, and board
Publicly discloses corporate structure, major investors, and board composition.
Evidence
Meta's corporate structure, board members, and major investors are publicly disclosed. Mark Zuckerberg is publicly disclosed as Founder, Chairman, and CEO (verified at meta.com/media-gallery/executives/). The article details executive leadership including Andrew Bosworth (CTO), Yann LeCun (Chief AI Scientist), and other named officers. Major investments and financial structure are disclosed through SEC filings and official media channels.
Independent ethics/safety advisory board
Independent ethics/safety advisory board with verifiably external members. Internal trust & safety team alone = 0.
Evidence
Meta does not operate a formally independent ethics or safety advisory board. The article references FAIR and internal safety teams led by named executives (Yann LeCun, Jérôme Pesenti), and mentions Meta's co-founding role in the Partnership on AI, but this is an industry collaborative body, not Meta's own independent advisory board. No dedicated external ethics board with published mandate or recommendations is documented.
Legal corporate structure preserving safety/mission
Corporate structure preserves safety/mission mandate via a legal mechanism (PBC, capped-profit, charter clause).
Evidence
Meta maintains a standard corporate structure as a public C-corporation (Meta Platforms, Inc.) with no verifiable legal mechanisms such as Benefit Corporation status or capped-profit structures. While the article notes Zuckerberg's majority voting control enabling long-term strategic focus, no special legal provisions preserving safety or mission through corporate filings are documented.
Public policy engagement or lobbying disclosure
Public policy engagement or lobbying disclosure: positions on AI regulation, lobbying spend, governance framework signatory.
Evidence
Meta has published policy positions and signed onto frameworks. The article references Meta's co-founding role in the Partnership on Artificial Intelligence to Benefit People and Society in 2016, indicating framework participation. However, comprehensive public policy engagement disclosures, lobbying registrations, or detailed policy position statements are not substantively documented in the available sources.
No senior departures citing safety/ethics (last 36 months)
No publicly documented senior leadership (VP+) departures or whistleblower events citing safety/ethics concerns in the last 36 months. Only on-record statements count.
Evidence
Yann LeCun, Meta's Chief AI Scientist since 2013, announced plans in late 2025 to leave Meta and pursue independent research on 'world models,' as documented in multiple sources. While the departure appears motivated by research direction rather than explicit safety or ethics concerns, it is one documented senior (VP+) departure. The sources note it was announced as a planned transition rather than a public on-record safety citation, placing this criterion at score 2 (one departure).
Scores are generated using the Amallo Ethics Rubric (Organisation v4) based on publicly verifiable information. Each criterion is scored against defined tiers — only exact tier values are valid. Evidence is sourced from official documentation, research papers, and independent analyses. Scores may change as new information becomes available.
