Ethics Report: DeepSeek
Rubric: Organisation v4 · Reviewed 3/22/2026
Little to no verifiable ethical commitment
Safety & Harm Reduction
6/25
Dedicated safety / responsible-use policy
Publishes a dedicated safety/responsible-use policy that is publicly accessible.
Evidence
DeepSeek's safety commitments stem from its participation in the Artificial Intelligence Safety Commitments (AISC) initiative, signed in December 2024 alongside 16 other Chinese companies. The article states the organization signed commitments requiring red-teaming exercises and transparency regarding model capabilities and limitations. However, no dedicated safety/responsible-use policy page with specific enforceable terms and defined prohibited uses is publicly documented — the commitments belong to a broader industry initiative rather than a standalone DeepSeek policy.
Sources
Public bug-bounty or red-team program
Operates or funds a public bug-bounty / red-team program. Internal-only programs score 0.
Evidence
The article makes no mention of a public bug-bounty or red-team program managed by DeepSeek. While the AISC initiative requires red-teaming, that is not DeepSeek's own program, and no public documentation of a dedicated red-team or bounty program exists in the available sources.
Published safety evaluation within last 24 months
Published safety evaluation/audit/model card within last 24 months with quantitative benchmarks on harmful outputs (bias, toxicity, hallucination, etc.).
Evidence
DeepSeek has released model performance data and benchmarks in technical documentation. The article reports internal testing claims (92.5% accuracy on ImageNet, 88.7% on SQuAD for R1; 88.5% on MMLU, 82.6% on HumanEval for V3). However, the article explicitly states 'independent third-party evaluations to validate these specific performance metrics are not yet widely available.' No comprehensive quantitative safety evaluation across multiple harm categories with third-party audit has been published.
Documented content-filtering / guardrails
Documents content-filtering/guardrails on production endpoints with user-facing documentation.
Evidence
The article documents that 'DeepSeek implements safety guardrails for its consumer-facing chat interfaces and developer APIs to prevent the generation of harmful or illegal content.' The text notes guardrails have been subject to external scrutiny and mentions an incident where an earlier version was susceptible to jailbreaking. The organization is described as committed to 'ongoing red-teaming' in response to incidents. However, no detailed documentation is provided explaining what is filtered, why, or how users can report false positives.
Sources
Documented incident-response process
Documented incident-response process for safety failures with a reporting mechanism. Generic "contact us" alone = 0.
Evidence
The article does not document any public incident-response process, dedicated reporting mechanism, or security email for users to report safety concerns. No SLA or response timeline is mentioned. The organization's engagement with safety issues appears reactive rather than governed by a documented process.
Transparency & Trust
4/25
Training data provenance disclosure
Publishes training data provenance disclosures identifying sources/types/datasets. "Publicly available data" alone = 0.
Evidence
The article states DeepSeek-V3 was 'trained on a corpus of 14.8 trillion tokens' and that the laboratory integrates 'advanced quantitative methods' and engages in 'data sharing' with industry partners. However, specific dataset names, sources, and detailed curation information are not disclosed, and the article notes that 'the specific identities of these corporate partners have not been publicly disclosed.' This amounts to general categories (token counts, web/quantitative data) without substantive provenance details.
Meaningful technical documentation for flagship model(s)
Publishes meaningful technical documentation (system card, tech report, research paper) for flagship model(s) regardless of whether weights are released.
Evidence
DeepSeek has published some technical details: parameter counts (671 billion for R1 and V3), training efficiency metrics (2.788 million H800 GPU hours for V3), and architectural innovations (MLA, MoE design). The article documents architectural contributions including 'Multi-head Latent Attention (MLA)' and 'Deep Expert Parallelism (DeepEP).' However, no single comprehensive technical report covers the full architecture, scale, training approach, and limitation disclosures in one substantive document.
Transparency report (takedowns, government requests, etc.)
Publishes a transparency report covering takedowns, government requests, enforcement stats, and/or safety incidents.
Evidence
No transparency report is mentioned in the article regarding takedowns, government requests, censorship incidents, or any similar disclosure. The article documents safety incidents (e.g., jailbreaking incidents) but not a formal transparency report covering government requests or content removal.
ToS training data use disclosure with opt-out
ToS explicitly states whether user inputs/outputs are used for training, with opt-out mechanism if applicable.
Evidence
The article does not provide information about DeepSeek's Terms of Service regarding training data use. No explicit statement is documented about whether user data is used for training, and no opt-out mechanism is mentioned. The ToS language regarding data use is not disclosed.
Creator/artist content provenance disclosure
Discloses training data provenance specifically for creator/artist content (copyrighted or artist-created works).
Evidence
The article discusses concerns that 'DeepSeek may have incorporated outputs from Western models, such as OpenAI's GPT-4, into its training data,' but no specific disclosure by DeepSeek about creative content, artist sources, or licensing arrangements is documented. No proactive disclosure regarding creative works or copyright sources is mentioned.
Human & Creator Impact
0/25
Artist/creator opt-out or removal mechanism
Documented artist/creator opt-out or removal mechanism. "We respect copyright" alone = 0.
Evidence
The article does not mention any artist or creator opt-out mechanism, removal process, or commitment to honor removal requests. No reference to Spawning integration, email removal process, or any documented evidence of honoring creator requests is provided.
Public licensing or revenue-sharing with creators
Public licensing agreements or revenue-sharing partnerships with creators/publishers/media organizations.
Evidence
The article mentions a partnership with BMW to 'integrate artificial intelligence technology from Chinese startup DeepSeek into its vehicle models sold in China,' but this is a licensing arrangement for DeepSeek's technology, not a licensing or revenue-sharing deal with creators or content providers. No public compensation partnerships with artists, creators, or content holders are documented.
Sources
Provenance/attribution tooling for AI outputs
Provenance/attribution tooling for AI-generated outputs (C2PA, watermarking, metadata tagging).
Evidence
The article does not mention any tooling, commitment, or implementation of provenance/attribution systems for AI outputs. No discussion of C2PA, watermarks, SynthID, or similar provenance tracking mechanisms is included.
Workforce impact assessment or commitment
Published workforce impact assessment or commitment (labor market effects, reskilling, human-in-the-loop programs).
Evidence
The article does not document any workforce impact assessment, report on labor effects, or specific commitment to addressing workforce displacement. While DeepSeek describes its own talent acquisition strategy and internal culture, no assessment of broader workforce impact is mentioned.
Does NOT claim ownership over user-generated outputs
ToS does NOT claim copyright/exclusive ownership over user-generated outputs. Silent ToS = 0.
Evidence
The article does not discuss DeepSeek's Terms of Service regarding ownership of user-generated outputs. No statement is provided about whether users retain full ownership or whether DeepSeek claims rights to outputs. This critical governance point is not addressed in available public documentation.
Governance
9/25
Discloses corporate structure, investors, and board
Publicly discloses corporate structure, major investors, and board composition.
Evidence
The article discloses that DeepSeek is 'founded and led by Liang Wenfeng' and provides details on his background and role. It also discloses that DeepSeek is 'fully funded by High-Flyer, an investment firm that managed approximately Rmb60 billion (US$8 billion) as of 2023,' with an initial $50 million investment. However, no board members beyond the founder are named, and no investors beyond High-Flyer are disclosed: the primary ownership structure is public, but not a comprehensive board roster or full investor list.
Independent ethics/safety advisory board
Independent ethics/safety advisory board with verifiably external members. Internal trust & safety team alone = 0.
Evidence
The article describes an internal trust and safety focus but makes no mention of an independent ethics or safety advisory board. The organization's engagement with the AISC represents participation in industry self-regulation, not an independent advisory body. No external board with independent members is documented.
Sources
Legal corporate structure preserving safety/mission
Corporate structure preserves safety/mission mandate via a legal mechanism (PBC, capped-profit, charter clause).
Evidence
The article states DeepSeek is 'a privately held organization' without describing any special legal structure. No mention is made of Benefit Corporation status, capped-profit structure, or other legal mechanisms to preserve safety/mission beyond standard corporate ownership by High-Flyer. No verifiable legal filings or special provisions are documented.
Public policy engagement or lobbying disclosure
Public policy engagement or lobbying disclosure: positions on AI regulation, lobbying spend, governance framework signatory.
Evidence
The article documents that 'DeepSeek joined 16 other Chinese companies in signing the Artificial Intelligence Safety Commitments (AISC)' in December 2024, which constitutes participation in a governance framework and a published policy position on AI safety. However, no lobbying disclosure, engagement with international policy bodies, or additional framework partnerships are documented. This is a single framework signature, not comprehensive policy engagement.
Sources
No senior departures citing safety/ethics (last 36 months)
No publicly documented senior leadership (VP+) departures or whistleblower events citing safety/ethics concerns in the last 36 months. Only on-record statements count.
Evidence
The article contains no public record of senior (VP+) departures citing safety or ethics concerns. While the article documents controversies regarding distillation practices and jailbreaking incidents, no senior departures with on-record safety/ethics citations are mentioned. The organization's founding team and leadership structure remain intact as documented.
Sources
Scores are generated using the Amallo Ethics Rubric (Organisation v4) based on publicly verifiable information. Each criterion is scored against defined tiers — only exact tier values are valid. Evidence is sourced from official documentation, research papers, and independent analyses. Scores may change as new information becomes available.
