Ethics Report: Moonshot AI

Rubric: Organisation v4 · Reviewed 3/22/2026

19/100
Critical

Little to no verifiable ethical commitment

Safety & Harm Reduction

4/25
1.1

Dedicated safety / responsible-use policy

Publishes a dedicated safety/responsible-use policy that is publicly accessible.

0/5
None

Evidence

No dedicated safety or responsible-use policy is publicly documented on Moonshot AI's website or official platforms. The article mentions the company operates within Chinese regulatory frameworks (CAC, 2023 Interim Measures) but does not indicate a public, dedicated policy page with specific enforceable terms and defined prohibited uses.

1.2

Public bug-bounty or red-team program

Operates or funds a public bug-bounty / red-team program. Internal-only programs score 0.

0/5
None

Evidence

No public bug-bounty or red-team program is documented. A July 2025 red-team assessment of Kimi K2 by security firm SplxAI was an independent third-party evaluation, not a company-operated public program with results published by the organization itself.

1.3

Published safety evaluation within last 24 months

Published safety evaluation/audit/model card within last 24 months with quantitative benchmarks on harmful outputs (bias, toxicity, hallucination, etc.).

2/5
Limited

Evidence

Safety evaluation documentation is limited. The article references an independent red-team assessment by SplxAI (July 2025) that reported quantitative metrics (1.55% security and 4.47% safety for the raw model; 59.52% security and 82.70% safety with hardened configurations). However, these figures were produced by a third party; Moonshot AI itself has not published comprehensive internal benchmarks.
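
For context, the gap between the raw and hardened configurations reported by SplxAI can be expressed in percentage points (treating the figures as pass rates is an assumption; the report's exact metric definitions are not documented here):

```python
# SplxAI-reported scores for Kimi K2, in percent (as cited in the evidence above).
raw = {"security": 1.55, "safety": 4.47}
hardened = {"security": 59.52, "safety": 82.70}

# Improvement from hardening, in percentage points.
delta = {k: round(hardened[k] - raw[k], 2) for k in raw}
print(delta)  # {'security': 57.97, 'safety': 78.23}
```

The size of these deltas is why the evidence above distinguishes the raw model from hardened configurations: almost all of the reported safety performance comes from the added guardrails, not the base model.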

1.4

Documented content-filtering / guardrails

Documents content-filtering/guardrails on production endpoints with user-facing documentation.

2/5
Mentioned

Evidence

Content filtering is mentioned but not detailed. The article states that Moonshot AI uses 'hardened configurations involving behavioral anchors and content filters' to improve safety, and that the company operates within CAC regulatory frameworks requiring content safety alignment. However, no detailed documentation explaining what is filtered, why, or how users can report false positives is publicly available.

1.5

Documented incident-response process

Documented incident-response process for safety failures with a reporting mechanism. Generic "contact us" alone = 0.

0/5
None

Evidence

No public incident-response process with SLAs or timelines is documented. While the article mentions privacy policies exist and general operational governance, there is no evidence of a dedicated security reporting mechanism or documented incident response procedure with defined response timelines.

Transparency & Trust

6/25
2.1

Training data provenance disclosure

Publishes training data provenance disclosures identifying sources/types/datasets. "Publicly available data" alone = 0.

2/5
General categories

Evidence

General categories of training data are disclosed. The article states that Kimi K2.5 is 'a native multimodal model pretrained on 15 trillion mixed visual and text tokens' and that the company mentions using web data and other sources. However, no specific dataset names, sources, composition details, or filtering/exclusion criteria are disclosed beyond these generic descriptions.

2.2

Meaningful technical documentation for flagship model(s)

Publishes meaningful technical documentation (system card, tech report, research paper) for flagship model(s) regardless of whether weights are released.

2/5
Basic

Evidence

Basic technical documentation exists. The article provides information on architecture (Mixture-of-Experts with 1 trillion total parameters, 32 billion activated for K2; multimodal for K2.5), context window capabilities (256K to 2M tokens), and operational modes. However, no substantive technical report covering training approach, data curation, or detailed limitation disclosures is referenced or publicly available.
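
The sparsity implied by the cited Mixture-of-Experts figures can be illustrated with a quick calculation (the per-token interpretation of "activated" parameters is an assumption based on how MoE models are typically described):

```python
# Parameter counts for Kimi K2 as cited in the evidence above.
total_params = 1_000_000_000_000  # 1 trillion total parameters
active_params = 32_000_000_000    # 32 billion activated parameters

# Fraction of the model exercised on any given forward pass.
active_fraction = active_params / total_params
print(f"{active_fraction:.1%} of parameters active")  # 3.2% of parameters active
```

This ratio is the main architectural claim the documentation makes; what remains undocumented, per the evidence above, is how the model was trained and what data went into it.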

2.3

Transparency report (takedowns, government requests, etc.)

Publishes a transparency report covering takedowns, government requests, enforcement stats, and/or safety incidents.

0/5
None

Evidence

No transparency report covering takedowns, government requests, or content removal disclosures exists. The article does not mention any public transparency report from Moonshot AI regarding government requests, user data disclosures, or content moderation statistics.

2.4

ToS training data use disclosure with opt-out

ToS explicitly states whether user inputs/outputs are used for training, with opt-out mechanism if applicable.

2/5
Explicit, no opt-out

Evidence

The privacy policy explicitly discloses training-data use but offers no opt-out. The article states: 'For users of the Kimi OpenPlatform... the company states it collects prompts, images, and files to optimize its models and understand user preferences' and 'while user content is used for model training.' The disclosure is explicit, but no opt-out mechanism is documented.

2.5

Creator/artist content provenance disclosure

Discloses training data provenance specifically for creator/artist content (copyrighted or artist-created works).

0/5
None

Evidence

No specific disclosure regarding creative or copyrighted content provenance. The article does not document any public disclosure naming content types, sources, or licensing arrangements for creative works used in training. The company's open-weight model release strategy is mentioned but without specific attribution to creative content sources.

Human & Creator Impact

2/25
3.1

Artist/creator opt-out or removal mechanism

Documented artist/creator opt-out or removal mechanism. "We respect copyright" alone = 0.

0/5
None

Evidence

No artist/creator opt-out or removal mechanism is publicly documented. The article does not reference any opt-out form, Spawning integration, email removal process, or evidence of honoring creator removal requests. While open-weight model release is mentioned as supporting safety transparency, no specific mechanism for artists to request content removal exists.

3.2

Public licensing or revenue-sharing with creators

Public licensing agreements or revenue-sharing partnerships with creators/publishers/media organizations.

0/5
None

Evidence

No public licensing or revenue-sharing arrangements with creators are mentioned. The article documents partnerships with technology firms (Perplexity, Tencent, Cursor, Vercel) but does not mention any announced licensing deals or compensation programs with artists, creators, or content publishers.

3.3

Provenance/attribution tooling for AI outputs

Provenance/attribution tooling for AI-generated outputs (C2PA, watermarking, metadata tagging).

0/5
None

Evidence

No provenance or attribution tooling for AI outputs is documented. The article does not mention any production implementation of C2PA metadata, watermarks, SynthID, or other attribution standards for outputs generated by Kimi models.

3.4

Workforce impact assessment or commitment

Published workforce impact assessment or commitment (labor market effects, reskilling, human-in-the-loop programs).

0/5
None

Evidence

No workforce impact assessment or commitment is documented. The article mentions that Moonshot AI's agentic AI tools may impact white-collar labor (programmers, analysts, service reps) but the company itself has not published any assessment, commitment, or program addressing workforce transition or impact mitigation.

3.5

Does NOT claim ownership over user-generated outputs

ToS does NOT claim copyright/exclusive ownership over user-generated outputs. Silent ToS = 0.

2/5
Broad license retained

Evidence

The ToS appears to leave ownership with users but lacks explicit language. The article does not provide the full ToS text, and no evidence suggests Moonshot AI claims exclusive ownership of user outputs. The documented freemium and API models suggest users retain generated content, but explicit ToS language confirming unrestricted user ownership of outputs is not publicly detailed.

Governance

7/25
4.1

Discloses corporate structure, investors, and board

Publicly discloses corporate structure, major investors, and board composition.

2/5
One disclosed

Evidence

Investors are publicly disclosed but board members are not comprehensively listed. The article identifies key investors (Alibaba, Tencent, Meituan, Xiaohongshu, IDG Capital, HongShan) and founder Yang Zhilin plus co-founders Zhou Xinyu and Wu Yuxin. However, no complete board of directors is publicly documented or findable through official channels.

4.2

Independent ethics/safety advisory board

Independent ethics/safety advisory board with verifiably external members. Internal trust & safety team alone = 0.

0/5
None

Evidence

No independent ethics or safety advisory board is documented. The article references internal trust & safety work and regulatory compliance with CAC frameworks, but does not mention any formal independent or external ethics/safety board with named members or published recommendations.

4.3

Legal corporate structure preserving safety/mission

Corporate structure preserves safety/mission mandate via a legal mechanism (PBC, capped-profit, charter clause).

0/5
Standard structure

Evidence

No verifiable legal mechanism preserving safety or mission is documented. The article describes Moonshot AI as a conventional venture-funded startup with a standard corporate structure. There is no evidence of Benefit Corporation status, a capped-profit structure, or any other legal mechanism subordinating profit to safety or mission.

4.4

Public policy engagement or lobbying disclosure

Public policy engagement or lobbying disclosure: positions on AI regulation, lobbying spend, governance framework signatory.

0/5
None

Evidence

No public policy engagement, lobbying disclosure, or framework participation is documented. The article mentions regulatory compliance with Chinese CAC frameworks but does not reference any public policy positions, framework signings (e.g., AI governance frameworks), or disclosed lobbying activities.

4.5

No senior departures citing safety/ethics (last 36 months)

No publicly documented senior leadership (VP+) departures or whistleblower events citing safety/ethics concerns in the last 36 months. Only on-record statements count.

5/5
Clean record

Evidence

No senior departures citing safety/ethics concerns are documented on public record. The article covers founder Yang Zhilin's transition from previous ventures and mentions legal disputes with former shareholders over share ownership, but does not cite any VP-level or senior departures publicly attributable to safety or ethics concerns.

Scores are generated using the Amallo Ethics Rubric (Organisation v4) based on publicly verifiable information. Each criterion is scored against defined tiers — only exact tier values are valid. Evidence is sourced from official documentation, research papers, and independent analyses. Scores may change as new information becomes available.
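
The overall score is consistent with a simple sum of the four section scores reported above (assuming the rubric aggregates by addition, which this page does not state explicitly):

```python
# Section scores from this review, each out of 25.
section_scores = {
    "Safety & Harm Reduction": 4,
    "Transparency & Trust": 6,
    "Human & Creator Impact": 2,
    "Governance": 7,
}

# Overall score out of 100 (4 sections x 25 points).
overall = sum(section_scores.values())
print(f"{overall}/100")  # 19/100
```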