Ethics Report: Z.ai
Rubric: Organisation v4 · Reviewed 3/22/2026
Little to no verifiable ethical commitment
Safety & Harm Reduction: 8/25

Dedicated safety / responsible-use policy
Publishes a dedicated safety/responsible-use policy that is publicly accessible.
Evidence
Z.ai has published 'Platform Rules' and 'Terms of Use' that establish safety boundaries and ethical guidelines for its GLM services. However, these are generic contractual terms administered through its international distributor rather than a dedicated safety/responsible-use policy page. The documentation mentions adherence to international frameworks (UNESCO, OECD) but lacks specific, enforceable prohibited-use terms or detailed safety commitments beyond standard API terms.
Public bug-bounty or red-team program
Operates or funds a public bug-bounty / red-team program. Internal-only programs score 0.
Evidence
No public bug-bounty program or red-team initiative is documented in the article or publicly associated with Z.ai. There is no mention of a HackerOne listing, public red-team call, or any formal security research program. The organization does not appear to have a documented vulnerability disclosure or community security testing program.
Sources
- z.ai
Published safety evaluation within last 24 months
Published safety evaluation/audit/model card within last 24 months with quantitative benchmarks on harmful outputs (bias, toxicity, hallucination, etc.).
Evidence
Z.ai's safety documentation includes mention of guardrails within the GLM-5 model to prevent harmful content generation, and the organization references alignment with international frameworks (UNESCO, OECD, NIST). However, no formal published safety evaluation report, model card with quantitative safety benchmarks, or third-party audit is documented. Independent evaluations mention technical performance on coding/math tasks but not comprehensive safety benchmarking.
Documented content-filtering / guardrails
Documents content-filtering/guardrails on production endpoints with user-facing documentation.
Evidence
The article states that Z.ai implements 'guardrails' within its model architecture (specifically mentioned for GLM-5) and references 'safety filters and output constraints to prevent the generation of prohibited or harmful content.' However, no detailed documentation is provided explaining what specific content is filtered, why, or how users can report false positives. This constitutes a mention of filters but lacks detailed documentation.
Documented incident-response process
Documented incident-response process for safety failures with a reporting mechanism. Generic "contact us" alone = 0.
Evidence
Z.ai's 'Additional Terms for API Services' and termination clauses indicate a reporting mechanism (users must comply with guidelines or face service termination). The organization 'asserts the right to terminate access' for policy violations. However, no dedicated abuse reporting form, security email, or documented response process with stated timelines or SLAs is mentioned. This constitutes a basic reporting mechanism without full SLA documentation.
Transparency & Trust: 4/25

Training data provenance disclosure
Publishes training data provenance disclosures identifying sources/types/datasets. "Publicly available data" alone = 0.
Evidence
The article describes Z.ai's research philosophy as emphasizing 'high-efficiency scaling laws' and 'sophisticated data curation' but provides no specific disclosure of training data sources. The organization focuses on 'autonomous agentic behavior' and 'synthetic data generation' to address 'data exhaustion,' but training data provenance beyond general categories is not disclosed. No specific datasets, sources, or composition details are provided.
Meaningful technical documentation for flagship model(s)
Publishes meaningful technical documentation (system card, tech report, research paper) for flagship model(s) regardless of whether weights are released.
Evidence
Z.ai has published basic technical information about its flagship models: GLM-4.5 has 355B total parameters with 32B active, and GLM-5-Turbo offers a 202.8K-token context window with a 131.1K-token maximum output. The article mentions a 'thinking mode' for reasoning and an architectural unification of reasoning, coding, and agentic capabilities. However, no substantive technical report covering training approach, limitations, or detailed architecture is documented; technical detail is limited to parameter counts and interface features.
Transparency report (takedowns, government requests, etc.)
Publishes a transparency report covering takedowns, government requests, enforcement stats, and/or safety incidents.
Evidence
No transparency report regarding government requests, content takedowns, or legal demands is mentioned in the article. The organization has not published any report, comprehensive or limited, covering the categories typical of transparency reports (government requests, DMCA takedowns, enforcement statistics, safety incidents).
Sources
- z.ai
ToS training data use disclosure with opt-out
ToS explicitly states whether user inputs/outputs are used for training, with opt-out mechanism if applicable.
Evidence
The article's discussion of Terms of Use focuses on user obligations and data protection responsibilities but does not explicitly disclose whether Z.ai uses user inputs for training its models. While 'Additional Terms for API Services' are mentioned for data protection, there is no explicit statement regarding training data use, and no opt-out mechanism or commitment to never use user data for training is documented.
Creator/artist content provenance disclosure
Discloses training data provenance specifically for creator/artist content (copyrighted or artist-created works).
Evidence
No disclosure regarding the use of creative or copyrighted content (artwork, music, literature) in Z.ai's training is provided. While the article mentions vision and video generation models (CogView-4, CogVideoX-3, Vidu series) and notes concerns from observers about disruption to creative workflows, there is no specific disclosure naming creative content types, sources, or licensing arrangements for such content in the models.
Human & Creator Impact: 0/25

Artist/creator opt-out or removal mechanism
Documented artist/creator opt-out or removal mechanism. "We respect copyright" alone = 0.
Evidence
No artist or creator opt-out mechanism, removal form, or process is documented. The article mentions that Z.ai's engagement with the creative community is 'defined by its usage policies' which prohibit certain content types, but no dedicated mechanism for creators to request removal of their work from training data or outputs is described. No evidence of honoring removal requests is provided.
Public licensing or revenue-sharing with creators
Public licensing agreements or revenue-sharing partnerships with creators/publishers/media organizations.
Evidence
No public licensing deals, partnerships, or revenue-sharing arrangements with creators or artists are documented. The article does not mention any announced licensing agreements with named creative entities or structured compensation programs for creators whose work may be used in training. No partnerships with artists' organizations or licensing bodies are disclosed.
Provenance/attribution tooling for AI outputs
Provenance/attribution tooling for AI-generated outputs (C2PA, watermarking, metadata tagging).
Evidence
No tooling for provenance or attribution of AI outputs is documented. The article does not mention any implementation or commitment to C2PA metadata, watermarking, SynthID, or other provenance standards. There is no public commitment to any specific provenance standard or production implementation for tracking AI-generated content origin.
Workforce impact assessment or commitment
Published workforce impact assessment or commitment (labor market effects, reskilling, human-in-the-loop programs).
Evidence
No workforce impact assessment or commitment regarding job displacement or labor market effects is documented. While the article discusses Z.ai's contribution to labor market redistribution in AI research and mentions that RLHF involves 'large, distributed workforces' for data labeling, there is no published statement naming a specific initiative, partner, or program to assess or mitigate workforce impact.
Does NOT claim ownership over user-generated outputs
ToS does NOT claim copyright/exclusive ownership over user-generated outputs. Silent ToS = 0.
Evidence
The article does not disclose Z.ai's Terms of Use position on output ownership. While data protection clauses are mentioned, there is no statement confirming that users retain full ownership of, or unrestricted rights to, their generated content. With the ToS effectively silent on the question, Z.ai may retain licensing rights over outputs, which counts as a potential ownership claim under this criterion.
Governance: 7/25

Discloses corporate structure, investors, and board
Publicly discloses corporate structure, major investors, and board composition.
Evidence
Z.ai has disclosed that its founding executive team includes Barret Zoph (principal, formerly OpenAI VP of Research), Luke Metz (co-founder, key researcher), and Liam Fedus (co-founder). However, the article explicitly states 'Z.ai has not publicly disclosed its full board of directors' and describes governance as following 'a traditional private corporate model' without naming additional board members. Major investors (Thrive Capital, SoftBank) are mentioned, but a comprehensive investor list is not; only founders are disclosed, not independent board members.
Independent ethics/safety advisory board
Independent ethics/safety advisory board with verifiably external members. Internal trust & safety team alone = 0.
Evidence
No independent ethics or safety advisory board is documented. The article describes Z.ai's governance structure as 'a flattened technical structure designed to facilitate rapid iteration' with no mention of an external advisory board. The safety governance section focuses on internal policies and contractual frameworks without reference to any independent body. No such advisory body with named members or published mandate is mentioned.
Legal corporate structure preserving safety/mission
Corporate structure preserves safety/mission mandate via a legal mechanism (PBC, capped-profit, charter clause).
Evidence
Z.ai is incorporated as Z Research Inc., described as a 'traditional C-corp/LLC-style private corporate model' with no special provisions mentioned. The article does not describe any verifiable legal mechanism such as Benefit Corporation status, capped-profit structure, or mission-preserving legal constraints. While safety governance is discussed, it is entirely contractual/policy-based rather than legally embedded in corporate structure.
Public policy engagement or lobbying disclosure
Public policy engagement or lobbying disclosure: positions on AI regulation, lobbying spend, governance framework signatory.
Evidence
No public policy engagement, signed frameworks, or lobbying disclosure is documented. The article notes that Z.ai's operational standards 'intersect with' international frameworks (UNESCO, OECD) and align with NIST/ISO standards, but this represents alignment with existing frameworks rather than Z.ai's active participation, signed commitments, or public policy positions. No disclosed lobbying activity or policy advocacy is mentioned.
No senior departures citing safety/ethics (last 36 months)
No publicly documented senior leadership (VP+) departures or whistleblower events citing safety/ethics concerns in the last 36 months. Only on-record statements count.
Evidence
No senior (VP+) departures citing safety or ethics concerns at Z.ai are documented in the article. The organization was founded in September 2024 when Zoph, Metz, and Fedus left OpenAI, but that was characterized as founders leaving to start their own company, not as departures citing safety concerns. Given the company's short history (founded late 2024), no departures of senior staff with on-record safety/ethics statements are mentioned.
Scores are generated using the Amallo Ethics Rubric (Organisation v4) based on publicly verifiable information. Each criterion is scored against defined tiers — only exact tier values are valid. Evidence is sourced from official documentation, research papers, and independent analyses. Scores may change as new information becomes available.
