Ethics Report: Google
Rubric: Organisation v4 · Reviewed 3/22/2026
Some effort, but significant gaps
Safety & Harm Reduction
17/25
Dedicated safety / responsible-use policy
Publishes a dedicated safety/responsible-use policy that is publicly accessible.
Evidence
Google published comprehensive AI Principles in June 2018 and maintains detailed safety and ethics policies. The company has a dedicated Secure AI Framework (SAIF) with six core elements addressing security, risk management, and privacy. The framework includes specific prohibitions on weapons, surveillance violating international norms, and technology causing overall harm. Google's policies explicitly define prohibited uses and are enforceable through internal governance structures.
Public bug-bounty or red-team program
Operates or funds a public bug-bounty / red-team program. Internal-only programs score 0.
Evidence
Google operates a documented red-teaming program for AI safety testing. The company published 'How Google Does It: Building an effective AI red team' detailing red-teaming methodologies. Google employs safety experts who simulate adversarial attacks to identify model vulnerabilities. The company also conducts socio-technical evaluations and uses digital watermarking (SynthID) to identify AI-generated content, with results published in their responsible AI documentation.
Published safety evaluation within last 24 months
Published safety evaluation/audit/model card within last 24 months with quantitative benchmarks on harmful outputs (bias, toxicity, hallucination, etc.).
Evidence
Google publishes safety evaluations through its Responsible Generative AI Toolkit with quantitative safety benchmarks. The company conducts model evaluations addressing multiple harm categories. However, evidence of comprehensive third-party independent audits of Gemini models within the last 24 months is not explicitly documented in the provided sources. Google's own evaluations are published but third-party audit verification is unclear.
Documented content-filtering / guardrails
Documents content-filtering/guardrails on production endpoints with user-facing documentation.
Evidence
Google mentions content filtering and guardrails in its documentation, but detailed explanation is limited. The Gemini image generation feature was paused in February 2024 after producing historically inaccurate images caused by overly aggressive safety tuning for diversity. Google acknowledged the guardrails issue, but detailed documentation explaining which filters exist, why they exist, and how users can report false positives is not comprehensively provided in the available sources.
Documented incident-response process
Documented incident-response process for safety failures with a reporting mechanism. Generic "contact us" alone = 0.
Evidence
Google has a dedicated abuse reporting mechanism and security protocols through its support systems. The company operates internal committees reviewing products for compliance with AI Principles. However, publicly documented incident-response processes with stated Service Level Agreements (SLAs) or response timelines are not explicitly detailed in the provided sources, meeting the 'Reporting exists' but not 'Full SLA' standard.
Transparency & Trust
11/25
Training data provenance disclosure
Publishes training data provenance disclosures identifying sources/types/datasets. "Publicly available data" alone = 0.
Evidence
Google discloses that its Gemini models are trained on 'web data, books, and code' in general terms. The company provides basic categorical information about training data sources but does not publish specific dataset names, detailed curation information, or filtering/exclusion criteria beyond general categories. This meets the 'General categories' standard but lacks the specificity required for higher scores.
Meaningful technical documentation for flagship model(s)
Publishes meaningful technical documentation (system card, tech report, research paper) for flagship model(s) regardless of whether weights are released.
Evidence
Google publishes substantive technical documentation for Gemini through the Google AI Developer documentation site (ai.google.dev). This includes architecture details, model capabilities, training approaches, and documented limitations. The company provides comprehensive responsible AI guidelines covering model evaluation, safety testing methodologies, and technical specifications demonstrating significant technical depth beyond marketing materials.
Transparency report (takedowns, government requests, etc.)
Publishes a transparency report covering takedowns, government requests, enforcement stats, and/or safety incidents.
Evidence
Google publishes transparency reports on government requests and content takedowns; however, the provided sources offer no evidence of a comprehensive report dated within the last 24 months. The company has published such reports historically, but the currency and comprehensiveness of recent filings are not documented in the provided sources, limiting the score to 'Limited or outdated.'
ToS training data use disclosure with opt-out
ToS explicitly states whether user inputs/outputs are used for training, with opt-out mechanism if applicable.
Evidence
Google's Terms of Service include explicit statements about data collection and AI training use, but no comprehensive opt-out mechanism is documented: users cannot opt out of training-data usage across all Google services. The company offers some privacy controls (Incognito mode, Location History settings), but these have proven ineffective at preventing tracking, as documented in privacy disputes. Explicit language exists, but the opt-out mechanism does not meet the standard.
Creator/artist content provenance disclosure
Discloses training data provenance specifically for creator/artist content (copyrighted or artist-created works).
Evidence
Google does not provide specific disclosure regarding creative or copyrighted content used in training Gemini models. The provided sources mention general 'web data, books, and code,' but there is no specific naming of content types, sources, or licensing arrangements, nor any acknowledgment of creative works used. The sources contain no evidence of even a general acknowledgment regarding artist or creator content.
Human & Creator Impact
8/25
Artist/creator opt-out or removal mechanism
Documented artist/creator opt-out or removal mechanism. "We respect copyright" alone = 0.
Evidence
No documented creator or artist opt-out or removal mechanism is described in the provided sources. While Google has faced criticism from creators and has been involved in various licensing disputes, there is no evidence of a formal opt-out process (such as Spawning integration or email removal procedures) specifically for training data removal or artist content exclusion from AI models.
Public licensing or revenue-sharing with creators
Public licensing agreements or revenue-sharing partnerships with creators/publishers/media organizations.
Evidence
Google has announced partnerships with news organizations covering content and revenue sharing on its news platforms, and YouTube has revenue-sharing arrangements with creators. However, there is limited evidence of licensing deals or structured compensation arrangements specifically for AI training data or creative-content rights. YouTube's creator monetization is documented, meeting the 'at least one deal' standard but not the threshold of multiple arrangements.
Provenance/attribution tooling for AI outputs
Provenance/attribution tooling for AI-generated outputs (C2PA, watermarking, metadata tagging).
Evidence
Google has committed to provenance tooling through SynthID, its digital watermarking system for identifying AI-generated content. The company has made public commitments to this specific standard. However, evidence of active production implementation across all Google outputs (such as visible watermarking or metadata on Gemini-generated content) is not comprehensively documented in the provided sources, limiting scoring to 'Committed' rather than 'Production active.'
Workforce impact assessment or commitment
Published workforce impact assessment or commitment (labor market effects, reskilling, human-in-the-loop programs).
Evidence
Google has published statements regarding workforce impact through its 'Grow with Google' and career development initiatives. The company operates 'Applied Digital Skills' programs and Career Certificates in high-growth fields. However, these are workforce development initiatives rather than formal impact assessments of AI on employment displacement or worker transition. A specific statement naming initiatives exists, but measurable workforce impact assessment data is not documented.
Does NOT claim ownership over user-generated outputs
ToS does NOT claim copyright/exclusive ownership over user-generated outputs. Silent ToS = 0.
Evidence
Google's Terms of Service grant users broad usage rights to content they generate using Google services, but the company retains licenses for various purposes including improving services and AI training. Users do not receive full unrestricted ownership. The ToS does not claim exclusive ownership but retains significant license rights, meeting the 'Broad license retained' standard rather than full user ownership.
Governance
14/25
Discloses corporate structure, investors, and board
Publicly discloses corporate structure, major investors, and board composition.
Evidence
Google's corporate structure is fully disclosed. The company operates as a subsidiary of Alphabet Inc., with publicly disclosed board members and executive leadership. The Alphabet board composition is available through SEC filings. Major investors and shareholding structure are publicly documented, including the dual-class share system concentrating voting power with founders Larry Page and Sergey Brin. Both board and investor information are readily accessible through official channels.
Independent ethics/safety advisory board
Independent ethics/safety advisory board with verifiably external members. Internal trust & safety team alone = 0.
Evidence
Google has internal safety and trust teams, but the independence and structure of external ethics oversight is unclear. The company's handling of its Ethical AI team—with the departures of Timnit Gebru and Margaret Mitchell over research restrictions—raises questions about the independence of its ethics governance. While Google references ethics committees reviewing products, these appear to be internal structures without clear external independence, limiting the score to 'Exists, unclear.'
Legal corporate structure preserving safety/mission
Corporate structure preserves safety/mission mandate via a legal mechanism (PBC, capped-profit, charter clause).
Evidence
Google operates as a standard public corporation (C-corp) subsidiary of the publicly traded Alphabet Inc. There are no documented special legal mechanisms, such as Benefit Corporation status, capped-profit structures, or charter clauses, designed to preserve a safety or mission mandate in corporate governance. No verifiable legal protections for mission or safety exist beyond conventional fiduciary duties.
Public policy engagement or lobbying disclosure
Public policy engagement or lobbying disclosure: positions on AI regulation, lobbying spend, governance framework signatory.
Evidence
Google actively engages in public policy and lobbying. The company has published policy positions on artificial intelligence, signed onto multiple frameworks including AI safety initiatives, and maintains active engagement with regulators. Google discloses its involvement in policy discussions and has testified before Congress on AI regulation. The company participates in standards-setting organizations and has disclosed AI-related policy advocacy.
No senior departures citing safety/ethics (last 36 months)
No publicly documented senior leadership (VP+) departures or whistleblower events citing safety/ethics concerns in the last 36 months. Only on-record statements count.
Evidence
Timnit Gebru, co-lead of Google's Ethical AI team, departed in December 2020, stating she was terminated following a disagreement over a research paper on the environmental impact and bias of large language models. This represents one documented senior departure publicly citing safety/ethics concerns. Margaret Mitchell was subsequently terminated, but her departure centered on alleged security violations rather than an on-record statement of ethics concerns. This meets the 'One departure' standard.
Scores are generated using the Amallo Ethics Rubric (Organisation v4) based on publicly verifiable information. Each criterion is scored against defined tiers — only exact tier values are valid. Evidence is sourced from official documentation, research papers, and independent analyses. Scores may change as new information becomes available.
