Ethics Report: Black Forest Labs
Rubric: Organisation v4 · Reviewed 4/1/2026
Minimal ethical infrastructure
Safety & Harm Reduction (19/25)
Dedicated safety / responsible-use policy
Publishes a dedicated safety/responsible-use policy that is publicly accessible.
Evidence
Black Forest Labs maintains a dedicated Responsible AI Development Policy, published at bfl.ai/legal/responsible-ai-development-policy, with specific, enforceable terms. The organisation also publishes a Usage Policy at bfl.ai/legal/usage-policy that explicitly prohibits unlawful impersonation, biometric processing, military surveillance, and political campaigning/lobbying. These are comprehensive, dedicated policy pages rather than generic statements.
Public bug-bounty or red-team program
Operates or funds a public bug-bounty / red-team program. Internal-only programs score 0.
Evidence
Black Forest Labs has a documented red-team program with published results. The organisation partnered with the security firm Alice for adversarial red teaming, and a public case study is available. According to the case study, 'subject matter experts crafted hundreds of adversarial prompts to probe policy boundaries, with the resulting findings used to inform model retraining and safety tuning.' The case study is publicly documented on Alice's website and demonstrates completed red-team rounds with published findings.
Published safety evaluation within last 24 months
Published safety evaluation/audit/model card within last 24 months with quantitative benchmarks on harmful outputs (bias, toxicity, hallucination, etc.).
Evidence
Black Forest Labs has published safety-related information within technical reports and red-teaming documentation, but comprehensive quantitative safety benchmarks across multiple harm categories are not evident in the available sources. The article mentions red teaming focused on NCII and child safety risks, and collaboration with the Internet Watch Foundation on CSAM filtering, but does not cite comprehensive quantitative safety evaluation reports published in the last 24 months. The safety work described is real but limited in documented scope.
Documented content-filtering / guardrails
Documents content-filtering/guardrails on production endpoints with user-facing documentation.
Evidence
Black Forest Labs provides detailed, user-facing documentation of its content filtering and guardrails. The organisation employs moderation layers on commercial API models that filter user prompts, image uploads, and generated outputs to block unlawful content. For open-weight models, the Usage Policy specifies prohibited uses. The article states the lab 'utilizes proprietary filtering technology to remove other categories of unsafe content during the pre-training phase' and collaborates with the Internet Watch Foundation (IWF) for CSAM identification. This represents detailed documentation explaining what is filtered and why.
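For context on what such a layered moderation pipeline typically looks like, the sketch below is a minimal, hypothetical illustration in Python. The stage names and placeholder classifiers are assumptions for illustration only, not Black Forest Labs' actual implementation.

```python
# Hypothetical sketch of a layered moderation pipeline of the kind the
# documentation describes: prompts, uploaded images, and generated outputs
# are each screened, and any stage can block the request. The classifiers
# here are trivial placeholders, not a real filtering system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationResult:
    allowed: bool
    blocked_stage: Optional[str] = None

def prompt_is_unsafe(prompt: str) -> bool:
    # Placeholder: a production system would call a trained text classifier.
    return "forbidden" in prompt.lower()

def image_is_unsafe(image_bytes: bytes) -> bool:
    # Placeholder: e.g. perceptual hashing against known-bad image lists.
    return False

def moderate(prompt: str, upload: Optional[bytes], output: bytes) -> ModerationResult:
    if prompt_is_unsafe(prompt):
        return ModerationResult(False, "prompt")
    if upload is not None and image_is_unsafe(upload):
        return ModerationResult(False, "upload")
    if image_is_unsafe(output):
        return ModerationResult(False, "output")
    return ModerationResult(True)
```

The design point such a pipeline illustrates is that checks run both before generation (prompts, uploads) and after it (outputs), so a single bypassed layer does not defeat the whole system.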
Documented incident-response process
Documented incident-response process for safety failures with a reporting mechanism. Generic "contact us" alone = 0.
Evidence
Black Forest Labs publishes legal policies, including privacy and usage policies, at bfl.ai/legal/, which suggests some reporting infrastructure exists. However, the publicly available documentation does not detail a comprehensive incident-response process with documented SLAs or stated response timelines, and full SLA documentation is not evident in the sources provided.
Transparency & Trust (7/25)
Training data provenance disclosure
Publishes training data provenance disclosures identifying sources/types/datasets. "Publicly available data" alone = 0.
Evidence
Black Forest Labs has disclosed general categories of training data but not specific datasets or detailed filtering criteria. The article notes that 'Black Forest Labs has not publicly disclosed the specific datasets used to train the FLUX series,' which is a significant transparency limitation. While the company partners with the Internet Watch Foundation for CSAM filtering and employs proprietary filtering, the specific source datasets, composition, and curation details remain undisclosed, placing this at the 'general categories' level rather than specific datasets or full filtering disclosure.
Meaningful technical documentation for flagship model(s)
Publishes meaningful technical documentation (system card, tech report, research paper) for flagship model(s) regardless of whether weights are released.
Evidence
Black Forest Labs has published substantive technical documentation for its flagship FLUX models. The article describes detailed technical information including architecture (rectified flow transformer), parameter counts (12B for FLUX.1 [dev], 32B for FLUX.2), the 'flow matching' methodology as an alternative to traditional diffusion, distillation techniques, and specific technical innovations like the 'Self-Flow' mechanism. The lab has released research papers and technical reports covering architecture, scale, training approach, and capabilities/limitations. This meets the threshold for substantive technical documentation.
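As background for readers unfamiliar with the term: flow matching (of which rectified flow is a special case) trains a network to predict the velocity of a straight-line interpolation between noise and data, rather than iteratively denoising. A standard formulation of the objective, given here as general background and not as Black Forest Labs' exact training recipe, is:

$$\mathcal{L}(\theta) = \mathbb{E}_{t \sim \mathcal{U}[0,1],\; x_0 \sim \mathcal{N}(0,I),\; x_1 \sim p_{\mathrm{data}}}\left[\, \bigl\| v_\theta(x_t, t) - (x_1 - x_0) \bigr\|^2 \,\right], \qquad x_t = (1-t)\,x_0 + t\,x_1.$$

Sampling integrates the learned velocity field from noise toward data; the comparatively straight paths are part of what makes the step-count distillation mentioned in the documentation tractable.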
Transparency report (takedowns, government requests, etc.)
Publishes a transparency report covering takedowns, government requests, enforcement stats, and/or safety incidents.
Evidence
No transparency report regarding takedowns, government requests, or content removal disclosures is referenced in the provided sources or article content. While Black Forest Labs publishes policy documents and safety information, there is no evidence of a dedicated transparency report covering government requests, legal takedowns, or removal statistics in the style of reports published by platforms like Meta or Google.
ToS training data use disclosure with opt-out
ToS explicitly states whether user inputs/outputs are used for training, with opt-out mechanism if applicable.
Evidence
The article and sources do not show explicit language in Black Forest Labs' ToS stating whether user inputs or outputs are used for training, nor is there evidence of an opt-out mechanism or an explicit never-use commitment. While a privacy policy exists at bfl.ai/legal/privacy-policy, the sources do not provide specific language addressing training-data use or user opt-out rights for model-improvement purposes.
Creator/artist content provenance disclosure
Discloses training data provenance specifically for creator/artist content (copyrighted or artist-created works).
Evidence
Black Forest Labs has not published specific disclosure regarding creative or copyrighted content used in training. The article explicitly states 'the creative community has raised ongoing concerns regarding the lack of transparency in the company's training data' and notes 'Black Forest Labs has not publicly disclosed the specific datasets used to train the FLUX series.' While the company releases some open-weight models and has licensing partnerships with Adobe, Canva, and Meta, these do not constitute specific disclosure of content provenance for training data. No naming of specific content types, sources, or licensing arrangements for creative works is evident.
Human & Creator Impact (7/25)
Artist/creator opt-out or removal mechanism
Documented artist/creator opt-out or removal mechanism. "We respect copyright" alone = 0.
Evidence
No evidence of a published artist/creator opt-out or removal mechanism is documented in the sources. While Black Forest Labs emphasises transparency and has published policies, there is no documented opt-out form, Spawning integration, or email removal process for artists/creators to request their work be excluded from or removed from training data. The article notes concerns about copyright and lack of creator compensation mechanisms.
Public licensing or revenue-sharing with creators
Public licensing agreements or revenue-sharing partnerships with creators/publishers/media organizations.
Evidence
Black Forest Labs has announced multiple publicly named licensing partnerships with major creators and platforms. The organisation has secured a $140 million multi-year agreement with Meta Platforms for generative image technology, and has established partnerships with Adobe, Canva, and Snap (total contract value approximately $300 million). These constitute multiple publicly announced licensing deals with named entities, exceeding the threshold for this criterion.
Provenance/attribution tooling for AI outputs
Provenance/attribution tooling for AI-generated outputs (C2PA, watermarking, metadata tagging).
Evidence
No evidence is provided that Black Forest Labs has implemented production-grade provenance or attribution tooling (such as C2PA metadata, watermarking, or SynthID) on its outputs. While the company emphasises transparency in its research philosophy, there is no documented commitment to, or active implementation of, provenance standards for tracking or attributing AI-generated outputs.
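As background on what such tooling involves: C2PA-style provenance embeds a signed manifest in the output file that downstream tools can inspect. The sketch below is a hypothetical illustration using the Pillow imaging library, not any Black Forest Labs tooling; the marker strings it searches for are assumptions, and real C2PA verification would additionally validate a cryptographic signature chain.

```python
# Hypothetical sketch: scan an image file's format-level metadata for
# provenance markers. This only inspects metadata keys; it does not
# perform the signature validation a real C2PA verifier would do.
from PIL import Image

PROVENANCE_MARKERS = ("c2pa", "provenance", "contentcredentials")  # assumed names

def find_provenance_hints(path: str) -> dict:
    """Return metadata entries whose keys resemble provenance markers."""
    with Image.open(path) as img:
        return {
            key: value
            for key, value in img.info.items()  # PNG text chunks and similar
            if any(marker in str(key).lower() for marker in PROVENANCE_MARKERS)
        }

# Example usage (hypothetical file name):
# print(find_provenance_hints("flux_output.png"))
```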
Workforce impact assessment or commitment
Published workforce impact assessment or commitment (labor market effects, reskilling, human-in-the-loop programs).
Evidence
No workforce impact assessment or structured commitment addressing job displacement, retraining, or worker support is referenced in the available sources. While the article notes that some observers express concerns about 'economic stability of entry-level design roles' and that 'the organization claims its tools enhance rather than replace creative professionals,' there is no published report, initiative, partner programme, or measurable commitment from Black Forest Labs to address workforce transition or impact.
Does NOT claim ownership over user-generated outputs
ToS does NOT claim copyright/exclusive ownership over user-generated outputs. Silent ToS = 0.
Evidence
Black Forest Labs' Intellectual Property Policy (published at bfl.ai/legal/intellectual-property-policy) explicitly grants users full rights to any outputs they generate; the company claims no ownership, exclusive licences, or other restrictive rights over user-generated content. This meets the full user-ownership standard.
Governance (10/25)
Discloses corporate structure, investors, and board
Publicly discloses corporate structure, major investors, and board composition.
Evidence
Black Forest Labs publicly discloses both board members and investors. The founding team (Robin Rombach, Patrick Esser, Andreas Blattmann) serves as leadership. The company's investor base is publicly documented across multiple funding rounds, including a16z, NVIDIA, General Catalyst, Salesforce Ventures, Temasek, Northzone, Creandum, Earlybird VC, Bain Capital Ventures, Air Street Capital, Visionaries Club, Canva Ventures, and Figma Ventures. Both board/leadership and major investors are findable through official channels and public reporting.
Independent ethics/safety advisory board
Independent ethics/safety advisory board with verifiably external members. Internal trust & safety team alone = 0.
Evidence
No independent ethics or safety advisory board is referenced in the sources. The article mentions that 'the organization employs a dedicated Head of Responsible Development' and collaborates with external firms like Alice for red teaming and the Internet Watch Foundation for safety tools, but these external partnerships do not constitute an independent advisory board with published mandate, named external members, or published recommendations/reports from a formal governance body.
Legal corporate structure preserving safety/mission
Corporate structure preserves safety/mission mandate via a legal mechanism (PBC, capped-profit, charter clause).
Evidence
Black Forest Labs is a standard private company with no documented special legal structure preserving safety or mission. The article describes it as 'privately held' with standard venture capital funding, but provides no evidence of Benefit Corporation status, capped-profit structure, perpetual purpose trust, or other verifiable legal mechanism in corporate filings designed to preserve safety or mission alignment.
Public policy engagement or lobbying disclosure
Public policy engagement or lobbying disclosure: positions on AI regulation, lobbying spend, governance framework signatory.
Evidence
No evidence is provided of Black Forest Labs' public policy engagement, lobbying disclosure, or participation in AI governance frameworks. While the article mentions that the company 'has explicitly aligned its development and distribution strategies with the emerging regulatory standards of the European Union AI Act,' this represents positioning rather than documented public policy positions, framework participation, or lobbying disclosures.
No senior departures citing safety/ethics (last 36 months)
No publicly documented senior leadership (VP+) departures or whistleblower events citing safety/ethics concerns in the last 36 months. Only on-record statements count.
Evidence
No senior departures citing safety or ethics concerns are documented in the publicly available sources. The founding team remains in place (Robin Rombach as CEO, Patrick Esser, Andreas Blattmann), and no documented public record of VP+ level departures citing safety/ethics concerns exists in the provided sources. The company is recent (founded 2024) and has experienced rapid growth and funding, with no reported senior departures on record.
Scores are generated using the Amallo Ethics Rubric (Organisation v4) based on publicly verifiable information. Each criterion is scored against defined tiers — only exact tier values are valid. Evidence is sourced from official documentation, research papers, and independent analyses. Scores may change as new information becomes available.
