
Why We're Removing Image, Video, and Audio Generation from amallo — For Now

March 3, 2026

As of today, March 3rd, 2026, our AI image, video, and audio generation apps have been removed from the platform. We want to be upfront about what changed, why we made this call, and where we're headed from here.

We did not make this decision lightly. It came after significant research, serious internal debate, and a genuine reckoning with whether we could offer these capabilities in a way that aligns with who we are and what we want amallo to stand for. The honest answer, right now, is that we can't — and we'd rather tell you that plainly than leave features in place that we can't fully stand behind.

Why We Built amallo — And Why That Matters Here

We built amallo because we believe AI is an incredibly powerful tool with the potential to make the world a better place for everyone. We also understand that without ethical guardrails and standards, it has the potential to destroy some of the most critical parts of our human fabric — including art, and the artists who create it.

With great power comes great responsibility, and as an AI platform, we have a responsibility to make sure we're operating ethically — for the sake of the people using our platform, for the sake of the people affected by it, and for the sake of the future of AI itself.

Right now, that responsibility is clear: to protect artists, to reduce harm, and to refuse to participate in practices that exploit the people whose creativity makes these technologies possible in the first place. That belief is what led us to take a hard look at our image, video, and audio generation offerings — and ultimately to remove them.

What We Removed and Why

As of today, we have removed all AI image, video, and audio generation apps and their associated models from amallo.

These were real features that real users were using, and we understand that removing them creates friction. But after conducting a thorough ethical review of every provider and model available in these spaces, we reached a conclusion we couldn't ignore:

There are not currently any image, video, or audio generation models on the market that meet our ethical standard.

There are a few that come close, and we will continue to explore those options — and new ones as they emerge — but none of them check every box of our ethical framework. Below is what we look for when evaluating a generative media platform for ethical viability.

Our Ethical Framework for Generative Media AI

When evaluating whether to offer any generative media capability, we hold every provider and model against five core criteria:

1. Training Data Ethics & Consent

The most fundamental question: where did the training data come from, and did the people who created it agree to have it used this way?

The majority of generative AI models — including many of the most well-known and widely used — were trained on data scraped from the internet without the knowledge or consent of the artists, photographers, musicians, and creators whose work was used. This isn't a technicality. It means that the creative output of countless individuals was taken without permission and used to build commercial products that now compete with those same individuals.

We consider this a foundational ethical problem, not a minor compliance issue.

2. Artist Compensation

Even where training data was technically licensed, it matters deeply to us whether the human creators behind that data were fairly compensated. Licensing agreements between large platforms and AI companies don't always mean that the individual artists, photographers, or musicians at the end of that chain received meaningful compensation — or any at all.

We look for providers with transparent, direct, and ongoing compensation mechanisms for the creators whose work made their models possible. These are exceedingly rare.

3. Transparency

We look at whether providers are open about what their models were trained on, how compensation works, and what the limitations of their ethical practices are.

Opacity on training data — especially when combined with active resistance to disclosure in legal proceedings — is a meaningful red flag, and one we frequently encountered in our research.

4. Independent Verification

We give significant weight to whether providers can back their ethical claims with something more than self-reported assurances — whether their practices have been independently examined and confirmed by an outside party.

The gap between what AI companies claim about their training data and what holds up to external scrutiny is wide, and we're not willing to take marketing language at face value.

5. Displacement vs. Assistance

Finally, we ask whether a given tool is designed to assist human creativity or replace it. Tools that enhance, restore, or augment existing human creative work sit in a meaningfully different ethical position than tools designed to generate commercial-grade content at scale as a substitute for human creative labor.

This distinction matters to us — and it shapes how we evaluate every model we consider.

What We Found

We researched the full landscape of generative image, video, and audio providers — including those most frequently cited as the ethical leaders in their respective spaces.

For image generation, a small number of providers have made genuine efforts toward ethical training data practices — using licensed libraries, pursuing contributor compensation models, and supporting content transparency standards. We were encouraged by what some of these providers are attempting. However, even the strongest options fell short on at least one of our criteria — whether through incomplete compensation structures, retroactive consent issues, or an inability to independently verify their claims. None checked every box.

For video generation, the ethical infrastructure is significantly less mature. No current provider meets the standard we'd need to feel confident in. Training data practices in the video space are largely undisclosed, compensation mechanisms are essentially nonexistent, and the displacement risk to filmmakers, editors, and visual artists is significant.

For audio generation, the picture is similarly difficult. The most capable music and voice generation tools are currently subject to major litigation from music industry organizations specifically over non-consensual training data. The few providers attempting more ethical approaches are limited in capability and scope, and none have fully met our ethical standards. Voice cloning, in particular, presents risks we're not comfortable with under any current framework.

Across all three categories, the honest conclusion was the same: we can't continue to offer these in a way we feel good about.

Our Refund Policy

We recognize that some of you purchased amallo specifically to access these capabilities, and that this change affects the value you expected from the platform. That's on us, and we want to make it right.

If you purchased amallo at any point before today, March 3rd, 2026, you are entitled to a full, no-questions-asked refund through April 30th, 2026.

No justification is required. No hoops to jump through. If this change means amallo is no longer the right fit for you, we completely understand. Just reach out to us at support@amallo.ai and we'll take care of you.

What Would Bring These Features Back

This decision is not permanent. We want to be specific about what it would take for us to revisit it:

  • Consent-first training data that can be independently verified — models built on explicitly licensed, opt-in creative work from the ground up
  • Meaningful, transparent artist compensation — ongoing revenue-sharing mechanisms where individual creators can see and verify what they're receiving
  • Verified ethical practices backed by external accountability, not self-reported claims

We're watching this space closely. The landscape is shifting, accountability standards are developing, and we remain hopeful that the industry will move in a better direction. When providers emerge that genuinely meet the bar, we'll be ready — and we'll be transparent about that decision when it comes.

Where We're Focusing Instead

In the meantime, we're directing our full energy toward expanding the capabilities of our chat application and broadening our options for working with large language models. We're also working to strengthen our ethical footprint in this space, and we have some exciting changes coming soon — none of which will result in any reduction of features. We can't wait to share more.

The Broader Point

AI is powerful. That's the whole reason we're here. But power without responsibility isn't a product philosophy we're willing to build on — and right now, the generative media space asks us to benefit from practices we believe cause real harm to real people.

We're not going anywhere — we're just making sure that where we go is somewhere we can be proud of.

Thank you for being part of this journey with us.

— Liam Snack, CEO, amallo.ai
