The Dangers of ‘AI Slop’: Acceptance and Managing Client Expectations in the Time of GenAI
- Rebecca Flynn
- Sep 19, 2025
- 4 min read
- Updated: Sep 24, 2025

In conversation with Ashley Keeler, Head of StudioT3D and CTO, Target3D
Ashley, we’ve been hearing this phrase “AI slop” more and more. What does it mean to you, and why is it something the industry needs to worry about?
“AI slop” is a phrase that has started doing the rounds and, bluntly, it describes what happens when we accept the lowest common denominator of generative output. It is the stuff that looks like it could be useful on the surface, but the more you interrogate it, the more you realise it lacks context, intent, and care.
It is technically serviceable, but it is creatively hollow.
The danger is when that becomes the baseline. Once people are exposed to enough of this material, the eye adjusts, and mediocrity starts to look normal. The industry risks lowering its standards, and once the bar drops it is very difficult to push it back up again.

There is also this feedback loop problem. Generative models are increasingly being trained on datasets that already include earlier AI outputs. So you are essentially training AI on AI. It is a bit like digital inbreeding. The material becomes more and more degraded over time. Each generation introduces more noise and less originality. If that is what audiences, or worse, clients, start to think of as standard, then we are on a slippery slope.
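To make that feedback loop concrete, here is a minimal toy sketch in Python. It is entirely our own illustration, not anything from Target3D’s pipeline: a “model” that is nothing more than word frequencies is retrained each generation on its own output, and rare ideas that miss a single sampling round are gone for good.

```python
import random
from collections import Counter

random.seed(7)

VOCAB_SIZE = 500    # distinct "ideas" in the original human-made corpus
CORPUS_SIZE = 2000  # tokens generated per generation

# Generation 0: the original corpus, still rich in rare ideas.
corpus = [random.randrange(VOCAB_SIZE) for _ in range(CORPUS_SIZE)]

for generation in range(1, 11):
    freqs = Counter(corpus)               # "train": the model is just these counts
    tokens = list(freqs)
    weights = [freqs[t] for t in tokens]
    # "generate": the next training corpus is sampled from the model alone
    corpus = random.choices(tokens, weights=weights, k=CORPUS_SIZE)
    print(f"gen {generation:2d}: {len(set(corpus))} of {VOCAB_SIZE} ideas survive")
```

Run it and the count of surviving ideas only falls: extinction is irreversible in this loop, which is the “digital inbreeding” dynamic in miniature.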
And then there is a darker layer beyond blandness, which is disinformation. Deepfakes, synthetic voices, fabricated clips. These are not just theoretical risks anymore. They are here now. Once trust in what you see and hear is eroded, once the public cannot easily tell what is authentic and what is fabricated, the whole ecosystem of media and communication is destabilised. That has consequences not just for entertainment but for politics, journalism, education, everything.
Does that directly affect client conversations? If generative AI is shifting expectations, how should studios and creatives be responding?
We are already having these conversations. A client walks in and says, “I used ChatGPT or Midjourney to try this out and it gave me something instantly. Why can’t you just do that?” And the first instinct is to roll your eyes, but actually it is a critical moment.
The task is not to dismiss the technology outright but to make the distinction clear. You can get a quick synthetic draft, or you can get a crafted, human-led outcome. One is fast but shallow; the other is slower but has depth, rigour, and accountability. That is the education piece.
So we show, rather than just tell. We explain where genAI-generated assets fall down, whether that is in accuracy, cultural nuance, or originality. We explain why provenance matters, why copyright and ownership matter, and why empathy and intent still matter.
And we make the case that, in the long run, cutting corners creates risk.
It is about positioning ourselves not as gatekeepers resisting the tide, but as partners who understand the tools, understand their limits, and know how to integrate them responsibly into workflows. That way clients see value in the judgement, not just the output.
What kinds of safeguards do you think need to be in place as these tools become more widespread?
There are different layers. Some of it has to come from governments and platforms. The EU’s AI Act, China’s rules around mandatory labelling of synthetic content, Adobe’s Content Credentials framework. These are all attempts to create transparency and accountability. But regulation is always slower than innovation.
That means responsibility also sits with studios, freelancers, and production partners. Practical things make a difference. Be explicit in contracts about what data can and cannot be used for. Add metadata or watermarking so performance data cannot be quietly repurposed. Follow guidance from unions and industry bodies like SAG-AFTRA, who are already building frameworks around likeness rights and AI.
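As a hypothetical sketch of that metadata point, the simplest version is a sidecar provenance record tied to the exact bytes of a capture file. The function name, fields, and file naming below are our own assumptions for illustration; this is not the Content Credentials format or any real studio pipeline.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance(capture_path: str, performer: str, permitted_use: str) -> Path:
    """Write a sidecar JSON record documenting consent and permitted use.

    Illustrative only: field names are assumptions, not an industry schema.
    """
    path = Path(capture_path)
    digest = hashlib.sha256(path.read_bytes()).hexdigest()  # fingerprint the bytes
    record = {
        "file": path.name,
        "sha256": digest,                # ties the record to this exact data
        "performer": performer,
        "permitted_use": permitted_use,  # e.g. "this production only"
        "ai_training_consent": False,    # explicit opt-in, never assumed
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = path.with_name(path.name + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Usage (hypothetical file):
# write_provenance("session_042.bvh", "Jane Doe", "this production only")
```

A hash-based sidecar is deliberately the weakest form of the idea; cryptographically signed, embedded credentials of the kind Content Credentials pursues are harder to strip, but the principle is the same: the usage terms travel with the data.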
And there is a big ethical principle here too. Motion data, body scans, voice recordings. These are not neutral files. They are biometric. They are personal. They carry identity. If we treat them with the same seriousness as personal data, then consent, ownership, and respect naturally follow. That is the cultural shift we need.
You sound cautious, but you have also said you are optimistic. What is giving you confidence right now?
Because the technology itself is not the enemy. If we set standards high, if we build ethical practices into the workflows, then generative tools can actually open doors.
Think about it. Access to tools that once required Hollywood budgets is now on the desktop. Motion capture, world building, animation. All at a scale that a student or a small studio can touch. That democratises creativity. That means more voices, more experimentation, more innovation.
But it only works if the ecosystem is healthy. If we flood it with slop, then no one wins. If we use it to empower creators rather than replace them, if we are honest with clients about what quality looks like, and if we keep centring the performer and the human story, then this next phase could be incredibly exciting.
So yes, there are dangers, but there is also a huge opportunity. It is on us as an industry to make sure we take the second path.
Stay tuned for further discussions with Ashley!