Digital Replicas: Who Owns a Person’s Likeness?
- Rebecca Flynn
- Aug 8

In conversation with Ashley Keeler, Head of StudioT3D and CTO, Target3D
When we talk about digital replicas, what exactly do we mean - and how far does a ‘likeness’ extend?
Sure. It's easy to think about someone’s physical appearance. In light of the recent SAG-AFTRA video game actors’ strike, we've been talking a lot about that - and also about vocal performance: people’s voices being synthesised and used without permission… on train platforms, or as OpenAI virtual assistants.
But what we don’t talk about enough is motion and movement characteristics - obviously, that's what we as a studio are focused on.

The way a performer moves - their gait - is probably the least discussed component of a full persona or likeness, and it’s also the least rewarded. Buyouts still aren’t standard for mocap shoots. We’re in a better place when it comes to compensating voice actors than we are with motion capture actors.
A lot of people dismiss the idea of motion being core to someone’s identity. But I’ve always been adamant: if we mocapped everyone in the studio and showed their skeletal motion on screen, I’m pretty confident we could collectively work out who was who.
Everyone has a fingerprint in their movement - it’s very clear, especially if you look at a lot of specific actors’ data. So I think the concept of digital likeness MUST include their motion data too.
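To make that idea concrete, here’s a minimal sketch of how a movement “fingerprint” might be matched from skeletal data alone. The feature set (per-joint speed statistics), the function names, and the synthetic clips are all illustrative assumptions - this is not how a production gait-recognition pipeline works.

```python
# Toy "movement fingerprint" matcher: summarise a skeletal-motion clip into
# per-joint speed statistics, then nearest-neighbour match against labelled
# reference clips. Features and data here are hypothetical.
import numpy as np

def motion_features(joints: np.ndarray) -> np.ndarray:
    """joints: (frames, num_joints, 3) array of joint positions."""
    velocity = np.diff(joints, axis=0)        # frame-to-frame joint motion
    speed = np.linalg.norm(velocity, axis=2)  # (frames - 1, num_joints)
    # Mean speed captures tempo; std captures how jerky or smooth the mover is.
    return np.concatenate([speed.mean(axis=0), speed.std(axis=0)])

def identify(clip: np.ndarray, references: dict) -> str:
    """Return the reference performer closest to the clip in feature space."""
    feats = motion_features(clip)
    return min(references,
               key=lambda name: np.linalg.norm(feats - motion_features(references[name])))

# Synthetic stand-ins: two "performers" with different tempo and smoothness.
rng = np.random.default_rng(0)
def fake_walk(tempo: float, noise: float, frames=120, joints=20) -> np.ndarray:
    t = np.arange(frames)[:, None, None]
    base = np.sin(tempo * t + np.arange(joints)[None, :, None])
    return base + noise * rng.standard_normal((frames, joints, 3))

refs = {"performer_a": fake_walk(0.30, 0.02), "performer_b": fake_walk(0.55, 0.10)}
print(identify(fake_walk(0.30, 0.02), refs))  # expected: performer_a
```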
In the metaverse, who owns your face and your motion right now?
Great question. I guess a good place to start is with the idea that likeness should be owned by the person it’s based on!
It gets tricky when a character is created from a scan of a person but is then built upon, augmented, or morphed into something more. Similarly, say we run a mocap shoot and the data gets licensed, then altered or stylised - you could argue it’s now transformative, and someone might claim fair use.
This is a bit of a minefield for likeness, and reactions to it are often divided.
A good example is Peter Cushing, who was digitally recreated as Grand Moff Tarkin in Rogue One (2016), based on his performance in the original 1977 Star Wars. Some people saw it as a tribute. Others found it unsettling - insincere, even.

I suppose it raises the question of whether we react differently depending on who is being recreated. Like, would there be the same backlash for a digital Henry VIII or Charles Dickens?
Where do we draw the line? Is there a model for likeness that follows the literary approach? Copyright protection for literary, dramatic, musical, and artistic works in the UK lasts for 70 years after the end of the calendar year in which the author dies.
That would make artists who died in the 1950s fair game for recreation - Buddy Holly died in 1959, so on that model his protection would run out at the end of 2029… here comes the Buddy Holly show? Feels weird, right?
Maybe there’s a sense that historical figures are part of a shared cultural heritage.
Can or should performers start viewing their image, voice, and motion as IP? And what are your views on protection?
I think that’s quite a simple answer, isn’t it? It’s a solid yes.
We’re talking about GenAI - there’s no way we’re going to put the toothpaste back in the tube. So yes, it’s going to be critical for actors and performers to be able to control their assets in that sense.
If we look specifically at the video game actors’ clauses, they focus on four key things: control, consent, compensation, and transparency. And just to touch on that - there are some really interesting services starting to emerge, aimed at giving performers more control and licensing options.
Metaphysic Pro - that was one of the first to really make a splash. They’ve got people like Tom Hanks and Anne Hathaway on board - big names who want to preemptively protect their likeness and take control of how their digital replica is used in the future.
That kind of protection should be available across the board - not just for celebrities, but for emerging and independent performers as well.
There’s also TraceID from a company called Vermillio. It builds a digital profile of an actor and then actively scans the internet for signs that someone is using their likeness without permission. It’s designed to either take down the content or push for revenue where it’s due.
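Vermillio hasn’t published how TraceID works internally, but the general shape of such a scanner can be sketched with off-the-shelf perceptual hashing: register a fingerprint of known imagery, then flag anything found online whose hash lands within a small distance. The images and thresholds below are stand-ins, not TraceID’s actual method.

```python
# Toy likeness scanner using perceptual hashing (pip install pillow imagehash).
# A perceptual hash survives resizing and recompression, so near-copies of a
# registered image land within a small Hamming distance of its fingerprint.
from PIL import Image, ImageDraw
import imagehash

def fingerprint(image: Image.Image) -> imagehash.ImageHash:
    return imagehash.phash(image)

def is_match(candidate: Image.Image, registered: imagehash.ImageHash,
             threshold: int = 10) -> bool:
    # Subtracting two ImageHash objects gives their Hamming distance (0-64).
    return (fingerprint(candidate) - registered) <= threshold

# Synthetic stand-ins for "registered portrait" vs. "unrelated content".
def ellipse_image() -> Image.Image:
    img = Image.new("L", (128, 128), 255)
    ImageDraw.Draw(img).ellipse((30, 20, 98, 100), fill=80)
    return img

def stripes_image() -> Image.Image:
    img = Image.new("L", (128, 128), 255)
    draw = ImageDraw.Draw(img)
    for y in range(0, 128, 16):
        draw.rectangle((0, y, 128, y + 8), fill=80)
    return img

registered = fingerprint(ellipse_image())
print(is_match(ellipse_image().resize((64, 64)), registered))  # expected: True
print(is_match(stripes_image(), registered))                   # expected: False
```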

We’re also working with a PhD student from the Guildhall School of Music & Drama who’s exploring this exact space - how do we securely lock down and track likeness?
I’ve been looking into some of the platforms that let you build a digital twin of yourself. So you could create a digital ‘Mr X’, for example - and then define exactly what kind of content your likeness, voice, or video can be used in.
That kind of licensing could let you rent out “Mr X” to a company, but only in very specific cases. It’s about putting the power back into the hands of the individual, but we’ve a long way to go.
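None of those platforms publish a common schema, but as a hypothetical sketch, a machine-readable likeness licence might look something like this - every field name and rule here is invented for illustration:

```python
# Hypothetical machine-readable likeness licence. The schema is invented
# for illustration; no platform's real data model is being described.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LikenessLicence:
    performer: str
    assets: set = field(default_factory=set)          # e.g. {"face", "voice", "motion"}
    permitted_uses: set = field(default_factory=set)  # e.g. {"in_game_animation"}
    licensee: str = ""
    expires: date = date.max

    def permits(self, asset: str, use: str, who: str, on: date) -> bool:
        """Every condition must hold; anything not granted is refused."""
        return (asset in self.assets
                and use in self.permitted_uses
                and who == self.licensee
                and on <= self.expires)

# "Mr X" rents out his motion data to one studio, for one purpose, until a date.
deal = LikenessLicence(performer="Mr X", assets={"motion"},
                       permitted_uses={"in_game_animation"},
                       licensee="ExampleStudio", expires=date(2026, 12, 31))
print(deal.permits("motion", "in_game_animation", "ExampleStudio", date(2026, 1, 1)))  # True
print(deal.permits("voice", "advert", "OtherCorp", date(2026, 1, 1)))                  # False
```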
What are the risks of normalising digital doubles without clear regulation? What are the long-term implications?
There’s this term - “AI slop” - that’s started circulating. It basically refers to the massive flood of low-quality, AI-generated content. This isn't just a volume issue. It’s a quality problem.
We’re already seeing it with open-source AI tools. People generate a face, then generate another, then another. And each generation is based on data that has already been generated - so with every step, you lose fidelity. You lose sharpness. It’s a form of digital entropy.
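That degradation loop is easy to demonstrate in miniature. In the toy sketch below the “model” is just a Gaussian refitted to its own samples, but the failure mode - each generation drifting further from the original data - is the same one described for image models trained on generated images.

```python
# Toy "digital entropy" loop: fit a distribution to data, sample from the
# fit, then fit the next generation to those samples. With small samples,
# the fitted parameters drift further from the source every generation -
# exact values vary by seed, but the information loss is systematic.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=50)    # the "real" source data

for generation in range(1, 21):
    mean, std = data.mean(), data.std()           # fit the current generation
    data = rng.normal(mean, std, size=50)         # next generation: samples only
    if generation % 4 == 0:
        print(f"gen {generation:2d}: mean={mean:+.3f}  std={std:.3f}")
```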
So the fear is that, as it gets easier to generate content, and with the monetisation of platforms like TikTok, there’s less and less incentive to protect quality, or accuracy, or truth. People are chasing volume, views, and speed. Not care. And that’s dangerous.
Because when deepfakes become completely indistinguishable from real footage, we risk losing all trust in what we see.
See the Liar’s Dividend theory: you might watch a world leader say something shocking - but do you believe it? Should you? Can you?
That opens the door to disinformation, manipulation, and a collapse in trust around all visual evidence.
Yes, AI can help us detect fakes, but I think the conversation has to go further. There needs to be regulation, education, and real transparency from the platforms where this content is being shared.
What kind of regulation would you like to see? And who should be responsible? Platforms? Creators? Governments?
I think this needs to be a joint effort.
Governments absolutely need to step in. Some are already moving. China has introduced rules requiring AI-generated content to be clearly labelled. The EU’s AI Act includes similar transparency requirements. But enforcement? That’s still a big question mark.
Platforms carry a huge responsibility too. If you’re building a platform that makes it easy to generate or share deepfakes, then you have to make sure people know what they’re watching.
That could mean things like embedding metadata or using watermarking systems that survive compression and reposting. There’s tech already out there - like Content Credentials from the Coalition for Content Provenance and Authenticity - that’s trying to do exactly that.
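Content Credentials itself is a full manifest format, but the cryptographic core - binding a signed claim to the exact bytes of a file, so any edit breaks verification - can be sketched in a few lines. This uses the Python cryptography package and is a simplification, not the C2PA spec:

```python
# Toy provenance credential: sign a claim that binds creator identity to a
# hash of the media bytes. Any alteration of the file or the claim makes
# verification fail. Requires: pip install cryptography
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def make_credential(media: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    claim = {"creator": creator, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify(media: bytes, credential: dict, public_key) -> bool:
    claim = credential["claim"]
    if hashlib.sha256(media).hexdigest() != claim["sha256"]:
        return False                                  # media bytes were altered
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False                                  # claim was tampered with

key = Ed25519PrivateKey.generate()
video = b"...original media bytes..."
cred = make_credential(video, "StudioT3D", key)
print(verify(video, cred, key.public_key()))            # True
print(verify(video + b"edit", cred, key.public_key()))  # False
```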
But creators also need to be educated. A lot of people are playing with GenAI tools without fully understanding the ethical or legal implications. It’s fun. It’s free. It’s fast. But unless we figure out a system to track, credit, and protect original sources - whether it’s a voice, a movement, a face, or choreography - we’re in real trouble.

I think about this a lot working in mocap. People don’t realise that the way someone walks down the street is a kind of authorship. It’s their performance. And right now, there’s nothing stopping someone from capturing it and using it - without credit, without payment, and without consent.
Stay tuned for further discussions with Ashley!