
Deepfakes of the deceased: Ethical dilemmas and implications on legacy and reputation

Posted on 30 October 2025

As AI continues to develop at a staggering pace, advanced AI-generative tools are becoming more widespread and easily accessible, sparking concerns about their uses. One major issue is the proliferation of deepfakes - non-consensual, AI-generated or altered images, videos, or audio depicting a person doing or saying something which is actually fabricated.

At the end of September, OpenAI launched Sora 2, its latest audio and video generation model which boasts sophisticated, hyper-realistic video creation. Since its launch, videos created using Sora 2 have flooded social media platforms. This has highlighted a significant issue: the creation of deepfake videos depicting deceased persons, who are unable to consent to their likeness being used by AI tools such as Sora 2, and the impact this may have on their families and their legacy.  

Tool for good?  

For some, the chance to bring loved ones back through a digital reanimation is a comfort as they navigate loss and may help them to grieve. In Hollywood, actors are being digitally revived to posthumously star in films, such as Carrie Fisher as Leia in Star Wars: The Rise of Skywalker. Earlier this year, the BBC offered writing classes presented by a digitally reconstructed Agatha Christie.

For family members not involved in the creation of these depictions, however, it can be a distressing and harrowing experience. A recent surge of cases involving public figures has highlighted the moral and ethical dilemmas involved in reanimating the dead without their consent. Zelda Williams, the daughter of the late actor Robin Williams, made an emotional plea on Instagram earlier this month asking that people stop sending her AI deepfakes featuring her father: 

"Please, just stop sending me AI videos of Dad. Stop believing I wanna see it or that I'll understand, I don't and I won't. If you’ve got any decency, just stop doing this to him and to me… it's NOT what he'd want".

The risk of harm to reputation and legacy 

Zelda Williams' post continued, "To watch the legacies of real people be condensed down… is maddening". This highlights how such videos can not only cause harm to grieving families, but also damage the reputations and legacies of those depicted, who are no longer able to control or consent to how their likeness is being used. A proliferation of fabricated but extremely lifelike content depicting things someone never actually said or did could lead to a denigration of their reputation, hard-earned throughout their lifetime.  

The launch of Sora 2 has heralded a new and dangerous era of digitally generated clips, particularly for families of deceased public figures. While OpenAI was careful to put in place rules and guardrails to ensure Sora 2 can only be used to replicate real (living) people with their consent, deceased or "historical" public figures were initially missing from this protection.

Online users were quick to generate distasteful and outright abhorrent content capitalising on this exemption, which rapidly went viral across social media platforms. Controversial examples include depictions of John F. Kennedy joking about the death of right-wing influencer Charlie Kirk and a racist video of Dr Martin Luther King Jr making monkey noises while delivering his famous "I Have A Dream" speech.

This sparked outrage and pushback from the families of prominent individuals depicted. On 17 October 2025, OpenAI and the Estate of Martin Luther King, Jr., Inc released a joint statement explaining that "Open AI has paused generations depicting Dr. King as it strengthens guardrails for historical figures" and that representatives of the deceased will be able to "request that their likeness not be used in Sora cameos". This is a positive development. Our team has successfully asserted these rights on behalf of families of high-profile deceased persons, stopping the unauthorised use of their likeness. We understand the urgency and sensitivity required in these matters. 

However, the fact that OpenAI have made this an "opt-out" as opposed to an "opt-in" system seems deeply flawed. It places a significant burden on families, requiring them to take proactive steps to prevent the legacies and reputations of their loved ones being abused, particularly in circumstances where so much emotional distress has already been caused.

Obvious problems also remain for the families of the individuals involved where a wealth of content already exists and has spread across social media platforms. As the saying goes, you cannot put the genie back in the bottle.

Legal landscape in the UK  

From our experience working on cases involving deepfakes, the key difficulty for those affected is the complexity of the legal landscape. There is no overarching legislation specifically governing deepfakes, but living victims have a better chance of relying on existing, albeit complicated and patchwork, laws such as intellectual property, data protection, defamation, breach of privacy and/or confidence.

The position for deceased victims is more difficult as they cannot give consent and typically do not have continuing legal rights after their death. Their estates may have some grounds under copyright law where a copyrighted work (such as a photo, video, or sound recording) has been used to generate a deepfake.

It is difficult to predict at this early stage how social media platforms will react to requests to remove deepfake content of the deceased from their platforms, or whether they will adapt their terms and conditions in reaction to the fallout from Sora 2. Nevertheless, where content is clearly problematic and breaches a platform's terms and conditions there may be grounds to complain, irrespective of whether the subject is alive or dead.  

Looking to the future 

The future remains uncertain. The Government has recently introduced legislation criminalising the sharing (or threatening to share) and creation (or requesting the creation) of intimate deepfakes without consent, although the latter offence is not yet in force and its enactment is eagerly awaited. However, there do not appear to be any current plans to widen this to cover other harmful deepfake content, including to protect the deceased and their families. Without swift and decisive legal and/or regulatory action, the deceased and their families remain at risk of harm from future deepfake video generation tools.

We continue to monitor developments in this area and regularly engage with tech companies and social media platforms to advocate for stronger protections for victims and families. 

How Mishcon de Reya can help 

We are facing an untested and unprecedented situation, which is extremely complex and rapidly evolving. If you or your family have been affected by deepfake content, or if you are concerned about the digital legacy and protecting the reputation of a loved one, our specialist Reputation Protection and Crisis Management team is here to help.
