The rise of deepfakes: navigating their impact on reputation and business

Posted on 20 March 2024

Artificial Intelligence (AI) has revolutionised the creation of visual content. Multiple AI image-generator platforms are now available, allowing individuals and businesses to create content with limitless creativity, scalability, and efficiencies in cost and time. But, as with most rapidly evolving technology, the law is slow to catch up with those who use it for nefarious purposes.

In the last few years, an increasing number of deepfake images and videos have made it into the mainstream. Deepfakes are non-consensual, AI-generated or AI-altered images, videos or audio which purport to show a person doing something specific but are in fact fabricated. Advances in technology mean that deepfakes are becoming increasingly sophisticated. Some are extremely lifelike and, even with knowledge of typical red flags, difficult to spot. Not only that, but they are now quick, cheap, and easy for virtually anyone to create.

Threats Posed by Deepfakes

What started off as novel and innocuous – generating or tampering with images to create humorous satire and parody – has snowballed into a tool used to inflict very real and serious harm.

Disturbing statistics show that in 2023, 98% of all deepfake videos found online were pornographic or intimate content, and 99% of the individuals depicted were women. High-profile examples include Taylor Swift and Jenna Ortega, but ordinary people are equally at risk of deepfake images being used to violate, embarrass, extort, and blackmail them.

In a year of upcoming elections in many countries, deepfakes are also undermining political figures and threatening democracy by spreading deliberate misinformation. For example, audio deepfakes imitating the voice of Joe Biden emerged, telling voters not to vote in the New Hampshire presidential primary.

Deepfake fraud material reportedly increased by 3,000% in 2023. Deepfake voice and facial imitation are frightening examples of how cyber-hackers and fraudsters have impersonated people to gain access to bank accounts, or blackmailed people into handing over money or sensitive commercial information. The risk is not only business harm: attacks involving deepfakes also have the potential to cause serious reputational damage if misinformation and false allegations are spread online or sent in targeted communications to key stakeholders, such as clients, employees and banks.

Current Legislation on Deepfakes and AI

Recent legislative changes in the UK, brought in by the Online Safety Act 2023 and inserted into the Sexual Offences Act 2003, have criminalised sharing, or threatening to share, intimate deepfakes. This is world-leading legislation in many respects, but one of its limitations is that it does not extend to the creation of deepfakes. Because perpetrators hide behind various layers of anonymity on the internet, it is often hard for victims to identify exactly which individual(s) shared the particular deepfake(s). This, coupled with the difficulty in the UK of pinning legal liability on the platforms that host non-consensual pornographic and nude deepfakes, is a huge barrier to justice.

In the US, two important pieces of legislation have been introduced as bills: the Preventing Deepfakes of Intimate Images Act and the Disrupt Explicit Forged Images and Non-Consensual Edits Act (the DEFIANCE Act). Together, these bills (once passed into law) would criminalise intimate deepfakes in the US and allow victims to take civil action against those involved in the creation and distribution of deepfakes.

The European Parliament has also just approved the EU Artificial Intelligence Act, which introduces an outright ban on AI tools deemed to carry unacceptable risks (such as those used to classify individuals based on their social behaviour or personal characteristics), imposes strict regulation on "high risk" AI, and sets transparency obligations for "limited risk" AI. Interestingly, deepfakes are deemed "limited risk", so the focus is on transparency: Article 52(3) requires any "deployer" of a deepfake to disclose that it has been artificially generated or manipulated. The Act applies to any organisation deploying AI systems in the EU, serving EU-based users (even if the supplier organisation is based outside the EU), or using AI outputs within the EU.

Legal Challenges

At present, few deepfake or AI-generated content cases have made their way through the courts to judgment, so the legal landscape is still in its infancy. However, individuals and businesses should understand the potential risks and legal implications of creating and sharing AI-generated content, and potential recourse they might have if targeted by an attack involving deepfakes.

There are a number of legal remedies – spanning civil and criminal law – that can stem the harm arising from a deepfake attack and be used to seek recourse against perpetrators. Action must be decisive, robust and swift. It is also important to take proactive steps to future-proof reputation, as far as possible, against potential attack. Content creators who use AI should ensure they do so in a way that complies with intellectual property, data protection and privacy rights, bearing in mind the potential for harm to reputation.

Mishcon de Reya advises businesses, entrepreneurs and emerging companies utilising AI-generated content and/or developing AI tools. We aim to ensure that these services are built and delivered in a way that is legally compliant and protected, as far as possible, from challenge in a somewhat unmapped and complex legal landscape. At the same time, we are increasingly assisting clients targeted by hackers or fraudsters, or otherwise subject to online abuse and/or misinformation, where AI content is part of the problem.
