In brief
- The UK lacks overarching deepfake legislation, leaving victims facing a complex patchwork of existing laws including intellectual property (IP), data protection, defamation and malicious falsehood.
- While the Government has recently introduced criminal sanctions for sharing non-consensual intimate deepfakes (via the Online Safety Act 2023), and provisions criminalising their creation (in the Data (Use and Access) Act 2025, not yet in force), significant gaps remain.
- Detection and enforcement present substantial challenges for individuals, with perpetrators often difficult to identify and frequently based overseas, beyond UK regulatory reach, whilst platforms are often slow to remove deepfake content.
- The Government's current consultation on AI and copyright may include consideration of whether more controls should be given to performers over the use of their likenesses and performances.
- The EU AI Act, meanwhile, imposes transparency requirements for deepfakes, including machine-readable marking and disclosure obligations, though practical implementation challenges remain.
The use of AI tools is proliferating and becoming mainstream. Combined with fast-moving developments in the underlying technology, this makes it increasingly difficult to distinguish AI-generated content – including deepfakes (i.e. images, video or audio intended to impersonate an individual's likeness or voice) – from authentic, human-generated content. Deepfake technology isn't, in itself, particularly new, but the ease and scale with which deepfakes can now be produced and disseminated, without easy detection or challenge, has led to urgent calls for a review of regulation in this area.
'Digital replicas' (a more benign expression for 'deepfakes') can, of course, be created for positive uses. The technology has been used to de-age the actor Harrison Ford in the movie Indiana Jones and the Dial of Destiny and to reanimate deceased actors (such as Carrie Fisher) on screen. But, when digital replicas are made without consent, they can be put to more nefarious uses. Ofcom summarised these risks well in a recent study when it noted that deepfakes can be used to "demean, defraud and disinform". Many famous people have been the subject of deepfakes, from Taylor Swift through to Stephen Fry and the financial journalist Martin Lewis, but the problem also impacts non-celebrities, and sometimes in devastating ways.
No overarching regulatory framework
The problem for those impacted is that there is no overarching law regulating deepfakes in the UK. Instead, there is a patchwork of existing laws (for example, IP, data protection, defamation, malicious falsehood), alongside laws targeting particular harms (such as the use of deepfakes in fraudulent activity). Importantly, the current regulatory focus is on the creation and dissemination of non-consensual intimate images in the form of deepfakes, where the Government has taken a number of steps to introduce criminal sanctions, with more developments to come shortly. These developments have been hard fought for, and greatly welcomed by campaigners, but gaps remain in the legislation: for example, there is nothing yet to address "nudifying" or "undressing" apps, which remove clothing from images.
Difficulties in detection and enforcement
Beyond the complexity of the current regulatory framework, those impacted by deepfakes face the further difficulty of tracking down those who create or disseminate such images. Even if they can be identified, the perpetrators are often based overseas and out of reach of the UK regulatory authorities. While contractual protections may assist some individuals (for example, performers, who may wish to contract against having their performance used to train an AI model), there is no one-size-fits-all approach to this enforcement question. Accordingly, in addition to enhanced regulation, many are looking to the role of the AI model developers and the large tech platforms (including social media) in detecting and expeditiously removing such content, enforcing their terms of use, and, where possible, preventing such content being generated in the first place. But our experience has been that the platforms are often slow to react, which can be detrimental where content can go viral rapidly online.
Potential claims in relation to deepfakes
An individual whose likeness (image or voice) is used in a deepfake has a number of potential claims available, some of which are currently of limited relevance to non-celebrities, though we may see calls to broaden the protection available.
Intellectual property rights
In the UK, there are certain forms of IP rights that might be available to provide protection for an individual's likeness. However, there is no form of personality right or image right in the UK (unlike in some other countries). Potential IP rights that might arise include:
- Copyright: while there is no copyright in an individual's voice or image, there is likely to be copyright in a photograph or video of an individual, or in a sound recording of their voice. If those copyright works are reproduced without the copyright owner's consent (e.g. during the training of an AI model), claims of copyright infringement may arise. The difficulty, however, is that the individual who is the subject of, say, a photograph is often not the copyright owner of that photograph. Individuals may also find that the relevant copyright works have been licensed to AI model developers to train their models.
Separately, performers have certain rights in their performances (there is no requirement to be a celebrity to rely upon these rights), as well as certain moral rights (though in practice moral rights are often waived by performers). While these rights may be relied upon to tackle unauthorised uses of a performance, the performers' union, Equity, has called on the Government to strengthen performers' rights to encourage licensing and prevent unauthorised AI-related uses. In particular, Equity is lobbying for increased transparency measures and additional rights, including in relation to performance synthesisation, image rights and unwaivable moral rights. It is also concerned about the terms of contracts used by production companies for training AI models/generating digital replicas, citing the example of a performer whose likeness was used as a 'performance avatar' and who later discovered it being used to promote the Venezuelan government.
Copyright may, however, have a greater role to play in relation to deepfakes going forward. The Danish government is considering using copyright law to regulate deepfakes by making the unauthorised sharing of AI-generated deepfakes illegal, including deepfakes of non-celebrities. Individuals would be able to demand removal of the images, as well as compensation, and the right would last for up to 50 years after their death. Meanwhile, the central proposal of the US Copyright Office's report on Digital Replicas is a new federal law to deal with unauthorised digital replicas (which, again, would be available to all individuals, not just celebrities), on the grounds that existing US laws do not provide sufficient legal redress. A number of US states have also proposed such laws.
In the UK, the Government is currently conducting a consultation process in relation to AI and copyright. While the consultation does not formally consult on specific proposals on digital replicas and personality rights, the Government has said that it is keen to hear views on the topic. This could include whether the current legal framework provides sufficient control to performers over the use of their likenesses/performances (perhaps involving consideration of whether performers should be able to opt their performances out of being used to train AI models).
- Trade marks and passing off: celebrities, such as Rihanna and the former motor racing driver Eddie Irvine, have had some success in bringing passing off proceedings over the use of their image to advertise a product, on the grounds that this amounts to a false endorsement. While such claims may assist in similar situations involving deepfakes of celebrities, it will be much more difficult for a non-celebrity to get such a claim off the ground.
Data protection
Information which "relates to" an identified or identifiable individual is their "personal data", and will, as a general principle, mean that the data subject has rights arising, and those who process the personal data have obligations imposed on them. "Inaccurate" data is still personal data, and, by extension, there is certainly a strong argument that a deepfake of an identifiable individual will also be their personal data. This means that affected individuals potentially have the right to request erasure, or to bring complaints or claims, under the UK GDPR.
Defamation
A deepfake could give rise to a claim in defamation if it contains false and defamatory information and causes the subject serious reputational harm. Consider a politician who becomes the subject of a fake video in which they appear to admit to wrongdoing. The merits will depend on multiple factors, including the meaning, nature and extent of publication of the deepfake, and the evidence of reputational harm. There may also be problems locating and identifying the source or author of the deepfake, problems establishing the liability of any platform hosting it, and jurisdictional hurdles if the author or the platform is based outside the UK.
Breach of privacy and/or confidence
Where a deepfake contains true but private and/or confidential information, the subject may be able to bring a claim for misuse of private information and/or breach of confidence if they did not consent to the information being used and shared in this way. What constitutes "private information" is not defined in law, but it is established that it includes information such as: medical information, details of a person's sexuality and sex life, and details of their home or family life.
Non-consensual intimate image deepfakes
The UK Government has recently introduced various pieces of legislation aimed at criminalising conduct around non-consensual intimate deepfakes. As of 31 January 2024, provisions introduced by the Online Safety Act 2023 and inserted into the Sexual Offences Act 2003 criminalise sharing, or threatening to share, intimate deepfakes without consent. In addition, the Data (Use and Access) Act 2025, which has recently received Royal Assent, contains provisions criminalising the creation, and the requesting of the creation, of intimate deepfakes without consent (note that these provisions are not yet in force, although their commencement is eagerly awaited).
Wider regulatory responses
The EU's AI Act is a wide-ranging piece of legislation regulating the development and deployment of AI, including generative AI. A cornerstone of ensuring the trustworthiness and integrity of AI systems is a robust framework of transparency requirements, enabling people to know when they are interacting with, or are exposed to, AI systems and their outputs (including deepfakes or other manipulated content). In that context, the EU AI Act contains a number of transparency requirements, including in relation to deepfakes, which will start to apply from 2 August 2026.
The European Commission has recently published a consultation on the AI Act's transparency requirements. The responses to its consultation will inform the drafting of Commission guidelines and a Code of Practice on the detection and labelling of artificially generated or manipulated content.
Specifically, in relation to deepfakes and other generated content, Article 50 of the EU AI Act requires:
- Providers of AI systems that directly interact with individuals to ensure they are informed that they are interacting with an AI system and not a human (unless this is obvious to a reasonably well-informed, observant and circumspect individual in the circumstances and context of use). For example, the Archival Producers Alliance has published guidance on best practices for the use of Generative AI in Documentaries, which includes providing a visual vocabulary that alerts the audience to GenAI use, such as a unique frame around the material or a change of aspect ratio.
- Providers of AI systems to facilitate the detection and identification of AI-generated or manipulated content by marking such content in a machine-readable manner and enabling related detection mechanisms (e.g. metadata identification, cryptographic techniques and watermarking) (a simplified illustration of such marking follows this list).
- Deployers of AI systems generating or manipulating deepfake content to provide information about the origin of the content. However, where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, these obligations are limited to disclosing the existence of the generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.
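By way of illustration only, the short Python sketch below embeds, and reads back, a machine-readable marker in an image's metadata using the Pillow library. It is a simplified sketch, not a compliant Article 50 implementation (which in practice points towards standardised provenance manifests and robust watermarking), and the ai_generated tag name is our own hypothetical label rather than any recognised standard:

```python
# Illustrative sketch only: a real Article 50 implementation would rely on
# standardised provenance manifests (e.g. C2PA) and/or robust watermarking.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

AI_TAG = "ai_generated"  # hypothetical tag name, not an official standard

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Save a copy of a PNG image carrying a machine-readable AI marker."""
    with Image.open(src_path) as img:
        meta = PngInfo()
        meta.add_text(AI_TAG, "true")  # stored as a tEXt chunk in the PNG
        img.save(dst_path, pnginfo=meta)

def is_marked_ai_generated(path: str) -> bool:
    """Return True if the image's metadata contains the AI marker."""
    with Image.open(path) as img:
        return img.info.get(AI_TAG) == "true"
```

As the next paragraph explains, markers of this kind are only as durable as the metadata that carries them.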
Of course, the position in relation to transparency and labelling of AI content is not straightforward, both legally and practically. Many organisations (LinkedIn, for example) have partnered with the Coalition for Content Provenance and Authenticity (C2PA) to add labels to AI-generated content. These labels are applied automatically based on provenance metadata embedded in the files, as identified through the C2PA process. However, this approach may easily be circumvented by stripping the metadata from digital files. It must therefore be anticipated that the discussions around the proposed Code of Practice will lead to a range of (potentially conflicting) viewpoints that may require compromises to be reached in certain areas.
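To illustrate how fragile metadata-based labelling can be (again a simplified sketch, reusing the hypothetical marker from the example above), re-encoding an image's pixels without copying its metadata silently discards any embedded provenance information:

```python
# Illustrative sketch of the circumvention risk: re-saving only the pixel
# data produces a fresh file with no embedded metadata, so any provenance
# marker (such as the hypothetical AI_TAG above) is silently lost.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-encode an image's pixels into a new file, dropping all metadata."""
    with Image.open(src_path) as img:
        pixels_only = Image.new(img.mode, img.size)
        pixels_only.putdata(list(img.getdata()))  # copies pixels, not metadata
        pixels_only.save(dst_path)
```

This weakness is one reason the marking obligation contemplates a range of techniques (metadata identification, cryptographic methods and watermarking) rather than metadata alone.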
Practical steps
While a number of legal measures are available to individuals who find that their likeness or voice has been used in a deepfake (as well as preventative measures to protect against creation in the first place), the framework for taking action remains a complex one, and we would recommend that anyone impacted seek specialist legal advice. Those needing support with non-consensual intimate image deepfakes can contact services such as the Revenge Porn Helpline, which provides free assistance with the removal from the internet of intimate images, including deepfakes, shared without consent. The police have also published guidance on reporting potential criminal offences involving deepfakes.
If you would like to discuss issues relating to deepfakes, including how to take action to protect against digital replicas being created and shared, please get in touch with a member of the team.