Mishcon de Reya

The impact of deepfakes on the UK general election

Posted on 11 June 2024

The use of deepfakes, and their sophistication, is accelerating. The risks to democracy from manipulated content – such as audio or video clips that incriminate politicians, or doctored campaign materials – are obvious. With a general election weeks away, just how real and widespread are they?

Prime Minister Rishi Sunak, Labour leader Keir Starmer and London Mayor Sadiq Khan have already been targeted. In January, research by communications company Fenimore Harper revealed that more than 100 video adverts impersonating the Prime Minister, reaching as many as 400,000 people, had been promoted on Facebook in the previous month alone, including fake BBC News reports of an alleged corruption scandal. Faked audio circulated last year of Sadiq Khan suggesting that Armistice Day commemorations should be postponed in favour of a pro-Palestine march. And Keir Starmer's voice was used to create a fake recording of him verbally abusing staff. Social media company X, whose policy makes clear that users "may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (“misleading media”)", refused to remove the clip, arguing that it could not prove the recording was fake. It pointed to another section of the same policy: "In situations where we are unable to reliably determine if media have been altered or fabricated, we may not take action to label or remove them."

Most of the world's largest tech companies, including Meta, Google, X and Microsoft, have since signed an accord pledging to tackle deceptive artificial intelligence (AI) in elections, but the response from X to the Keir Starmer deepfake highlights one of the key challenges of tackling such material after the event. Publishers need to act fast to stop deepfakes spreading and causing damage, but they also need to be sure they are removing genuinely fake content, particularly given the parallel trend for certain actors to dismiss real content as 'fake news'. The ability to quickly demonstrate that a recording has been fabricated will inevitably be more difficult for the subjects of the most sophisticated campaigns, where the stakes are likely to be highest.

Meta's President of Global Affairs, Nick Clegg, warned in February that it was "not yet possible to identify all AI-generated content", and that people could strip out invisible markers that would otherwise signal that content has been manipulated. He urged users to be alert to other signs of AI-created content, such as checking whether the account sharing the content is trustworthy, or looking for details that might look or sound unnatural. In the UK, the Government has issued practical guidance for electoral candidates and officials, to be read in conjunction with Defending Democracy guidance from the National Cyber Security Centre.

Are there any policies or mechanisms that are truly proactive and aim to tackle deepfakes at source? In the UK, in relation to intimate deepfakes, the Government had proposed criminalising the creation of such content via an amendment to the Criminal Justice Bill, but that Bill did not survive the brief 'wash-up' period before Parliament was dissolved ahead of the Election. The existing offence of sharing intimate deepfakes, introduced by the Online Safety Act 2023 as an amendment to the Sexual Offences Act 2003, remains. An equivalent provision may be revived by the new Government, and it may be extended to cover the creation of all deepfakes, although not, of course, in time for the Election.

We may well see, in the run-up to the election, further examples of specific deepfakes that gain traction and, conceivably, influence the outcome. Much will depend on how, and how quickly, the deepfake targets, as well as key publishers, respond. Perhaps, however, the greatest damage will come not from individual deepfake content but from a general erosion of trust in politicians and their output. This might even be compounded by the public's growing awareness of AI and the creeping sense that nothing and no one is actually reliable. In elections where public trust is low and credibility is weaponised for political gain, this is deeply concerning. Insofar as it is possible to measure – from polls, surveys and data collected by tech companies – it remains to be seen just how great the impact of deepfakes on the election might be.
