
Future fakes – what to do when you can no longer believe your eyes

Posted on 8 June 2019

Deepfakes – hyper-realistic and convincing fake videos generated through AI technology – are potentially one of the most disturbing tech developments of recent times. Doctored video footage is nothing new, but AI significantly enhances the realism of fake video, enabling unsaid words to be put into an individual's mouth. From politicians and film stars to CEOs and CFOs, the ability to manipulate existing images of public figures, or simply of those with a sufficient visual online presence, to create fake footage virtually indistinguishable from the real thing has far-reaching implications. Personal, financial and democratic credibility is at stake. What happens when you can no longer believe your eyes?

The role of artificial intelligence in enabling, as well as detecting, fakery and fraud was discussed at the recent 22nd International Fraud Group conference. The novelty of deepfake video attracts considerable media interest, possibly distorting perceptions of the risk it poses. Producing the most realistic deepfakes requires substantial financial and technological resources, currently likely to be within the reach only of larger state actors. However, the technology will inevitably become cheaper and more widely accessible, resulting in highly sophisticated fakes and a marketplace to match. While that day is not here yet, now is the time, particularly for public figures, to prepare. Critical actions include ensuring that a strong narrative underpins their online persona, preferably reinforced by third parties, and having a contingency plan in place. This puts individuals on the front foot, able to identify and contradict faked footage swiftly if it does emerge.

More broadly, the defining change that AI brings to the fraudster's toolkit is an ability to act at scale – to manipulate multiple sources of data, visual or otherwise, quickly and efficiently, creating ever more convincing and complex deceptions to achieve their aims. To combat this requires a collaborative, multidisciplinary response, in which the power of AI can be harnessed to accelerate fraud detection, mitigation and prevention.  

While it is tempting to believe that emerging technologies can be a silver bullet, even the most advanced machine learning programme will not be sufficient in isolation. Technology is one of three critical components in fraud detection – managing the human aspect through training and putting robust processes in place are essential complements to what AI can bring to the table. Organisations should also adopt a holistic, top-down approach to fraud – that is, elevate the issue to the boardroom and ensure an organisation-wide strategy for identifying, preventing and mitigating fraud. To combat increasingly organised and systematic criminals effectively, financial institutions and other organisations need to strategise outside business unit silos.

AI-enabled solutions are, if not a complete answer in themselves, nonetheless reaching an important tipping point in fraud investigations and litigation. Investigators and lawyers increasingly feel able to trust the tech tools at their disposal, enabling them to reach more accurate conclusions more quickly. Predictive coding and cognitive analytics, such as natural language processing (NLP) and sentiment analysis, are two areas where significant advances are being made. Machines can now analyse a mass of documents and determine not only which are relevant, but also detect relevant information and concepts embedded in the text of those documents with 60-70% confidence. The core characteristic underpinning these new technologies is the analysis of patterns in data to identify the people rather than the act – fraudsters instead of frauds.
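To make the idea concrete, the document triage described above can be caricatured in a few lines of Python. This is an illustrative toy only: the term lists, function names and threshold below are invented for this sketch, and real predictive coding tools use trained statistical models rather than keyword matching.

```python
# Toy sketch of document triage: score each document for fraud-related
# vocabulary and attach a crude negativity signal. All term lists and the
# threshold are invented for illustration; real tools use trained models.

FRAUD_TERMS = {"invoice", "transfer", "offshore", "urgent", "confidential"}
NEGATIVE_TERMS = {"worried", "hide", "delete", "problem"}

def triage(documents, threshold=0.4):
    """Flag documents whose share of fraud-related terms exceeds the threshold.

    Returns (doc_id, relevance_score, negativity) tuples, most relevant first.
    """
    flagged = []
    for doc_id, text in documents.items():
        words = set(text.lower().split())
        score = len(words & FRAUD_TERMS) / len(FRAUD_TERMS)
        negativity = -len(words & NEGATIVE_TERMS)  # crude sentiment proxy
        if score >= threshold:
            flagged.append((doc_id, round(score, 2), negativity))
    return sorted(flagged, key=lambda item: -item[1])

docs = {
    "email_1": "Please transfer the funds to the offshore account, urgent and confidential",
    "email_2": "Minutes of the quarterly board meeting attached",
}
print(triage(docs))  # only email_1 is flagged for human review
```

The point of the sketch is the workflow, not the scoring: the machine narrows a mass of documents down to a reviewable shortlist, and humans make the final judgment – the "three components" model described above.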

So where next? A logical next step, as these tools evolve towards greater precision and accuracy, is to deploy them proactively to detect and stop fraud in its tracks earlier. Wholesale prevention of fraud is likely to be a challenge – fraudsters will inevitably adapt and evolve to evade detection – but earlier mitigation will reduce its financial impact.  

International Fraud Group
Future fakes – what to do when you can no longer believe your eyes

 

Adam Kennedy
Head of Business Development, Dispute Resolution
Mishcon de Reya LLP

This is the IFG’s future view of international fraud, and this year, which is pretty exciting, we are talking about the role of AI technology both in enabling new kinds of fraud and then, more importantly for our guys, countering those kinds of fraud as well.

 

Deepfake Reconstruction of Gary Miller the IFG Co-Founder
This is one of our favourite nights of the year and it pains me greatly not to be able to be with you this evening.

 

Paul Clandillon
Lead Counter-Fraud and Financial Crime For Europe
IBM

Deepfakes pose a clear and obvious threat in terms of the ability to manipulate sentiment across societies, the ability to plant statements in people’s identities and the ability to undermine reputations. Deepfakes tend to be within the ambit of political and state actors who have deep technology organisations, rather than fraudsters who are simply trying to steal money.

 

Gemma Evans
Technology Presenter and Journalist

I read a quote in the Washington Post last week warning that the 2020 US elections could be the elections of deepfakes which is a pretty scary thought.

 

Philip Hall
Partner
Portland Communications

Four things. I think first take the problem seriously. I think second make sure that you have a strong profile in the market that people know who you are so you can counter what can be frighteningly realistic misinformation. Thirdly make sure you know when it happens and fourthly, have a plan in place to deal with it.

 

Kat Barry
Strategy Manager
Mishcon de Reya LLP

I think technology definitely is enabling fraud. It is making it easier for fraudsters to get away with things. Equally, I think there are lots of applications for technology now that we are starting to get a better understanding of what it can actually do for us.

 

Karyn Harty
Partner
McCann FitzGerald

What Paul was saying about it really being about behaviours, rather than necessarily about technology, is so true, because in every case where you come across something that is cyber fraud, it is invariably because somebody was caught out.

 

Kasra Nouroozi
Partner
Mishcon de Reya LLP

People are the perpetrators of fraud. Machines aren’t. Machines are the tools. The sophistication level means that they can be faster, they can be more clandestine, but the route through which they go and the scenarios are almost always the same.
