AI has a critical role to play in the fight against fraud. But fraudsters are themselves making increasing use of AI, both to enable fraud in the first place and to increase its impact. The role of artificial intelligence in enabling, as well as detecting, fakery and fraud was discussed at the recent 22nd International Fraud Group conference.
Deepfakes – hyper-realistic and convincing fake videos generated through AI technology – are potentially one of the most disturbing tech developments of recent times. Doctored video footage is nothing new, but AI significantly enhances the realism of fake video, enabling unsaid words to be put into an individual's mouth. From politicians and film stars to CEOs and CFOs, the ability to manipulate existing images of public figures, or simply of those with a sufficient visual online presence, to create fake footage virtually indistinguishable from the real thing has far-reaching implications. Personal, financial and political credibility is at stake.
The novelty of deepfake video, which has ensured considerable media interest, may distort perceptions of the risk it poses. Producing the most realistic deepfakes requires substantial financial and technological resources, currently likely within the reach of only larger state actors. That said, the technology will inevitably become cheaper and more widely accessible, resulting in highly sophisticated fakes and a marketplace to match.
More broadly, the defining change that AI brings to the fraudster's toolkit is an ability to act at scale – to manipulate multiple sources of data, visual or otherwise, quickly and efficiently, creating ever more convincing and complex deceptions to achieve their aims.
But on the other side of the equation, AI-enabled fraud detection and prevention solutions are reaching an important tipping point in fraud investigations and litigation. Investigators and lawyers increasingly feel able to trust the tech tools at their disposal, enabling them to reach more accurate conclusions more quickly. Predictive coding and cognitive analytics, such as natural language processing (NLP) and sentiment analysis, are two areas where significant advances are being made. Machines can now analyse a mass of documents, determining not only which are relevant but also detecting relevant information and concepts embedded in their text with 60-70% confidence. The core characteristic underpinning these new technologies is the search for patterns in data that identify the person rather than the act – fraudsters instead of frauds.
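To make the two techniques above concrete, the sketch below applies a crude relevance filter and a lexicon-based sentiment score to a toy document set. It is an illustrative simplification only, not any particular e-discovery product: real predictive coding uses trained statistical models, and the word lists and thresholds here are invented for the example.

```python
# Toy illustration of document triage: (1) keyword-based relevance
# scoring and (2) lexicon-based sentiment scoring. The term lists
# below are invented examples, not a real fraud lexicon.

RELEVANCE_TERMS = {"invoice", "transfer", "payment", "account"}
POSITIVE = {"approve", "confirm", "legitimate"}
NEGATIVE = {"urgent", "secret", "delete", "bypass"}

def tokenize(text: str) -> list:
    """Lowercase the text and strip basic punctuation from each word."""
    return [w.strip(".,:;!?").lower() for w in text.split()]

def relevance_score(text: str) -> float:
    """Fraction of words that match the relevance term list."""
    words = tokenize(text)
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in RELEVANCE_TERMS)
    return hits / len(words)

def sentiment_score(text: str) -> int:
    """Net count of positive minus negative lexicon words."""
    words = tokenize(text)
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

docs = [
    "Please confirm the invoice and approve the transfer today.",
    "Urgent: delete this message and bypass the usual account checks.",
    "Lunch menu for Friday is attached.",
]

for d in docs:
    print(f"relevance={relevance_score(d):.2f} sentiment={sentiment_score(d):+d}  {d}")
```

In practice, production tools replace the hand-written lexicons with models trained on lawyer-reviewed samples, which is what allows them to report calibrated confidence levels of the kind quoted above.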
While it is tempting to believe emerging technologies can be a silver bullet, even the most advanced machine learning programme will not be sufficient in isolation. Technology is only one of three critical components in fraud detection – managing the human aspect through training and putting in place robust processes are essential complements to what AI can bring to the table. It is only through a collaborative, multidisciplinary response that the power of AI can be harnessed to accelerate fraud detection, mitigation and prevention.
So where next? As AI tools evolve towards greater precision and accuracy, fraudsters will inevitably adapt to make greater use of them. But those working to detect and prevent fraud will equally deploy them proactively to stop fraud in its tracks. The role of AI in fraud looks like a battle that is set to continue.