
Russian "LameHug" malware uses GenAI to automate attacks

Posted on 13 August 2025

What happened? 

In analysis heralded as the first of its kind, Ukrainian authorities have published a report on malware that employed an AI Large Language Model (LLM) to generate commands.

On 17 July 2025, the Ukrainian government's Computer Emergency Response Team (CERT-UA) published a report on a piece of malware it called LAMEHUG, which used the publicly available Qwen2.5-Coder LLM, hosted on Hugging Face, to generate commands and enhance the functionality of the malicious code. Although its reasoning was not disclosed, CERT-UA attributed the malware, with moderate confidence, to a Russian government threat group known as APT28.

CERT-UA discovered the malware in an email campaign sent from compromised accounts impersonating Ukrainian ministry officials and aimed at recipients in government bodies. The emails carried a ZIP attachment containing the malware, which ultimately functioned to conduct system reconnaissance and steal data.

So what? 

It is no surprise that cyber espionage campaigns target government personnel. Modern cyber operations commonly aim to compromise the data of key individuals in government institutions for strategic advantage.

However, the use of GenAI tools in these attacks is both intriguing and perhaps inevitable. This incident marks a shift in the cyber threat landscape, with GenAI enabling attackers to automate and enhance tasks that were once manual and required human creativity. 

GenAI has already arguably changed software development forever, allowing programmers to automatically generate new code, optimise and debug existing scripts, and automate previously manual tasks. Yet, as demonstrated by this case, these capabilities can be exploited by malicious actors to accelerate their operations. 

On the flip side, the impact of the LLM in the LAMEHUG malware is relatively minor. It may have saved the attackers a few hours of human effort rather than posing a major additional threat to its targets.

It is highly likely that malware developers, alongside their ethical counterparts, will continue to seek ways to expedite code development. We expect to see increased use of GenAI models in malware and other attack chains, as well as in exploiting the data eventually stolen. What remains to be seen is whether this approach will significantly enhance the efficacy of attacks or merely provide marginal improvements to attackers’ day-to-day operations. 

A more worrying prospect is that attackers will benefit from AI automation in real time, during an attack. It is foreseeable that attackers could in future query AI models in an automated way to understand the environment they are in and how best to exploit it. If this materialises, it will no longer be a human hacker probing for weaknesses; it will be AI operating at much greater speed and, eventually, precision.

The integration of GenAI in malware like LAMEHUG highlights the double-edged nature of technological advancement. While GenAI offers immense potential for innovation, it also presents new challenges in cybersecurity. As AI continues to evolve, so too must our defences, ensuring we remain vigilant and adaptive in the face of emerging threats.
