Using artificial intelligence in cybersecurity technologies: total defence or best defence?

Posted on 26 October 2023

While artificial intelligence (AI) is not a new technology, the pace at which it has become an integral component of our society in recent years is astounding. AI solutions have increasingly been adopted for cybersecurity purposes: alerting organisations to abnormal data or network access, producing automated incident responses and summaries, flagging suspicious activities and undertaking risk analyses of attempts to access businesses' networks.
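
By way of illustration, the sketch below shows how an unsupervised machine-learning model might flag an anomalous network-access event. It is a minimal example only, assuming the scikit-learn library; the feature names and values are invented for demonstration and do not reflect any particular product.

    # Minimal sketch: flagging anomalous access events with an unsupervised
    # model (scikit-learn's IsolationForest). Features and values are
    # illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=0)

    # Hypothetical features per login event: [hour_of_day,
    # megabytes_transferred, failed_attempts_before_success].
    # Normal activity clusters around office hours.
    normal_events = np.column_stack([
        rng.normal(13, 2, 500),   # logins around early afternoon
        rng.normal(50, 10, 500),  # typical data volumes
        rng.poisson(0.2, 500),    # failed attempts are rare
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)

    # A 3 a.m. login moving ten times the usual data after six failed attempts
    # scores as an outlier (-1) and would be surfaced for human review.
    suspicious_event = np.array([[3.0, 500.0, 6.0]])
    print(model.predict(suspicious_event))  # [-1] -> flagged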

The use of AI in a cybersecurity context, however, is complicated by a growing data deluge, a shortfall of skilled security professionals, and the fact that many organisations have hundreds (if not thousands) of devices that are authorised to access their networks. Given the catastrophic consequences that could result from the failure of AI to prevent cybersecurity breaches, the topic will be a key focus at the AI Safety Summit at Bletchley Park on 1 and 2 November 2023, where the Prime Minister will meet with international ministers, businesses and experts.

Below, we consider some of the issues that may be raised at the Summit and provide practical guidance on how to mitigate them.

Misunderstanding AI

A major threat to the safe use of AI in a cybersecurity context is a misunderstanding of the technology itself. AI's sophistication, and the tendency to ascribe human-like cognitive abilities to it, could lead cybersecurity teams to place undue trust in its analytical functions, eventually minimising human intervention and oversight. Limitations or biases in training data and algorithms could result in a failure to recognise security anomalies, potentially allowing a threat such as malicious code to penetrate an organisation's cybersecurity defences. It is therefore vital to keep perspective and remember that AI is only as reliable as the data on which it is trained and the instructions it receives. As such, AI systems should complement human security teams rather than replace them entirely.

Sophistication of modern cyberattacks

While AI can expedite real-time responses to cybersecurity threats, helping to reduce the pressure on security operations centres (SOCs) and equivalent teams to protect businesses, the technology may also power tools deployed by bad actors to detect and exploit weaknesses in networks. This complicates the task of maintaining effective security defences. For example, where phishing attacks mounted from abroad could previously be identified by their awkward linguistic phrasing, the arrival of tools like ChatGPT means that criminals can now make themselves sound fluent in any given language and thereby gain a veneer of legitimacy.

Another instance of the deployment of AI to facilitate (rather than prevent) cyberattacks is the creation of what are known as 'adversarial examples'. This technique, which exploits the very features that make AI powerful in the first place, involves the deliberate, bad-faith insertion of specialised, near-imperceptible 'noise' into the input data relied upon by AI-powered tools, causing them to adjust their behaviour and make mistakes. A cyberattacker could, for example, attempt to alter the characteristics of a malicious network traffic pattern so that the AI system in question treats them as normal, with the result that future instances of that pattern may no longer be flagged to cybersecurity teams.
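
The sketch below illustrates the idea against a deliberately simple, invented linear 'detector': a small, targeted perturbation of the input features flips the model's decision even though each feature changes only slightly. It is a toy example in the spirit of the fast gradient sign method, not an attack on any real system.

    # Toy illustration of an adversarial example against an invented linear
    # detector: score = w . x + b, with the input flagged if score > 0.
    import numpy as np

    w = np.array([0.9, -0.4, 1.3, 0.7])   # made-up model weights
    b = -1.0

    x = np.array([0.6, 0.5, 0.6, 0.5])    # a traffic pattern the model flags
    print("before:", w @ x + b > 0)        # True -> flagged as malicious

    # For a linear model the gradient of the score with respect to x is just
    # w, so stepping each feature slightly against the sign of the gradient
    # lowers the score while keeping the input superficially similar.
    epsilon = 0.2
    x_adv = x - epsilon * np.sign(w)
    print("after: ", w @ x_adv + b > 0)    # False -> slips past the detector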

Defending against adversarial attacks is an active area of research. Proposed solutions include training AI systems on adversarial examples to make them more robust, implementing mechanisms to detect and reject adversarial inputs, and developing more transparent AI models that allow for better understanding and scrutiny of their decision-making processes. As these solutions are still in development, it is crucial that organisations ensure that any AI-powered cybersecurity tools they implement are trained on reliable and up-to-date data. This should be a key area of attention in any contracts entered into for services delivered via AI technologies. Cybersecurity teams should additionally ensure continual human oversight of those tools, so that deviations in the data can be promptly identified and corrected.
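
As a rough sketch of the first of those defences, the example below augments a training set with perturbed copies of malicious samples before retraining, so that similar evasion attempts are more likely to be caught. The data, perturbation scale and model are all invented for illustration.

    # Illustrative sketch of adversarial training with scikit-learn: retrain
    # on perturbed copies of malicious samples so evasions are harder.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(seed=1)
    benign = rng.normal(0.0, 0.5, size=(200, 4))
    malicious = rng.normal(2.0, 0.5, size=(200, 4))
    X = np.vstack([benign, malicious])
    y = np.array([0] * 200 + [1] * 200)

    clf = LogisticRegression().fit(X, y)

    # Craft evasive copies of the malicious samples by stepping against the
    # gradient of the malicious-class score (for a linear model, its weights).
    epsilon = 1.2
    X_adv = malicious - epsilon * np.sign(clf.coef_[0])

    # Retrain on the original data plus the adversarial copies, still
    # labelled malicious, so the model learns to recognise the evasion.
    robust = LogisticRegression().fit(
        np.vstack([X, X_adv]),
        np.concatenate([y, np.ones(200, dtype=int)]),
    )

    # The robust model should catch a noticeably larger share of the evasions.
    evasion_labels = np.ones(200, dtype=int)
    print("plain model catches: ", clf.score(X_adv, evasion_labels))
    print("robust model catches:", robust.score(X_adv, evasion_labels))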

Remote working

The increase in flexible working following the COVID-19 pandemic means that many workers now connect employer devices to home networks. As home networks typically have less robust security than workplace networks, they are often an attractive place for perpetrators to begin an attack. If an organisation uses AI for biometric identification (such as facial recognition or fingerprint scanning) to permit access to a secure network, it should be aware that this approach is not without cybersecurity risk. Since AI systems learn to recognise behavioural patterns in order to authenticate users, a cyberattack capable of mimicking those behavioural traits accurately could bypass this security measure. Using biometric data in this way also exposes organisations to considerable legal and reputational repercussions in the event of a data breach, and biases within the tools deployed could unfairly discriminate against legitimate employees. This can be difficult to resolve, since many AI systems are 'black boxes' that do not provide clear explanations for their decisions.

Businesses deploying AI tools to govern remote access should therefore consider requiring the providers of such technologies to commit to transparency regarding the algorithmic logic underpinning the relevant decision-making processes. Overreliance on these tools for monitoring remote access should also be avoided: additional technical and organisational measures should be implemented, such as requiring employees requesting remote access to connect to an approved virtual private network (VPN) or to complete two-factor authentication.
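
For context, two-factor authentication of this kind commonly rests on time-based one-time passwords (TOTP, RFC 6238), which most authenticator apps implement. The sketch below shows the underlying derivation using only the Python standard library; the shared secret is a made-up example, not a real credential.

    # Minimal sketch of a time-based one-time password (TOTP, RFC 6238),
    # the mechanism behind most authenticator apps used for two-factor access.
    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Derive the current one-time code from a base32 shared secret."""
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // interval          # 30-second time step
        msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
        digest = hmac.new(key, msg, "sha1").digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The server and the employee's authenticator app share this secret at
    # enrolment; both sides then compute the same short-lived code independently.
    SECRET = "JBSWY3DPEHPK3PXP"  # example base32 secret, not a real credential
    print(totp(SECRET))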

Employee training

Implementing AI-powered cybersecurity software is only effective if those who use it, or whose data is subject to its analysis, are properly trained in their responsibilities. Businesses should always bear in mind that the complex outputs AI can generate may still require a certain level of expertise to interpret correctly. Since AI is not static, but continually learning and adapting, it is also vital to keep up to date with these changes and understand how they might affect the decision-making processes and outputs of AI cybersecurity tools. Inadequate training could, therefore, result in overlooked threats or incorrect decisions. Even the most advanced AI system cannot prevent all cyberattacks, particularly those that exploit human error (such as phishing through unsolicited emails). Basic cybersecurity awareness remains vital for all employees, regardless of the sophistication of a business's cybersecurity arsenal.

Conclusion

The integration of AI into cybersecurity strategies presents both an opportunity and a challenge for commercial businesses. While its capacity to identify anomalies, automate responses and adapt to emerging threats provides a potent protective tool, its efficacy in this context is contingent upon a comprehensive understanding of its capabilities and limitations, robust data governance and a commitment to continuous human oversight and training. AI cybersecurity technologies should be seen as a 'best defence' against potential cyberattacks rather than a 'total defence', with the human staff of SOCs still intervening to ensure that these tools are behaving as expected. We expect that November's AI Safety Summit will underscore the importance of balancing technological innovation with human expertise in tackling the ever-shifting cybersecurity landscape.

Take a look at our AI resource hub for further information.
