
AI & liability - does English law need to change?

Posted on 6 February 2026

Reading time 11 minutes

In brief

  • The UK Jurisdiction Taskforce (the UKJT) has published a draft Legal Statement considering the potential novel legal issues that harms caused by AI may give rise to, and the extent to which English law can address them.
  • The UKJT takes an optimistic view of the ability of existing English law principles to address AI-based harms. However, it identifies a number of conceptual and practical challenges.
  • The Legal Statement seeks to reassure readers that there is less uncertainty as to how AI harms will be dealt with by the English courts than may be thought. However, until the English courts have grappled with these issues by issuing judgments in particular cases, there will inevitably be significant question marks to consider when dealing with AI tools.
  • Views on the consultation paper are sought by 13 February 2026.

The UKJT Legal Statement

In January 2026, the UKJT (an "industry-led initiative, tasked with promoting the use of English law and UK's jurisdictions for technology and digital innovation") published a draft Legal Statement on "Liability for AI Harms Under English Private Law". This Legal Statement forms the basis for a public consultation on how English law can, and should, impose liability for losses caused by the use of AI.

The fundamental questions addressed by the Legal Statement are:

  • Can an AI itself be liable for losses it causes as a matter of English law?
  • Who else could or might be liable for such losses?
  • Does the nature of AI and AI-based tools give rise to any particular difficulties or challenges in attributing liability for such losses?
  • Do the answers to these questions indicate that English law as it currently stands is able to address the role that AI may come to play in day-to-day life?

Generally speaking, the UKJT's position is that, as a common law framework, the English legal system can "provide certainty and predictability in the context of technological innovation". As such, the UKJT sees English law as well placed to adapt to the challenges of using AI, and to modify or expand existing principles where necessary to account for unique issues that arise. Moreover, the report suggests that areas of "true novelty" will be rare; in most cases, the UKJT suggests, although the factual background (i.e. the use of AI) may be novel, "the application of well-established legal principles is reasonably straightforward".

However, the UKJT also acknowledges that there is at least a current "perception of legal uncertainty" in relation to liability for losses arising from the use of AI. This uncertainty could hold back adoption of AI, and lead to potentially unnecessary expenditure on risk mitigation such as the purchase of insurance. Accordingly, the UKJT considers it is important to address this potential uncertainty pre-emptively, rather than waiting for the issues to be addressed by the courts.

What is "AI" in this context?

As the Legal Statement acknowledges, there is no universally agreed definition of AI. The UKJT therefore adopts a "technology-agnostic" definition which captures the key novel characteristics of AI. The UKJT identifies "autonomy" as the key novel characteristic of AI, defining AI as any "technology that is autonomous".

The Legal Statement describes autonomous technologies as ones which operate such that: the output that results from a given input is "unpredictable"; there is an "opacity of reasoning" in relation to the output generated; and there is limited power for the user to control the tool's output. It is these features that the UKJT considers give rise to particular legal challenges when ascribing liability for harms caused by the use of AI.

Why does AI give rise to legal uncertainty?

The UKJT identifies the basis for at least the perception of legal uncertainty as being that "English private law… has never previously needed to address the capability of autonomy… other than in humans".

Historically, liability has been attributable to a legal person based on either their voluntary actions, or those of a human agent acting on their behalf. However, the UKJT starts from the premise that AI is not itself a legal person, and cannot be approached as if it were one. The question therefore becomes: how should liability be ascribed for harms caused by the autonomous actions of AI?

This question is further complicated by other aspects of the way in which AI functions. For example, the opacity of the underlying models means it is usually difficult to identify why a particular decision was made. Often, multiple parties have been responsible for developing and training an AI tool, meaning it is hard to identify who is responsible for a particular issue. In addition, AI tools may be part of complex supply chains with multiple suppliers, making it harder to identify which party is liable for any one loss.

Can English law as it currently exists account for these novel issues?

As the UKJT notes, in many cases the relationship between the users and providers of an AI tool will be governed by some form of contract, which will allocate liability between the parties. It is also likely that contractual relationships will govern upstream liability allocation (e.g. between AI model developers and AI tool developers who use those AI models in their services or products). Where liability is dealt with by contract, the novel implications of AI will be limited to questions of causation.

However, the Legal Statement examines several circumstances where AI may give rise to novel liability issues:

Negligent use of AI

The primary form of non-contractual liability the UKJT envisages might arise as a result of AI is tortious liability for negligence. As in all such claims, a key question will be whether there is a relevant non-contractual duty that can be said to have been breached.

One scenario where there may be a negligence claim arising from AI use is where a legal person (the injured party) is harmed by the use of (or failure to use) AI by another legal person (the AI user). The Legal Statement suggests that English law can already account for such a situation by applying existing principles of negligence: if the AI user owed the injured party a duty of care, they may be liable for harm suffered by the injured party where it can be shown that their actions fell short of that duty. The fact that the harm is caused by their application of AI tools (or their failure to make use of AI tools where they should have) has no bearing on this legal analysis. The Legal Statement argues that "in many cases the addition of AI will simply be considered a tool of those who exercise relevant control over it, and can be said to have been 'responsible for its actions'". The UKJT gives the example of a radiologist using (or failing to use) AI tools to review medical imaging scans.

There may also be questions as to how far "up the chain" liability might travel where an AI model produced by one party is incorporated into tools developed by another party and in turn used by a third party. As the Legal Statement notes, this will be a highly fact-specific question, but liability questions of this kind in AI-related claims are no different to those arising from any other tool or product.

However, negligence claims arising from the use of AI will give rise to novel questions as to whether the autonomous actions of the AI tool amount to a break in the chain of causation. If so, the party that created the AI model or tool may not be liable for harms caused by its output. Such claims may also raise questions as to whether the behaviour of the AI tool could and/or should have been anticipated by the parties involved in developing it. Again, these will be highly fact-specific issues, and are likely to require expert evidence to resolve.

False statements by AI

Another issue which the Legal Statement addresses is who might be liable for harm caused by false statements made by an AI.

The Legal Statement takes the view that such claims will generally depend on whether the particular statement made by the AI can be said to have been made "by or on behalf of" a particular legal person. As the UKJT takes the view that an AI cannot itself be treated as a legal person, claimants would need to show that the AI was making statements "on behalf" of another legal person. The same principle would apply to other claims based on statements (such as claims in defamation or deceit).

Again, this will be a highly fact-specific consideration. The authors of the Legal Statement acknowledge that in many cases this is not something that a claimant would be able to show. However, they suggest that "given that AI is generally used as a tool, the core negligence (if any) of the defendant legal person would typically be the careless acts that permitted the output, like the human decisions behind the AI tool's design, testing and deployment". In other words, the UKJT's view is that in most cases harms caused by false statements by AI will give rise to negligence claims against the developers of the relevant AI model or tool, rather than claims in misstatement, defamation or deceit.

Vicarious liability

Another key consideration is whether legal persons that develop AI models or tools which incorporate AI can be vicariously liable for any harms caused by that AI to its users.

There may well be circumstances in which the use of an AI tool gives rise to harm, but there is no negligence or other direct liability on the part of the developers of that tool. In those circumstances, the fact that, under current English law, AI does not clearly fall within the concept of a legal person may make it difficult to ascribe liability to any party. If an AI tool is not itself a legal person then (under current English law principles) it cannot itself be liable for harms caused to a user by its output.

One route for attributing liability to a legal person might be vicarious liability. However, a third party (such as the provider or developer of AI tools) can only be vicariously liable for the actions of another legal person. Whereas a company may be vicariously liable for the actions of one of its employees, if an AI tool is not itself a legal person then the operator of an AI cannot be vicariously liable for its actions. So, for example, a company may be vicariously liable for its employee's negligent use of AI, but cannot be vicariously liable for harms caused by erroneous AI output provided directly to a third party (e.g. a consumer).

Given that AI tools can act with a degree of autonomy, there is a potential gap in the law if there are circumstances in which neither the AI itself nor its operator can be held liable for harms arising from the AI's actions. As set out above, in many cases there will be contractual allocations of liability, but there may well be circumstances where this is not the case.

Causation

Causation is another aspect of English law that may give rise to complex issues when dealing with AI. The 'black box' nature of AI decision making (i.e. the fact that it is often impossible to discern the way in which an AI reaches any particular output) may make it difficult to demonstrate who is responsible for an output that has led to harm. This could manifest in a lack of evidence (i.e. records of the process undertaken by the AI) and/or opacity as to how the AI operates (i.e. the actual reasoning applied by the AI).

The Legal Statement takes an optimistic view on this issue, arguing that these difficulties "are neither more severe nor different in kind to the sorts of issues that arise in other domains and that the English common law is well able to accommodate". This may be the case. However, until such time as the courts grapple with these issues, it remains to be seen how much of an obstacle to claims they prove to be. Unless and until there is judicial consideration of these issues, there will inevitably be a degree of uncertainty.

Conclusion

The Legal Statement sets out a wide-ranging and thoughtful analysis of how existing English common law may accommodate harms caused by AI.

The authors conclude that the flexibility of the English legal system will allow it to address the novel issues arising from the use of AI without needing specific legislation or other external developments. This, they argue, should assuage any concerns that the use of AI may result in unclear liability for its errors.

However, the Legal Statement also makes clear that there are many ways in which AI tools, and the claims that might arise from their use, are unique. As such, whilst existing law may well be able to flex to accommodate these aspects of AI-related claims, there will remain potential uncertainty as to where liability will fall when it arises outside of contractual arrangements, at least until a body of case law is established.
