In brief
- The UK Government has published its report on GenAI and copyright. The report confirms there will be no immediate reform to UK copyright law, with the previously proposed text and data mining (TDM) exception with rights holder opt-out no longer the Government's preferred option; instead, the Government will gather further evidence and engage with industry whilst monitoring international developments, before making further proposals.
- A number of other key issues - including input and output transparency, technical standards, and the role of licensing - remain unresolved, with the Government favouring further stakeholder engagement before reaching final decisions.
- Copyright protection for computer-generated works appears likely to be removed, albeit this decision has been deferred pending further monitoring, and a new consultation on deepfake harms - including the possible introduction of a personality right - is expected this summer.
The UK Government has published its report and impact assessment on generative AI and copyright. The report follows its consultation at the end of 2024, discussed in our article: UK Government consultation on copyright and AI: A 'win-win'? (publication of the report was also a statutory requirement under the Data (Use and Access) Act 2025).
The report arrives against a backdrop of profound and rapid development relating to GenAI models and their impact, accompanied by unprecedented levels of litigation — there are now close to 100 cases in the US, with cases also in Europe (including the first reference to the European Court of Justice) and elsewhere (sign up to our tracker to receive updates and details of new cases). Alongside this, both direct licensing arrangements and voluntary collective schemes are starting to emerge, in part no doubt due to litigation pressures.
The Government faces a difficult challenge. It must balance its desire to protect the UK's creative industries against the need to provide a platform for investment in AI-driven innovation in the UK. It is perhaps not surprising, therefore, that it has decided to take more time to get these decisions "right". However, the lack of decisive action, with references to further consultations, the formation of working groups and the monitoring of international developments, means important issues remain unresolved, which brings its own uncertainty (including, potentially, further litigation).
Will the UK introduce a copyright exception for AI training?
No, at least not yet. The main headline in the report is that there will be no immediate reform to UK copyright law, and the Government no longer sees implementation of a text and data mining (TDM) exception with rights holder opt-out as its preferred option (as initially stated in its consultation). Instead, it plans to gather more evidence on how copyright laws are impacting the development of AI in the UK, and engage with industry on issues such as input transparency and technical tools and standards, all whilst keeping an eye on what happens internationally.
The proposed TDM exception would have operated in a similar way to the one currently in place in the EU under the Digital Single Market Copyright Directive. But, as many observers have noted, the EU's opt-out framework has presented both technical and legal challenges. Given the gaps in the evidence base, it is perhaps no surprise that it is no longer the Government's preferred option. Whilst rights holders will be pleased by this, it is worth noting that the option does not appear to have actually been abandoned. By contrast, the Government has stated more categorically that it does not intend, at this time, to introduce a broad TDM exception without a rights holder opt-out. If the further evidence the Government intends to gather, alongside international developments, suggests that intervention is needed, a TDM exception with opt-out may still be an option on the table. Indeed, other options may yet emerge, such as narrower exceptions based on the use of content for 'science and research'.
For now, then, the status quo will continue, which in itself raises a number of questions. In particular, much of the current complexity stems from the territorial nature of copyright: most of the AI models in use in the UK were trained elsewhere, typically in the US, where copyright law may be more permissive towards the use of copyright works in AI training due to its fair use framework (although this too remains to be resolved). Indeed, the status of AI tools that have been trained overseas was identified in the Data (Use and Access) Act as a particular topic that the Government was required to consider in this report.
The most popular option in the Government's consultation was, in fact, the option that called for a stronger copyright framework requiring licensing for all AI development, including for models trained elsewhere. The EU does seek to take a 'market access' approach under the EU AI Act - if a General Purpose AI (GPAI) model is put on the market in the EU, it must comply with EU copyright law and its requirements on transparency. The UK Government meanwhile seems inclined to let the litigation between Getty Images and Stability AI run its course before giving any answer as to what it intends to do in relation to overseas-trained models (the case will be heard by the Court of Appeal later this year). As a result, we are no closer to a resolution of this issue and, for now, it is being left to the UK courts to determine, on the basis of copyright laws which were passed in a pre-AI age, and which could be subject to significant change in the future, albeit the report gives no indication of how.
What other issues remain to be resolved?
There are a number of other important topics upon which the Government has not been able to reach a firm position. On these, it says it will work with industry and other experts to develop best practices, whilst also monitoring international developments and keeping the need for regulation under review.
In relation to input transparency, rights holders have already sent a very clear message that they want more, and more granular, transparency on the use of their copyright works to train AI systems, and that they consider this must be mandatory rather than voluntary. The report notes that AI companies are not necessarily against transparency, but they prefer it to be at a high level and industry-led.
Again, the EU experience is instructive: providers of GPAI models are required under the EU AI Act to provide training data summaries to access the EU market, with enforcement set to begin this August (though reports suggest that many AI companies are not yet publishing their training data summaries).
As for output transparency (requiring certain AI-generated works to be labelled as such), this also falls under the heading of further industry engagement, albeit Liz Kendall, Secretary of State for Science, Innovation and Technology, has stated that an interim report on output labelling will be published in the autumn. Meanwhile, development of technical tools and standards is seen as largely a matter for the market to progress, with the Government playing a supporting role.
Will the Government intervene in AI licensing?
On licensing, rights holders and AI companies are largely aligned: licensing is a commercial negotiation between private parties, and the Government should not intervene. Indeed, direct licensing deals and collective frameworks are beginning to emerge, though, as always, there is a risk that some will be left behind. The Government therefore sees its role as putting market conditions in place to enable licensing to flourish, including through, as noted above, best practices on input transparency and technical tools and standards.
The Government also highlights its Creative Content Exchange (CCE), announced in the Creative Industries Sector Plan last year, which is intended to be a 'trusted marketplace' for dealing in digitised cultural and creative assets. The CCE has already been working in pilot phase with a small number of museums, and a pilot platform should be in operation by the summer.
Will copyright protection remain available for computer-generated works?
The UK is one of the few jurisdictions to provide for copyright protection in relation to computer-generated works (CGWs), though the boundaries of the relevant provisions in relation to AI-generated works are yet to be tested.
In its consultation, the Government indicated that it was minded to remove this protection, unless there was sufficient evidence of its positive effects. Most respondents who answered this question agreed, particularly given concerns about AI content undermining human creators. From the report, it does appear that protection for CGWs will ultimately be removed; for now, however, the Government will again monitor its use and impact (it is not immediately clear why the final decision on this question has been deferred).
This will not, however, leave a gaping hole in the legal framework: provided they meet the test for originality, AI-assisted authorial works remain protected, as are entrepreneurial works such as sound recordings, even if generated by AI (as these do not have an originality requirement). It is also worth noting that the treatment of computer-generated designs is an issue in the Government's consultation on the design law framework.
Will the Government take action against deepfakes?
Whilst the issue of deepfakes — in more neutral terminology, 'digital replicas' — was only introduced as something of an aside in the consultation, the Government is clearly alive to concerns about problematic uses of individuals' likenesses. It therefore wishes to understand the case for giving individuals, from public figures and creative professionals to the general public, better control over how their likeness, voice or personality can, and cannot, be digitally replicated, whilst also recognising that AI innovation can bring opportunities.
As such, it will launch a further consultation this summer on how to address deepfake harms whilst protecting legitimate innovation. This will include whether to create a new personality right, raising questions that extend beyond IP law into areas such as data protection and privacy. Individual performers will wish to engage closely with that consultation process.
What is happening in the wider international landscape?
The interplay between GenAI and IP rights is a complex issue, and one that lawmakers around the world are grappling with. In the US, the White House has just published its National Policy Framework for AI. Whilst the Administration states its view that training of AI models on copyrighted material does not violate copyright laws, it acknowledges that there are contrary views and so it supports the courts being able to resolve the issue.
In the EU, MEPs in the European Parliament have recently passed a non-binding resolution on copyright and generative AI, doubling down on extraterritorial application to any tool put on the EU market, regardless of where it was trained, and proposing that the Commission create a new licensing market, including sectoral voluntary collective licensing, with a central role for the EUIPO in managing licensing and opt-outs. All of this must also be seen in the wider context of the Copyright in the Digital Single Market Directive itself being up for review this summer, which may see the EU revisit its own TDM framework.
Conclusion
With the publication of the Government's report, we remain in something of a holding pattern, and immediate answers seem unlikely to emerge from litigation any time soon. The debate will continue, with the coming months bringing further consultations, stakeholder engagement, court decisions, and evolving commercial licensing arrangements. Shaping the future AI/IP environment will require active and early engagement with these processes.