The UKIPO has published the Government's response following its 2020 call for views on the role of intellectual property (IP) in the development of artificial intelligence (AI), and the interplay between the two. Whilst views were mixed in a number of areas, the prevailing view was that AI itself should not own IP rights and that, in many areas, the present IP framework is able to meet the challenges presented by AI. However, given the important role IP plays in incentivising AI innovation, a number of key issues remain to be considered further, particularly in relation to patents and copyright. The Government has committed to the following consultations, research and engagement:
- There will be a consultation later in 2021 on a range of possible policy options, including legislative change, in relation to protection of AI generated inventions (where those inventions would not otherwise meet inventorship criteria).
- The UKIPO will publish enhanced guidelines on patent exclusion practice for AI inventions, engage with stakeholders, and take into account practice at the European Patent Office (EPO).
- The UKIPO will commission an economic study to enhance the understanding of the role the IP framework can play in incentivising investment in AI alongside other factors, involving international evidence and engagement with other Government departments.
- The UKIPO will consider the feasibility, costs and benefits of a deposit system for data used to train AI systems disclosed within patent applications.
- There will be a review of the ways in which copyright owners license their works for use with AI, together with a consultation on measures to make this easier, and to support innovation and research.
- There will be a consultation on whether copyright in original works should be limited to human creations, and on whether the existing protection for computer-generated works should be replaced with a related right, which would have a scope and duration reflecting investment in such works. This will also include consideration of whether action should be taken to reduce confusion between human and AI works, and the risk of false attribution.
- A range of collaborations and engagements will be adopted with other states, organisations, partners and universities, as part of a broader strategy on IP and AI.
- A report on research into AI and IP enforcement and the opportunities/challenges will be published in Autumn 2021.
- The UKIPO will also continue to identify opportunities to integrate AI into its operations, such as its trade marks Pre-Apply service, which allows users to analyse their chances of successfully registering a trade mark.
Whilst IP rights such as trade marks, designs and trade secrets are not seen as presenting a pressing need for policy change, the Government will continue to monitor how those rights respond to the challenges of AI in practice. There are undoubted complexities in the relationship between trade marks and AI, for example, but the steer in the responses to the call for views was that AI is not yet developed enough to impact core trade mark concepts and so the existing legislation remains fit for purpose.
More detail on the Government's response to the Call for Views is set out below.
The Call for Views considered the role of patents in promoting innovation in AI, and the use of AI itself in the innovation process. The questions touched on themes such as:
AI as an inventor?
Whilst there was a consensus that AI should not own IP rights, there were a range of views on the crucial issue of inventorship, for example the extent to which current AI systems can devise inventions without human involvement. Sceptics took the view that inventorship criteria should not change, given that 'AI generated' inventions are already patentable. However, other respondents argued that the current approach to inventorship criteria potentially has a detrimental impact on innovation, including on transparency in the innovation process: such inventions might go unpublished, creating disincentives to invest in the technology and in research and development. In the light of its aim to ensure IP systems can support and incentivise AI generated inventions, and to ensure transparency, the Government will consult later this year on a range of possible policy changes for protecting AI generated inventions which would otherwise not meet inventorship criteria.
We reported last year on the Patents Court decision in Thaler v Comptroller-General, where the Court held that, under the current law, AI systems (here, the DABUS AI machine) cannot be acknowledged as an inventor. A similar outcome has been reached before the EPO and USPTO. An appeal in the Thaler case is listed for hearing in the Court of Appeal in July 2021, and so it will be interesting to see how the Court deals with the issues raised, given the backdrop of the Government's planned consultation.
Conditions for grant of an AI patent
Although the conditions for granting AI patents in the UK were seen by respondents as generally fit for purpose (including in relation to disclosure and inventive step), a significant area of concern focussed on the rules relating to exclusions from patentability, seen by many as a barrier to protection for 'AI inventions', thereby decreasing incentives to innovate. This is particularly the case for developments in the AI system itself ("core AI inventions"), as opposed to AI generated inventions, where a number of patents have already been granted in the UK. The EPO's approach is seen as more permissive, and therefore as giving a better outcome for AI patent applications; in the UK, by contrast, it is much harder to predict the outcome of an AI patent application.
However, others felt that the UK's approach provided a good balance, and that a more liberal grant of patents for AI inventions would risk destabilising this balance. It may be that this issue requires only a change in UKIPO practice, as opposed to a change to UK law. The UKIPO will therefore publish enhanced guidelines on its patent exclusion practice, engage with stakeholders, and identify any differences in outcome from the EPO's approach.
AI presents various issues in relation to patent enforcement – who is liable when AI infringes, and how would infringement by AI be established, and where? Most respondents agreed that a "legal person" should be liable for infringement. As the courts have the appropriate flexibility to deal with infringement cases, the Government does not intend to intervene in this area.
Copyright overlaps with AI technology in two ways: first, copyright may subsist in the materials used to "train" AI algorithms; secondly, there is the question of whether copyright subsists in AI generated works.
The use of copyright works and data by AI systems
AI algorithms are trained and developed using data. Given the complexity of the underlying technology, it is difficult for users of AI to predict how the technology will use and manipulate the data sets. This raises the question of whether the AI technology infringes any copyright in the data and other materials provided to the software. Overall, copyright owners felt the law adequately dealt with infringement arising from the use of copyright works to train AI technology. However, some respondents sought greater clarity over who is ultimately liable if AI infringes copyright, given that AI itself is not a legal person. Respondents suggested that greater awareness and education is needed to identify who is liable for any copyright infringement by the software, and that the UK's enforcement regime should allow copyright owners to act against large-scale infringement.
Separately, respondents raised the issue of introducing a new licensing regime for data used to train the AI algorithm, pursuant to which AI users would need to pay to access text and data sets. This would remunerate copyright owners whose works are used to train the technology. This was resisted by those who create and use AI. Such respondents argued that a licensing regime would limit any activities to the scope of the licence and unfairly prejudice start-ups and SMEs who may not be able to afford the associated costs.
The Government has indicated that it will further investigate a copyright licensing framework for the use of works to train AI algorithms.
Protecting works generated by AI
The position under English law, as set out in the Copyright, Designs and Patents Act 1988 (CDPA), is that copyright in an original artistic work is owned by the author (unless assigned in writing). However, the CDPA did not, of course, foresee the creations of AI. Authorship (and so first ownership) of the copyright in computer-generated content depends on the person who undertakes the arrangements necessary for the creation of the work. In the context of AI devices, and assuming the output is "original", it is not clear who undertakes these necessary arrangements, and so it is not clear who is the author and first owner of any copyright in the work produced.
Many respondents believed that AI generated works should be eligible for copyright protection, with the copyright residing with the owner or the user of the AI system (and not the AI technology itself). These respondents believed AI generated works are sufficiently covered by the existing provisions in the CDPA.
However, other respondents questioned how the requirement of originality is met when AI technology is used, particularly since current law links originality to human creativity. Many stressed the importance of putting human creators first, above machine use, arguing that works created solely by AI should not benefit from copyright protection or should be the subject of a separate category of right. The primary reason for this is that intellectual property rights are used to protect and incentivise creators, something which machines do not require.
The Government has concluded that the current legal framework is unclear and that, where a work is created by a machine free of human input, the threshold for originality is unclear. The Government has further concluded that any framework should not undermine "copyright’s central role in rewarding artistic expression and talent".
There will be a consultation on whether to limit copyright in original works to human creations (including AI-assisted creations) and whether to replace the existing protection for computer-generated works with a related right. The Government will also consider whether action should be taken to reduce confusion between human and AI works, and the risk of false attribution, i.e. humans falsely attributing AI-generated works to themselves in order to claim protection.
The legal concepts, including the 'average consumer'
Traditional legal concepts within trade mark law are based on the premise of human interaction with a brand in making a decision on a purchase. The legal concept of 'average consumer' is the instrument traditionally used to establish whether a likelihood of confusion between the signs exists.
The responses acknowledged that AI systems could affect consumers' decision on purchases. Some respondents thought AI could result in consumers contributing less to their purchase decisions, for example through e-commerce, automatic restocking and 'smart' home devices. However, AI was considered unlikely to completely replace humans in their purchasing experience in the near future, and some felt that it may never do so.
Respondents agreed that AI remains a purchase assistance tool and is not sufficiently developed to challenge the concept of the 'average consumer'. Commentary on other legal concepts, such as the tests of 'imperfect recollection' and 'likelihood of confusion', and their relevance to AI was also considered speculative, although some aspects of trade mark law might need readjusting if AI gains greater involvement in consumers' decisions.
The Government has therefore concluded that, whilst AI is not yet sufficiently developed to warrant a reconsideration of core trade mark law concepts, the impact of AI should be monitored and assessed further as the technology develops in the medium to long term.
Infringement and liability
A further theme explored whether AI actions can amount to trade mark infringement and, if so, whether they could be considered 'use in the course of trade'.
The position under the Trade Marks Act 1994 (TMA) is that it is a 'person' who commits an infringing act. Respondents agreed that AI systems themselves cannot be liable for trade mark infringement as they are not currently assigned a legal personality. Respondents agreed that AI could constitute a tool or medium for infringement, with the debate revolving around the responsible entity behind the infringing act. Suggestions included attaching liability to the AI operator, the data provider, or brand owners, as well as considering contractual provisions between parties, the concept of 'contributory infringement', and the possibility of AI systems operating without apparent control.
The Government accepts that, whilst AI may be capable of infringing acts, it should be considered a tool operating under human control and that liability lies with a legal person. Developments in AI technology and the impact on trade mark law will be kept under review and could, of course, be considered by the courts on a case by case basis.
Ownership / authorship
Most respondents agreed that AI cannot own a design as it does not have a legal personality, and that AI should not be able to have such rights. The Registered Designs Act 1949 (RDA) provides that, where a design is generated by a computer, the person making the arrangements for the creation of that design will be considered the author, and some respondents suggested these provisions adequately deal with AI generated designs. The approach taken here will mirror that under copyright and so there may be further developments as part of the proposed further consultation on copyright.
The Government agrees that AI systems should not have authorship or ownership rights over a design.
As with trade marks, most respondents agreed that, whilst AI may be capable of carrying out some infringing acts, it is not a legal person and liability for infringement should attach to the operators of the AI.
Again, as for trade mark concepts, respondents generally agreed that AI should not impact on the 'informed user' test, which is a broad and flexible one.
Again, whilst the use of AI in generating designs is clearly a developing area, the Government concludes that no legislative changes are necessary at this time. Issues for future consideration include re-assessing the suitability of the legal framework and legal tests as AI technologies develop, ownership of AI generated designs, and liability for infringement.
Trade secret protection is clearly of considerable importance for the AI sector, given its flexibility and simplicity. For AI innovations, trade secret protection may often be preferred over patents – to avoid the disclosure requirement and given the difficulties in reverse-engineering such innovations. The responses to the call for views concluded that no changes are required to trade secret protection in relation to AI, and so the Government does not plan to amend such laws. However, given the importance of trade secret protection for AI and other similar technologies, and the interplay with patents, it will continue to monitor developments.
AI does not function in a vacuum; it requires a "holistic approach" to the legal requirements, drawing upon the expertise of those practising in different legal specialisms and jurisdictions. Whilst the Government's call for views did not touch on ethical oversight in any detail, it is important to understand that AI sits within an ecosystem of many parts, and so we should consider not just the law we apply to AI but also the ethical principles and standards which frame AI. In adopting such an approach, we will truly be able to see the potential harms of AI (ethics), which will enable us to more easily mitigate and protect against such harm (law).
Clearly, the Government recognises that there is a strong economic dimension to the patent system's approach to AI invention, as demonstrated by its commissioning of an economic study into this aspect. One point not addressed in the Government's response is the impact of Brexit, and whether the UK may seek to establish a competitive advantage in terms of incentivising AI. As EU Member States get ready for the implementation deadline of 7 June 2021 for the Digital Single Market Copyright Directive, it will be interesting to see how the UK develops its approach in relation to issues such as text and data mining exceptions, one of the areas identified for further consultation.