Regulators begin to set boundaries for Artificial Intelligence

Posted on 8 January 2020

2019 was quite a year for Artificial Intelligence. The European Commission, the UK's Information Commissioner's Office (the ICO) and the World Intellectual Property Organization (WIPO) all made significant strides in setting out how this technology should be regulated. We outline the key findings from each body below.

The Independent High-Level Expert Group on Artificial Intelligence – Ethics Guidelines for Trustworthy AI

The European Commission established the High-Level Expert Group on Artificial Intelligence to prepare a set of AI ethics guidelines and investment recommendations. These voluntary guidelines are addressed to all stakeholders designing, developing, deploying, implementing, using or affected by AI.

The Group concludes that there are three components of "Trustworthy AI", which should be met throughout an AI system's entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations, (2) it should be ethical, adhering to ethical principles and values, and (3) it should be robust, not causing unintentional harm and performing in a safe, secure and reliable manner.

In the development, deployment and use of AI systems, the Group recommends that AI practitioners:

  • develop, deploy and use AI systems in a way that adheres to the ethical principles of respect for human autonomy, prevention of harm, fairness and explicability, and always strive to adhere to these principles,
  • pay particular attention to vulnerable groups and to asymmetries of power or information, including ensuring that individuals are free from unfair bias, discrimination and stigmatisation and that individuals can contest and seek effective redress against any decisions made by the AI system, and
  • acknowledge that AI systems pose certain risks and may have negative impacts which may be difficult to anticipate, identify or measure (and include measures to mitigate these risks when appropriate and proportionate to the magnitude of the risk).

The development, deployment and use of AI systems should meet the following non-exhaustive requirements for Trustworthy AI:

  1. human agency and oversight (including giving users the knowledge to comprehend AI systems and to override a decision made by a system),
  2. technical robustness and safety (including resilience to attack, a fallback plan, accuracy and reproducibility),
  3. privacy and data governance (including respect for privacy and the integrity of data),
  4. transparency (including traceability, explainability and AI systems being identifiable as such),
  5. diversity, non-discrimination and fairness (including the avoidance of unfair bias, accessibility and stakeholder participation),
  6. environmental and societal wellbeing (including sustainability, social impact and democracy), and
  7. accountability (including auditability, minimisation and reporting of negative impacts, trade-offs and redress).

The Guidelines provide detailed explanations of each requirement, and suggest technical and non-technical methods to achieve each of them.

The Group also emphasises the importance of communicating, in a clear and proactive manner, the AI system's capabilities and limitations, and of facilitating the traceability and auditability of such systems. The Guidelines include a draft, non-exhaustive list of steps which AI practitioners should take to achieve Trustworthy AI; a revised list will be published in early 2020.

The Guidelines should be considered throughout the lifecycle of the AI system, with ethics as a core pillar for developing AI, "one that aims to benefit, empower and protect both individual human flourishing and the common good of society".

The Group has also published its second deliverable, the AI Policy and Investment Recommendations, and additional sectoral guidance complementing the Guidelines will be explored in the future.

The European Commission's Liability for Artificial Intelligence and other emerging digital technologies

AI systems are highly complex: they may be developed or modified through machine or deep learning, they are affected by data collected by the system or added from external sources, they are increasingly autonomous and unpredictable, and they are a hybrid of hardware, software and continuous software updates (not all of which may have been supplied by the original manufacturer). These factors make it very difficult to identify who might be liable for any damage caused by an AI system. Furthermore, the algorithm's decision-making process may not be readily explicable, making it hard to establish the chain of causation.

The European Commission's Expert Group on Liability and New Technologies has published a highly detailed report, examining how liability for artificial intelligence (and other emerging technologies) should be handled. Some of the key findings from the report are:

  • in certain circumstances, it may be appropriate to impose strict liability for damage caused by emerging technologies. Strict liability should lie with the person who is in control of the risk connected with the operation of the emerging technology and who benefits from its operation,
  • operators of AI technology should abide by duties to properly select, monitor and maintain the technology. Manufacturers should design, describe and market AI technology in a way which allows operators to comply with these duties, and manufacturers should adequately monitor the AI technology after putting it into circulation,
  • if harm is caused by autonomous technology, which is used in a similar way to a human auxiliary, the operator's liability should correspond to the existing vicarious liability regime. Once autonomous technology outperforms human auxiliaries, the benchmark for assessing vicarious liability will be determined by the performance of comparable available technology which the operator could be expected to use,
  • manufacturers of products or digital content incorporating emerging technology should be liable for damage caused by defects in their products, even after the products are placed on the market, so long as the manufacturer was still in control of updates or upgrades for the technology,
  • compulsory liability insurance is recommended for situations exposing third parties to increased risk of harm. Compensation funds may be used to protect victims who are entitled to compensation but whose claims cannot be satisfied,
  • emerging technologies should possess, where appropriate and proportionate, logging features to record information about the operation of the technology ("logging by design"), and failure to log or to provide reasonable access to logged data should result in a reversal of the burden of proof so as not to be detrimental to the victim (see the sketch after this list),
  • where two or more persons cooperate to provide different elements of the AI technology, they should be jointly and severally liable in relation to the victim,
  • destruction of the victim's data should be regarded as compensable damage, and
  • it is not necessary to give devices or technologies legal personality, as the harm which they may cause can, and should, be attributable to existing persons.
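
The "logging by design" recommendation above is, at heart, an engineering requirement. As a minimal sketch of what it might look like in practice, the Python example below writes every automated decision to an append-only audit log with enough context (inputs, model version, output) to reconstruct later what the system did; the logger name, file format and recorded fields are our own illustrative assumptions, not anything prescribed by the report.

```python
import json
import logging
from datetime import datetime, timezone
from uuid import uuid4

# Append-only audit log: one JSON line per automated decision.
audit_log = logging.getLogger("decision_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("decision_audit.jsonl"))

def log_decision(model_version: str, inputs: dict, output) -> str:
    """Record one automated decision; returns an ID a victim or court
    could later cite when seeking access to the logged data."""
    decision_id = str(uuid4())
    audit_log.info(json.dumps({
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,  # in practice, minimised or pseudonymised
        "output": output,
    }))
    return decision_id

# Hypothetical usage: no decision is issued without a log entry.
log_decision("credit-model-1.3", {"income": 42000}, "approved")
```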

At this stage, the report does not provide specific proposals as to how these recommendations should be implemented into EU or national laws. Whilst the report makes the findings above, it notes that "it is impossible to come up with a single solution suitable for the entire spectrum of risks", given the diversity of emerging technologies. Multiple liability regimes are therefore required. The report also notes that any potential liability regimes must consider what impact their introduction may have on the advancement of emerging technologies.

The ICO's AI Auditing Framework

From March 2019 until October 2019, the ICO invited organisations to comment on the development of an auditing framework for AI. The ICO aims to publish its formal consultation paper by January 2020, with the final AI auditing framework in place by spring 2020.

The ICO's audit framework will have two key components: (i) governance and accountability and (ii) AI-specific risk areas. The governance and accountability component will cover the measures an organisation must have in place to comply with data protection requirements. The ICO states that it "will expect the level of governance and risk management capabilities to be commensurate and proportional to their AI data protection risks."

The second component will focus on the following seven data protection risks:

  1. Fairness and transparency – including issues of bias, discrimination, the interpretability of AI applications and the explainability of AI decisions to data subjects. Organisations should be aware of the two main reasons why AI systems may lead to discrimination: (i) inaccurate and imbalanced training datasets and (ii) training data reflecting past discrimination (see the first sketch after this list). Appropriate safeguards, technical measures and regular testing of anti-discrimination measures should be implemented during the design and build phase of the AI system to minimise the risk of discrimination. The ICO and the Alan Turing Institute have published preliminary guidance to help organisations explain to individuals the processes, services and decisions delivered or assisted by AI.
  2. Accuracy – accuracy is especially important because an AI system that uses or generates inaccurate personal data may treat a data subject incorrectly or unjustly. Organisations should therefore adopt appropriate accuracy measures when building and deploying AI systems (e.g. adopting common terminology to discuss accuracy performance measures) and consider any potential accuracy risks as part of a Data Protection Impact Assessment (DPIA). Accuracy measures should be tested throughout the AI lifecycle.
  3. Fully automated decision-making and models – Article 22 GDPR requires organisations to implement suitable safeguards to protect individuals' rights, freedoms and legitimate interests when processing personal data to make solely automated decisions that have a legal or similarly significant impact on individuals. The ICO's guidance states that the extent to which humans review and amend an AI system's output, before a final decision is made, is the key factor in determining whether a decision is "solely" automated and so whether Article 22 GDPR applies (see the second sketch after this list). Where Article 22 does apply, potential safeguards include undertaking a DPIA and providing sufficient information about the AI system to data subjects so they can decide whether they would like human intervention.
  4. Security and cyber – including testing and verification challenges, outsourcing and re-identification risks. Given the complexity of data processing by AI, the high volume of personal data involved and AI's interaction with other technologies, the ICO advises organisations to review their security risks and risk management practices to ensure personal data is secure in an AI context. Technical teams should record and document all movement and storage of personal data to help organisations apply appropriate security risk controls and monitor their effectiveness. The ICO is currently developing further security guidance, which will be published in due course.
  5. Trade-offs – organisations should identify and assess trade-offs (e.g. more data can make AI systems more accurate, but collecting more personal information erodes privacy) and strike an appropriate balance between competing requirements. The right balance will depend on the specific context, the environment in which the business operates and the impact on data subjects. However, organisations will need to be accountable for their decisions and should document their considerations and choices in a DPIA. The ICO's guidance provides an overview of five notable trade-offs. Finally, organisations should be prepared to stop the deployment of an AI system if an appropriate trade-off cannot be achieved.
  6. Data minimisation and purpose limitation – AI systems require large amounts of data and often create multiple copies of personal data that are stored in different locations and shared with multiple persons. Any personal data used should be adequate, relevant and limited to what is necessary for the purpose for which it is processed. Once personal data files are no longer needed, they should be deleted or subjected to de-identification techniques. The ICO's guidance includes examples of privacy-preserving methods to help organisations meet the data minimisation principle (see the third sketch after this list).
  7. Exercise of data subject rights – under GDPR, individuals have a number of rights relating to their personal data. Given the large volumes of personal data used by AI systems, organisations may find it challenging to identify individuals' personal data throughout an AI system's lifecycle.
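
To make the first of these risk areas concrete, the sketch below shows one simple form of the regular testing mentioned in item 1: comparing favourable-outcome rates across groups (a demographic-parity check). This is a minimal Python illustration; the field names, example data and any alerting threshold are our own assumptions, and the ICO's guidance does not prescribe a particular fairness metric.

```python
from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="approved"):
    """Favourable-outcome rate per group, computed over training labels or
    live decisions. A large gap between groups is a prompt for further
    investigation, not in itself proof of unlawful discrimination."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        favourable[record[group_key]] += int(record[outcome_key])
    return {group: favourable[group] / totals[group] for group in totals}

# Hypothetical decision records.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap = {gap:.2f}")  # flag for human review if the gap is large
```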
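
The second sketch illustrates item 3: a human-in-the-loop gate in which a person reviews, and can amend, the model's output before the final decision is made. The routing rule, threshold and names are our own assumptions; note too that the ICO expects human review to be meaningful, so a purely token sign-off would not take a decision outside Article 22.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    solely_automated: bool  # if True, Article 22 GDPR may apply

def decide(application, model, reviewer=None, confidence_threshold=0.9):
    """Route low-confidence model outputs to a human reviewer who can
    amend the suggested outcome before the final decision is issued."""
    outcome, confidence = model(application)
    if reviewer is not None and confidence < confidence_threshold:
        return Decision(reviewer(application, outcome), solely_automated=False)
    return Decision(outcome, solely_automated=True)

# Hypothetical model and reviewer.
model = lambda application: ("approve", 0.72)
reviewer = lambda application, suggested: "refer"  # the human amends the output
print(decide({"income": 42000}, model, reviewer))
```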
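
Finally, as an illustration of item 6, the third sketch applies one common privacy-preserving technique, keyed pseudonymisation, when a record is copied out of a production system. Again, this is our own minimal example rather than a method drawn from the ICO's guidance; the fields and key handling are assumptions.

```python
import hashlib
import hmac

# Hypothetical key; in practice it would sit in a key management system,
# stored separately from the pseudonymised data.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256). Under
    GDPR, pseudonymised data remains personal data while the key exists,
    so this reduces risk rather than removing it."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age": 34, "outcome": "approved"}
minimised = {
    "subject_ref": pseudonymise(record["email"]),  # keyed hash, not the email
    "age_band": "30-39",                           # coarsened, not the exact age
    "outcome": record["outcome"],
}
print(minimised)
```
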
WIPO Begins Public Consultation Process on Artificial Intelligence and Intellectual Property Policy

WIPO has invited comments on its draft issues paper on the impact of AI on IP policy, with a deadline of 14 February 2020. A revised paper will be published later in 2020. As previously noted, AI presents numerous challenges to Intellectual Property rights.

The issues paper outlines the following issues in respect of AI and IP rights:

  • patents – the report notes that AI is increasingly generating inventions and queries whether inventions generated autonomously by AI are patentable, whether AI could (and should) be named as the inventor, and who should own AI-generated inventions. The report also asks how AI-generated inventions should be disclosed (e.g. is disclosure of the AI algorithm sufficient?),
  • copyright – copyright is intimately associated with protecting human creativity. Now that AI can generate literary, musical and artistic works autonomously, the copyright regime must somehow deal with machine-generated works. For example: where should copyright ownership reside? Should AI be given legal personality so that copyright can vest in the AI system? Should a separate copyright regime (e.g. with a reduced term of protection) apply to AI-generated works? The paper also notes that AI systems learn from datasets and queries whether this could (and should) be an infringing act,
  • data – given the importance of data to AI systems, the report queries whether existing IP rights in data (e.g. database rights, confidentiality, and authorship and inventorship) are sufficient or whether IP regimes should go further to protect data, and
  • designs – in the case of AI-generated designs, the questions and considerations are similar to those that arise with respect to AI-generated inventions and AI-generated creative works.

The report also notes that IP ownership exists to incentivise human investment and creative work, and queries how AI ownership of IP rights would affect the human generation of works. The paper also includes an overview of how AI will affect accountability for administrative decisions concerning IP rights, as well as the technology gap between member states and how to build capacity to contain or reduce that gap.
