European Commission announces need for AI law

Posted on 27 February 2020

With the publication of its White Paper on Artificial Intelligence (AI), the European Commission is ambitiously making the case for “a new legislation specifically on AI … to make the EU legal framework fit for the current and anticipated technological and commercial developments”. The White Paper sits alongside the Commission's strategy for data and its overarching digital strategy. Whilst it repeats previous guidance issued by the Commission, as summarised here, this is the first time that the Commission has suggested a new law to regulate the technology. The Commission suggests that the new AI law should apply to “high risk” AI applications.

The White Paper and the strategy documents reveal the Commission's aspirations and plans for increasing investment in the technology, boosting research and encouraging collaboration between the public and private sectors. The Commission acknowledges that "investment in research and innovation in Europe is still a fraction of the public and private investment in other regions of the world" and that "Europe currently is in a weaker position in consumer applications and on online platforms, which results in a competitive disadvantage in data access". As the Commission's digital ambitions ramp up, and as the UK approaches the prospect of direct disengagement from EU laws following Brexit, it will be necessary to observe how closely domestic law and policy continue to mirror those of the EU.

Definition of “high risk” AI

According to the White Paper, an AI application will be “high risk” if:

  1. it is employed in a sector where, given the nature of the activities typically undertaken, “significant risks can be expected to occur”. The Commission says the regulatory framework should specifically and exhaustively list the applicable sectors (likely sectors include healthcare, energy and transport), with the possibility of amendment where necessary; and
  2. the AI technology is "used in such a manner that significant risks are likely to arise". The level of risk could be based on the impact on the affected parties, with significant risks including injury, death, or effects that cannot be reasonably avoided.

In exceptional circumstances, the use of AI may be considered high risk irrespective of the sector concerned, given the risks at stake; the use of AI in recruitment processes is one example.

Legal requirements for “high risk” AI

In terms of assignment of liability, the Commission believes that each obligation should be addressed to “the actor who is best placed to address any potential risks”.

Prior conformity assessment should verify and ensure that the mandatory requirements are complied with, and may include certifications, and checks of the AI algorithms and data sets used in the development phase. Such a process is unlikely to be quick and so will need to be proportionate, to avoid stifling investment and innovation in the space. As AI learns and develops over time, repeated assessments may be required, and national authorities will be encouraged to monitor and enforce compliance.

Ethical requirements for “high risk” AI

High risk AI would also need to be subject to certain of the proposed ethical guidelines for AI, published by the High-Level Expert Group on AI, summarised here. In fact, the White Paper suggests transforming this assessment list of ethical guidelines into a “curriculum” for AI developers.

The requirements for “high risk” AI are:

  1. training data – data sets used to train AI will need to meet requirements aimed at providing reasonable assurance as to the quality of the data used, and at ensuring that EU data rules and fundamental rights (including around privacy and discrimination) are respected;
  2. record keeping – records must be kept in relation to the programming of the AI algorithm and the data used to train high risk AI systems, and the data sets themselves may potentially need to be retained. This will allow potentially problematic actions by AI to be tracked and verified;
  3. information provision – transparency is required to promote the responsible use of AI and to build trust. Adequate information should be provided as to the AI system's capabilities and limitations, how the system is expected to function, and the expected level of accuracy. Citizens should be told when they are interacting with an AI system;
  4. robustness and accuracy – high risk AI applications must be technically robust and accurate, ensure that their outcomes are reproducible, adequately deal with errors or inconsistencies, and be resilient to attacks;
  5. human oversight – the type and degree of human oversight may vary depending on the use case and the effects on the citizens and legal entities concerned. This will be without prejudice to the legal rights provided by GDPR when AI processes personal data. For example, the AI system's decision may need to be validated by a human, or human intervention in real time may be required; and
  6. remote biometric identification – whilst GDPR offers some protection in relation to the processing of biometric data, the Commission acknowledges public concerns over such use (e.g. facial recognition technology) and will launch “a broad European debate on the specific circumstances, if any, which might justify such use, and on common safeguards”.

Requirements for “low risk” AI

“Low risk” AI will not be subject to the mandatory requirements discussed above. However, the Commission is considering a voluntary labelling scheme, under which interested operators and developers could choose to make themselves subject to all, or a specific set, of the requirements. This would signal that their AI products and services are trustworthy.

Whilst participation in the scheme would be voluntary, once a decision has been made to use the label, the requirements would be binding.

Changes to product liability rules

Alongside the White Paper, the Commission published a report on the safety and liability implications of AI, the Internet of Things and robotics. This report focuses on the current product liability legislative framework and makes a number of suggestions for making that framework fit for purpose for these technologies.

These include:

  1. ensuring that risks to users' mental health that could derive from AI applications are explicitly covered by the concept of product safety;
  2. clarifying the definition and scope of a "product" under the Product Liability Directive;
  3. alleviating the burden of proof required by national liability rules for damage caused by the operation of AI applications, through an appropriate EU initiative; and
  4. considering whether there should be strict liability for AI applications with a specific risk profile, as exists in national laws for similar risks to which the public is exposed (e.g. the operation of motor vehicles, airplanes or nuclear power plants).

Application of new AI law

The Commission is plainly determined to provide a united European approach to AI, to avoid fragmentation in the market and regulatory arbitrage: the White Paper states that all requirements should be applicable to relevant economic operators providing AI products or services in the EU, regardless of whether they are established in the EU. This extra-territorial ambition reflects that already found in GDPR, and is a bold attempt to extend Europe's influence beyond its borders.

It is likely, therefore, that whatever the outcome of the trade negotiations between the UK and the EU, any UK-based developer or user of AI whose technology will affect European citizens will need to meet the EU's requirements for the technology.

Comment

The Commission's underlying ambition appears to be to ensure that EU businesses are given support and opportunity to become global leaders for these emerging technologies, but to do so in a way which respects the fundamental rights of EU citizens.

Given the current rate of progress, and the inherent opacity that is a marked feature of AI, the Commission will have to move incredibly quickly to shore up the EU's global position in a way that achieves this balance. In reality, these rules and their application are likely set for long-term negotiation and lobbying. In the meantime, the Commission may have to rely on other tools and measures to promote European businesses and protect them from their US and Chinese rivals. Investment is an obvious measure, but we are also likely to see far more aggressive enforcement by the Commission of EU competition rules.

The Commission is also assessing whether national data protection authorities have the funding and resources required to adequately enforce and monitor GDPR compliance. Again, this could be a signal of the Commission's wider digital strategy and a mechanism for achieving its balancing act.

Interested parties are invited to comment on the White Paper and accompanying reports. The deadline for submissions is 19 May 2020.
