Addressing bias in AI systems through the AI Act

Posted on 17 April 2023

Whilst the overall aim of the EU's proposed AI Act is to protect society from the risks of AI, its mandatory requirements underline the need to safeguard against bias (itself a cornerstone of protecting society) and require organisations to clearly evidence the steps they have taken to avoid bias and other associated risks at each stage of the AI lifecycle.

Bias and the EU's proposed AI Act

One of the biggest hurdles that Artificial Intelligence (AI) faces today is public trust and acceptance. People understandably struggle to trust the decisions and answers that AI systems provide because of the lack of transparency in their decision-making processes. For example, how can an employer using an AI system to interview and hire employees be sure that a decision to hire one applicant over another is free from bias without being able to see how or why that decision was reached?

Regulators in some jurisdictions are now turning their attention to resolving this issue. The European Commission's proposed AI Act, which is currently going through the European parliamentary approval process, takes a more prescriptive approach to the broad principle of the ethical development and use of AI. The Act addresses, in part, the transparency and explainability of AI systems (both regarded as requirements for ethical AI), and requires creators and users to show that they have taken steps to address any risks associated with their AI systems.

Whilst the focus of the AI Act is not solely on preventing bias, many of its requirements speak to this issue.

Bias and transparency

Bias, while not explicitly defined in the AI Act, occurs where the results or outputs of an AI system are disproportionately in favour of or against an idea, group or person (which may lead to unlawful discrimination). Bias can occur at any point in the AI lifecycle and may include:

  • Data input bias – the data itself is a source of bias, e.g. it may reflect historical human bias.
  • Algorithmic bias – this can emanate from the design, development, deployment or maintenance of the algorithmic system.
  • Biased outcomes – whether a biased output results in discrimination is often left to the user to assess, i.e. by applying the biased result in a real-world situation, such as a promotion decision (a simple outcome-disparity check is sketched below).
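
The AI Act does not prescribe a test for biased outcomes, but the idea can be made concrete. Below is a minimal sketch in Python (the data, group labels and function names are all hypothetical) of a demographic-parity style check, comparing the rate of favourable outcomes across groups and flagging large gaps for investigation:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Rate of favourable outcomes per group.

    `decisions` is an iterable of (group, outcome) pairs, where
    outcome is True for a favourable decision (e.g. shortlisted).
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            favourable[group] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest selection rate divided by the highest.

    A value well below 1.0 flags a disproportionate outcome that
    merits investigation; it does not by itself prove discrimination.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring outcomes: (applicant group, shortlisted?).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = selection_rates(outcomes)
print(rates)                   # {'group_a': 0.667, 'group_b': 0.333}
print(disparity_ratio(rates))  # 0.5 – a gap worth investigating
```

A low ratio does not establish discrimination; it simply identifies output that a human should review before it is applied in a real-world situation.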

The AI Act

The AI Act is a European law on AI and will apply to:

  • organisations providing or using AI systems in the EU; and
  • providers and users of AI systems located in countries outside the EU (including the UK), if their AI systems affect individuals in the EU, e.g. where their product is used in the EU.

The AI Act proposes a number of obligations that will apply to providers and users of 'high-risk' AI systems. High-risk AI systems will include AI technologies 'which affect material aspects of people's lives' (e.g. technologies related to education, employment, asylum and access to financial services). Compliance is also recommended, though not mandatory, for 'low-risk' AI systems.

Steps to take

The AI Act proposes a number of mandatory requirements for high-risk AI systems (for all other AI systems these are recommended), in relation to:

  • Risk management systems
  • Data and data governance
  • Technical documentation (which must adhere to specific requirements)
  • Record keeping
  • Human oversight
  • Conformity assessments
  • Accuracy, robustness and cyber security

Whilst the overall aim of these principles is to protect society from the risks of AI, these mandatory requirements speak loudly to safeguarding against bias: organisations must clearly evidence the steps they have taken to safeguard against bias and other associated risks at each stage of the AI lifecycle. Creators and users of AI systems should be able to explain how and why a decision was reached, with supporting evidence as far as possible.

Typically, this has not been the case for providers and users of AI, who have often been unable (or unwilling) to explain why a certain decision was reached when questioned on potentially biased output.

Explainability

The AI Act aims to increase the explainability of AI systems. Explainability is understood to encompass how organisations have come to use data, a model or an algorithm in a certain way that has resulted in certain outcomes, and why they have followed this method. Part of the explainability requirement is that the explanation should be communicated to the end user in plain language, in a manner that is meaningful and accessible, so that they can make informed decisions regarding the output of AI. It may include:

  • identification and analysis of the known and foreseeable risks with the AI system and the steps taken to eliminate or reduce these risks.
  • accounting for the relevant design choices in the AI system e.g. the quantity and suitability of data sets and examining possible biases in data sets. 
  • maintenance of quality management system policies, procedures and instructions, and automatically generated decision-making logs (one possible logging format is sketched after this list).
  • reviewing the performance and capabilities of a system throughout its lifecycle, and the accuracy of the output. 
  • human oversight to fully understand the AI system's capabilities and limitations, monitor its operation and remain aware of a possible tendency to automatically rely on the output produced by an AI system.
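
The AI Act does not mandate a particular format for decision-making logs. As one hypothetical illustration, such a log can be captured as an append-only record of each automated decision, noting the inputs considered, the model version and any human review, so the reasoning trail survives for later audit. The Python sketch below uses illustrative field names, not fields mandated by the Act:

```python
import json
import logging
from datetime import datetime, timezone

# A plain JSON-lines log file; one record per automated decision.
logging.basicConfig(filename="decision_log.jsonl",
                    level=logging.INFO, format="%(message)s")

def log_decision(model_version, inputs, output, operator=None):
    """Append an audit record for a single automated decision.

    The fields here are illustrative: timestamp, model version,
    the inputs considered, the output, and the human operator
    (if any) who reviewed it.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "reviewed_by": operator,
    }
    logging.info(json.dumps(record))

# Hypothetical shift-allocation decision (see the example below).
log_decision(
    model_version="shift-allocator-1.4.2",
    inputs={"employee_id": "E123", "availability": ["Mon", "Tue"]},
    output={"allocated_shift": "Mon 09:00-17:00"},
    operator="manager_042",
)
```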

Through implementing these and other steps, the intention is that humans are able to correctly interpret an AI system's output. For example, these principles could enable a manager to understand the logic behind an AI system's decision to allocate certain shifts in a retail store to certain employees. They would then be able to override the system where they have good reason to do so (although deciding not to follow an automated recommendation may, in itself, require explanation).

Traceable and transparent

Alongside explainability, traceability and transparency are core principles within the AI Act. They are not defined terms, but will relate to the keeping of records of the data sets, decisions, processes, policies and protocols that yield the AI outcome (separate from explainability in that the details may be more technical). This assists in identifying where and when an AI outcome might have gone awry. Records should cover the whole lifecycle of the AI system, as this aids auditability and accountability.

Transparency should involve clearly showing how the particular set of factors that determined an outcome evidences and supports the conclusion reached.

Benefits of compliance

Where a decision of an AI system is challenged as discriminatory, a user or creator of the system may need to evidence that the decision was reached on non-discriminatory grounds. The more thorough the record keeping throughout an AI system's lifecycle (taking the necessary steps to ensure the AI system is explainable and transparent), the more likely it is that a user or creator of the system can effectively assess what role a particular protected characteristic played in the AI system reaching its decision. In turn, users may be able to provide a defence to any allegation of discrimination.

Conclusion

While the focus of the AI Act is not explicitly to avoid bias and discrimination, it effectively addresses this point by focusing on explainability and transparency.

We, therefore, recommend that organisations providing or using AI systems ensure that the technical documentation requirements within the AI Act are met to avoid the risk of discrimination occurring in practice. This may be achieved through the use of factsheets (a collection of information about how an AI model was developed and deployed), impact assessments (describing the capabilities and limitations of the system) and conformity assessments.
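
Neither the AI Act nor this article prescribes a factsheet template. As an illustration only, a factsheet can start as a simple structured record kept alongside the model; the Python sketch below uses hypothetical fields covering the kind of development and deployment information described above:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelFactsheet:
    """Illustrative factsheet fields; not an official AI Act template."""
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list
    known_limitations: list
    bias_checks_performed: list
    human_oversight: str

sheet = ModelFactsheet(
    model_name="cv-screening-model",
    version="2.1.0",
    intended_use="Shortlisting applicants for interview; final "
                 "decisions rest with a human recruiter.",
    training_data_sources=["Anonymised applications, 2018-2022"],
    known_limitations=["Historical data may encode past hiring bias"],
    bias_checks_performed=["Selection-rate comparison across groups"],
    human_oversight="Recruiter reviews and may override every recommendation",
)

# Serialise the factsheet for inclusion in the technical documentation.
print(json.dumps(asdict(sheet), indent=2))
```

Keeping such a record under version control alongside the model itself makes it straightforward to show which documentation applied to which deployed version.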

Without steps being taken to ensure that the AI system's lifecycle is well documented and satisfies the core requirements of transparency and explainability, unfairness and bias may creep in unnoticed, and potentially expose creators and users alike to the risk of discrimination claims.

This article is the latest in Daniel Gray's series on AI in the workplace – see more.
