
Has analysis of AI systems' potentially discriminatory impact been too narrow?

Posted on 30 June 2023

In the context of AI systems and discrimination, the focus has primarily been on flawed input data potentially creating an indirect discrimination risk. In this article we explore whether this focus has been too narrow, and whether the risk of direct discrimination by AI systems needs more attention.

Indirect vs direct discrimination

The direct discrimination test in the Equality Act 2010 focuses on the reason for the harm: is there less favourable treatment because of an individual's protected characteristic (their sex, race, age, etc.)? By contrast, indirect discrimination focuses on the effects of an apparently neutral provision, criterion or practice ("PCP"), and whether this negatively affects a group sharing the same protected characteristic (and the individual claimant within that group) in a way that isn't objectively justified.

Up to now, AI systems have typically been viewed as applying neutral PCPs. In other words, it's been assumed that they apply the same rules to everybody, with the focus on whether automated decisions using such systems have a disproportionate impact on persons with a given protected characteristic. As such, discrimination risk has typically been assessed through the lens of indirect discrimination.

However, it is also possible that the reason an AI system makes a particular decision (whether deliberately or unwittingly) is because it has been trained to take into account an employee's protected characteristic(s). If so, less favourable treatment by the AI system would be direct discrimination. For example, a facial recognition tool could be directly discriminating on grounds of race if it fails to correctly identify Black women because the data set used to train it contained fewer images of Black women.

Objective justification of AI systems

The difference between direct and indirect discrimination is more than just an interesting legal nuance. It has critical implications for creators and users of AI systems. Direct discrimination generally can't be objectively justified (save in cases of direct age discrimination). By contrast, there is an objective justification defence to indirect discrimination if it can be shown that the deployment of the AI system was a proportionate means of achieving a legitimate aim. The proportionality element involves a balancing exercise to decide whether the discriminatory impact is outweighed by the needs of the user of the system. This is not typically easy to establish, although we can envisage situations where the defence might be argued. For example, a stretched police force might believe it has a good argument on proportionality if using an AI system with high predictive accuracy reduces crime rates to such an extent that it allows the force to deploy its scarce resources far more effectively.

As mentioned, the objective justification defence is not available in most direct discrimination cases. Instead, the creator/user has to show that the reason for the AI system's decision is not because of the individual's protected characteristic. However, without a clear understanding of how an algorithm works, it will be very difficult to show this (discussed further below).

Proxy discrimination

Case law has indicated that it will be direct discrimination if a decision maker uses a criterion which is inherently discriminatory (known as a "proxy") to reach a decision. In this situation the protected characteristic does not have to directly feature in the decision maker's mental process. A classic example from case law is James v Eastleigh Borough Council, where the Council allowed pensioners free entry to a public swimming pool. At the time, the UK state pension age was 65 for men and 60 for women. Men aged between 60 and 65 had to pay to go swimming, and so were directly discriminated against compared to women of the same age. Making a decision based on whether someone was a pensioner was therefore a proxy for sex and age (although at the time of the James litigation, only sex was a protected characteristic).

It therefore follows that if, for example, a bank's automated mortgage application assessment identifies that a person's post code is correlated with a low likelihood of repayment, and the people living in a particular post code are predominantly of a particular race or ethnic group, the post code may operate as a proxy for race. Disproportionately rejecting applications from that post code could then carry a risk of direct discrimination.
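To make the proxy point concrete, here is a minimal sketch in Python (using pandas, with entirely invented data and column names) of the kind of comparison a lender could run: rejection rates broken down by post code, and then by ethnic group. Where the two breakdowns largely mirror each other, the apparently neutral post code criterion may in substance be operating as a proxy.

```python
import pandas as pd

# Hypothetical loan-decision log: column names and values are
# illustrative only, not taken from any real system.
decisions = pd.DataFrame({
    "postcode_area": ["AB1", "AB1", "AB1", "AB1", "CD2", "CD2", "CD2", "CD2"],
    "ethnic_group":  ["X",   "X",   "X",   "Y",   "Y",   "Y",   "Y",   "X"],
    "rejected":      [1,     1,     1,     0,     0,     0,     1,     0],
})

# Rejection rate by the apparently neutral criterion (post code)...
print(decisions.groupby("postcode_area")["rejected"].mean())

# ...and by the protected characteristic it may be standing in for.
print(decisions.groupby("ethnic_group")["rejected"].mean())

# If the post code tracks ethnic group closely, the two breakdowns will
# look very similar; that overlap is the essence of the proxy argument.
```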

Complexities of AI discrimination

Algorithms are substantially better than humans at amassing data and analysing it for correlations. An AI system may, for example, learn a proxy for sex, made up of a number of variables that capture a correlation, rather than just one inherently discriminatory criterion.

Although it has yet to be considered by the courts (as far as we know), it is arguable that where an algorithm identifies a set of factors which, when applied together, constitute a perfect proxy for a protected characteristic (even if unforeseeable to any human), this could amount to direct discrimination.
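As an illustration of how such a composite proxy might be detected in practice, the following sketch (Python with scikit-learn; the data is synthetic and the feature meanings are made up) asks a simple question: how accurately can the protected characteristic be reconstructed from the model's supposedly neutral inputs? This is one assumed testing approach, not a description of any particular system or legal standard.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 500 candidates and five apparently neutral
# features (imagine sports played, employment gaps, school type, etc.).
# No single feature is a protected characteristic on its own.
n = 500
sex = rng.integers(0, 2, size=n)          # the hidden protected characteristic
features = rng.normal(size=(n, 5))
features[:, 0] += 0.8 * sex               # each feature is only weakly
features[:, 1] -= 0.6 * sex               # correlated with it...
features[:, 2] += 0.5 * sex

# ...but taken together they may largely reconstruct it. A simple proxy
# test: how well can the protected characteristic be predicted from the
# system's inputs alone? An AUC close to 1.0 suggests the feature set is,
# in combination, acting as a proxy for sex.
auc = cross_val_score(LogisticRegression(max_iter=1000), features, sex,
                      cv=5, scoring="roc_auc").mean()
print(f"Sex recoverable from 'neutral' features, mean AUC = {auc:.2f}")
```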

In these instances, an AI system would effectively operate an automated version of unconscious human bias. For example, a system may not object to anything specific in an employment CV, but multiple indicators (which may include proxies for protected characteristics e.g. the types of sport that the person plays) may cumulatively affect its overall impression. Unconscious bias can be direct discrimination when it features in human decision making, and it is worth considering whether the legal position should be any different because an AI system is used.

Burden of proof

The burden of proof in direct discrimination claims falls first on the employee, who must show a prima facie case of discrimination. That is, in the absence of any other explanation, a court could decide on the basis of the employee's evidence that the employer committed unlawful discrimination. If the employee does this, the burden shifts to the employer to show, on the balance of probabilities, that discrimination has not occurred.

As an AI system may potentially combine a pool of factors to reach a decision, it could well be difficult for an employer to evidence that the decision was reached on non-discriminatory grounds. Furthermore, the black box issues common to AI may make it near impossible for employers to discharge the burden of proof.

Sufficiently close connection?

With improved traceability it may be possible for an AI system to provide a quantifiable explanation of the role played in a decision by a particular proxy or protected characteristic (potentially more so than in human decision making, for example in cases of unconscious bias).
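As a rough illustration of what such a quantifiable explanation might look like, the sketch below (Python with scikit-learn; the toy hiring model, the feature names and the idea that "sport_score" stands in for sex are all hypothetical) uses permutation importance to put a number on the role each input played in the model's decisions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

# Hypothetical hiring data: three features, one of which ("sport_score")
# is constructed here to track the candidate's sex.
n = 400
sex = rng.integers(0, 2, size=n)
experience = rng.normal(size=n)
test_score = rng.normal(size=n)
sport_score = sex + 0.3 * rng.normal(size=n)
X = np.column_stack([experience, test_score, sport_score])
hired = (experience + test_score - sport_score > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, hired)

# Permutation importance gives a crude, quantifiable answer to the
# question "what role did this input play in the outcome?", which is
# the kind of traceability contemplated above.
result = permutation_importance(model, X, hired, n_repeats=20, random_state=1)
for name, score in zip(["experience", "test_score", "sport_score"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```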

This raises the question of how close the connection between the protected characteristic and the outcome must be. When training an AI system on real world data, a protected characteristic might have played some upstream role, although how far upstream it must be before it loses its causal relationship to the treatment is unclear.

Conclusion

It is important for both creators and users of AI systems to be aware of the potential for an AI system's decision making to be direct discrimination. Prior to investing in or using an AI system, a creator/user would therefore be well-advised to take appropriate steps to ensure that the AI system has been properly assessed for bias. This may take the form of a governance, empirical or technical audit, which can also help to ensure that the AI system complies with the AI Act.
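By way of illustration, one common element of an empirical audit is a comparison of favourable-outcome rates across groups, sometimes assessed against a "four-fifths" style rule of thumb. The sketch below (Python with pandas, using invented data) shows the basic calculation; the 0.8 threshold is indicative only and has no formal status under the Equality Act.

```python
import pandas as pd

# Invented selection outcomes for two groups of candidates.
outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   1,   0,   0,   0,   1],
})

# Favourable-outcome (selection) rate per group, and the ratio between
# the least and most favoured groups.
rates = outcomes.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"Selection-rate ratio: {ratio:.2f}")

# A ratio well below roughly 0.8 is a common red flag that the system's
# outcomes warrant closer scrutiny, though it is not a legal test.
```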
