
Does statistical bias equal discrimination? AI in the workplace

Posted on 16 January 2023

In our series of articles about the potential legal issues arising from the use of AI, we have identified the risk of inherent bias and the steps needed to reduce that risk when AI operates in practice. However, the question of the point at which statistical bias amounts to unlawful discrimination is more complex.

Statistical analysis is often relied upon as evidence of indirect discrimination in employment. "Indirect discrimination" is defined under the Equality Act 2010 (EqA) as occurring when an employer applies a provision, criterion or practice (PCP) which applies to everyone in the same way, but which has a worse effect on people who share a protected characteristic (sex, race, disability, age, sexual orientation etc.) than on others, putting them at a particular disadvantage.

Statistical bias

Evidence of statistical bias can help to identify a possible or actual correlation between a protected characteristic and a contested PCP across a group or an entire population (examples might include claims relating to unequal pay based on sex, or redundancy based on age).
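By way of illustration only, the short Python sketch below shows one way such a correlation might be flagged in practice: a chi-squared test of independence applied to invented redundancy figures, broken down by an assumed age band. The data, the grouping and the 5% significance threshold are all assumptions made for the example; none of them forms part of any legal test.

```python
# A minimal sketch, using invented figures, of how an association between
# a protected characteristic and a PCP outcome might be flagged.
from scipy.stats import chi2_contingency

# Rows: hypothetical age bands; columns: [selected for redundancy, retained].
contingency = [
    [18, 82],   # under 50
    [34, 66],   # 50 and over
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi-squared = {chi2:.2f}, p-value = {p_value:.4f}")

# A small p-value suggests the disparity is unlikely to be chance alone --
# but, as discussed below, a statistical association is not of itself
# proof of unlawful discrimination.
if p_value < 0.05:
    print("Statistically significant disparity: investigate further.")
```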

However, although AI systems may contain bias where their results show statistical disparity between specific groups, such statistical disparity may not always amount to discrimination.

In this article I will explore the reasons for this, and the importance of reviewing any statistics produced with a critical eye in order to assess the real-world risk of discrimination occurring.

Does bias equal discrimination?

While AI may often be described as biased, statistical bias does not always mean that unlawful discrimination has occurred or will occur. Courts are often faced with statistics provided as evidence in support or defence of allegations of indirect discrimination, with recent examples including:

  • Was a policy discriminatory where 77.4% of men, but only 68.9% of women, were able to meet its requirement? (yes)
  • Was a policy discriminatory where 80% of the group of individuals adversely affected by the relevant policy shared the same protected characteristic? (yes)
  • Was a policy discriminatory where 60% of the group of individuals adversely affected by the relevant policy shared the same protected characteristic? (no)
  • Was a policy discriminatory where 87% of women were negatively affected by the policy, but 72% of the workforce were female? (no)

The reality is that establishing where bias ends and discrimination begins is not clear-cut. As can be seen from the differing approaches in the examples above, the courts have largely failed to establish clear thresholds at which a biased result, indicating disparity between the group of individuals adversely affected by the relevant policy and the wider comparator population (discussed below), would amount to a particular disadvantage for the purposes of the EqA.
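It is also easy to miss that the examples above cite two different kinds of statistic: the rate at which each group is able to meet a requirement, and the composition of the group adversely affected. As a rough illustration, the Python sketch below derives both from the same invented headcounts (chosen to reproduce the 77.4%/68.9% figures in the first example); the numbers are assumptions for illustration only.

```python
# A minimal sketch, with invented headcounts, of the two kinds of statistic
# cited in the case examples above.
men_meeting, men_total = 774, 1000
women_meeting, women_total = 689, 1000

# (1) Group "pass rates", as in the 77.4% vs 68.9% example.
men_rate = men_meeting / men_total          # 77.4%
women_rate = women_meeting / women_total    # 68.9%
print(f"Men able to meet the requirement:   {men_rate:.1%}")
print(f"Women able to meet the requirement: {women_rate:.1%}")

# (2) Composition of the adversely affected group, as in the 80%/60% examples.
men_failing = men_total - men_meeting        # 226
women_failing = women_total - women_meeting  # 311
share = women_failing / (men_failing + women_failing)
print(f"Women as a share of those adversely affected: {share:.1%}")  # ~57.9%
```

The same underlying data can therefore be presented either as a gap in pass rates or as the make-up of the disadvantaged group, and the two presentations can suggest quite different levels of disparity.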

It is common for judges to use imprecise phrases to indicate an unlawful level of disparity, such as “significantly higher proportion of non-nationals, compared to nationals, are affected by that rule”, “almost exclusively women”, “significantly greater proportion of individuals of one sex as compared with individuals of the other sex”, “considerably more” and “far more”. As these phrases suggest, specific percentages or thresholds for unlawful levels of discrimination are rarely used.

This means that the determination of whether biased outcomes from AI systems amount to discrimination will turn on the facts of each case, relying on judgement and context rather than on clear boundaries or hard statistical thresholds. Whether the test for indirect discrimination has been satisfied will require an assessment of case law to establish whether the nature, severity and significance of the disadvantage experienced by the group sharing a protected characteristic (and the particular claimant(s)) are sufficient to amount to discrimination. So although statistical evidence is a useful tool in assessing whether or not discrimination has occurred, it may not always amount to conclusive evidence.

Composition of data sets

Furthermore, to establish that a PCP places persons sharing a protected characteristic at a particular disadvantage, a claimant is required to show that disadvantage when compared with others. The starting point is to look at the impact on people within a defined "pool for comparison".

In the context of employment, the pool in a particular case may consist of employees at a single workplace, the population within the local catchment area of a workplace, or even the whole economically active population of the UK. Context is everything.

The pool will depend on the nature of the PCP being examined. If the claimant is challenging a recruitment criterion, for example, the pool will usually comprise those people who would be eligible for the job but for the criterion in question. On the other hand, if an employee is challenging a PCP applied throughout the employer's organisation, then the pool will usually be the whole workforce.

Once the comparison groups have been identified, the law does not clearly state how inequality between them should be measured. For example, when comparing how many disabled employees were awarded promotions against how many non-disabled employees were, should the percentage of unsuccessful disabled employees be compared with the percentage of unsuccessful non-disabled employees, the percentage of successful disabled employees with the percentage of successful non-disabled employees, or successful with unsuccessful? The courts have often been inconsistent in their approach in the absence of clear legislative guidance.
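To illustrate why the choice of measure matters, the Python sketch below applies the alternative comparisons to the same invented promotion figures. The headcounts are assumptions made purely for the example, and neither ratio represents a legal threshold; the point is simply that the same data can look modest on one measure and stark on another.

```python
# A minimal sketch, with invented headcounts, showing how the alternative
# comparisons described above can paint different pictures of the same data.
disabled_promoted, disabled_total = 18, 20     # 90% promoted
nondis_promoted, nondis_total = 190, 200       # 95% promoted

disabled_success = disabled_promoted / disabled_total   # 0.90
nondis_success = nondis_promoted / nondis_total         # 0.95
disabled_failure = 1 - disabled_success                 # 0.10
nondis_failure = 1 - nondis_success                     # 0.05

# Comparing success rates: disabled employees are promoted at about 95% of
# the non-disabled rate -- the disparity looks modest.
print(f"Success-rate ratio: {disabled_success / nondis_success:.2f}")  # 0.95

# Comparing failure rates: disabled employees are twice as likely to be
# passed over -- the same data now looks starkly unequal.
print(f"Failure-rate ratio: {disabled_failure / nondis_failure:.2f}")  # 2.00
```

It is easy to see why a claimant might prefer the second presentation and a respondent the first.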

Typically, parties adopt the analytical technique that best supports their case. The results are then often difficult to understand, leading in some cases to courts interpreting statistics erroneously. In addition, the sophisticated statistical techniques arguably required to interpret data correctly in judicial proceedings often lie beyond the resources available to the courts and to the parties themselves. This makes it difficult to rely solely on statistical evidence when assessing the merits of a claim for indirect discrimination.

Refuting claims of discrimination

It is also worth noting that respondents to claims of indirect discrimination under the EqA are able to defeat such claims by either:

  • showing that there was no causal link between the PCP and the alleged disadvantage; or
  • acknowledging that differential results have occurred, but arguing that they were justified because the PCP was a proportionate means of achieving a legitimate aim.

As such, any statistical evidence of bias should always be reviewed with the above in mind, as it may not always tell the whole story.

Moving forward

Though statistical evidence remains an important tool for detecting, mitigating and adjudicating on claims of discrimination, statistics must be contextualised so that their actual relevance can be fairly assessed. If creators or users of AI systems become concerned that the system they are using is displaying bias, this should certainly be investigated, as it will often be an unintended consequence of adopting technology in the workplace. However, statistical bias does not always mean (and should not be conflated with) discrimination.

If you have a specific query about the use of AI in HR or any other employment concerns, please talk to our Employment and HR team.
