AI in the workplace: Are AI tools discriminating on your behalf?

Posted on 9 June 2022

AI tools are now a common feature of recruitment processes and can be a valuable addition to any HR department, for example when placing job advertisements, reviewing job applications or carrying out interviews. However, using AI-assisted recruitment tools carries a risk of unwittingly discriminating against applicants.

The perfect match or inadvertent bias?

There are several distinct stages to any employer's recruitment process. First comes the choice of where and how to advertise the post, followed by screening CVs to create a shortlist for interview. Then comes the interview process itself, possibly accompanied by additional testing or assessments; and finally, the successful candidate is selected and onboarded.

Organisations have become increasingly reliant on AI-assisted recruitment tools, which can automate these high-volume, time-intensive processes through:

  • targeting job adverts online
  • sifting CVs and application forms
  • searching prospective employees' social media for key phrases or terms
  • analysing tone of voice or facial movements during interviews
  • automatically filtering candidates through online assessments and tests.

These tools can effectively identify an applicant's job history, personal characteristics and cognitive ability, helping to make an initial assessment of most of the applicant pool more efficiently and to predict any given candidate's likely performance in a role. The result is that fewer, more suitable candidates need to be interviewed in person, saving time and expense.

Introducing bias

Unfortunately, if the data used to train an algorithmic tool is influenced by prior human decisions that were biased, and that bias is passed on to the AI recruitment model, the likely result is systematic bias. For example, if recruiters have systematically overlooked applications from individuals with disabilities, the AI recruitment tool will likely reinforce and replicate that discriminatory practice, treating certain groups less favourably than others and exacerbating the very problem it is meant to help avoid.

Similarly, more advanced AI tools are often trained on large data sets built from successful existing employees in a workplace or sector. Prospective candidates are assessed against these data sets to identify whether they share the traits of those successful employees, and the tool provides feedback on the applicant's suitability for the role. Any systematic bias in the underlying data may therefore permeate through to that feedback.

In 2018, Amazon was forced to scrap its own automated CV screening algorithm which was trained using its recruitment data from the previous ten years. Using the data, the algorithm taught itself that male candidates were preferable to female candidates based on Amazon's previous recruitment decisions. It therefore reportedly penalised CVs that included the word "women's" and downgraded graduates of certain all-women colleges.
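
To illustrate the mechanism behind examples like Amazon's, here is a deliberately simplified Python sketch using entirely synthetic data: a classifier is trained on invented historical hiring decisions that penalise a proxy feature (a flag for the word "women's" appearing on a CV), and the learned model absorbs, and would go on to replicate, that bias. All names and figures are illustrative assumptions, not real data.

    # Illustrative only: synthetic applicants and invented, biased
    # historical hiring decisions.
    import random
    from sklearn.linear_model import LogisticRegression

    random.seed(42)

    # Each synthetic applicant: (years_experience, mentions_womens_club)
    applicants = [(random.randint(0, 15), random.randint(0, 1))
                  for _ in range(2000)]

    def biased_historical_decision(experience, mentions_womens):
        """Past recruiters hired on experience, but were systematically
        less likely to hire applicants whose CV contained "women's"."""
        hire_probability = min(experience / 15, 1.0)
        if mentions_womens:
            hire_probability *= 0.5   # the historical bias
        return 1 if random.random() < hire_probability else 0

    labels = [biased_historical_decision(e, w) for e, w in applicants]

    model = LogisticRegression().fit(applicants, labels)

    # The learned weight on the proxy feature comes out strongly negative:
    # the model has absorbed, and will now replicate, the historical bias.
    print("experience weight:", round(model.coef_[0][0], 2))
    print("'women's' mention weight:", round(model.coef_[0][1], 2))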

Voice-testing technology can similarly discriminate against candidates based on their accent, reflecting assumptions about the candidate's nationality or race.

AI-based recruitment tools are only as good, and as unbiased, as the data they are fed. If the training data is flawed, they risk screening out candidates who do not fit the mould of existing successful employees, which may result in discriminatory hiring practices.

The risks to employers 

Indirect discrimination occurs where a "provision, criterion or practice" or "PCP" (such as a workplace policy) is applied to everyone equally but disadvantages a particular group of people who share the same "protected characteristic".

UK equality legislation (the Equality Act 2010) recognises nine protected characteristics: race (which includes colour, nationality and ethnic or national origin), sex, disability, age, pregnancy and maternity, sexual orientation, religion or belief, gender reassignment, and marriage and civil partnership. Unless the data sets used to train AI tools are scrutinised very carefully, job or promotion candidates could inadvertently be filtered out because of a protected characteristic.

Using an algorithm to recruit employees could amount to a workplace policy (and therefore a PCP) that causes indirect discrimination. That’s why AI tools should be implemented with thought and care: they need to be tested before rollout, and routinely afterwards, to minimise the risk that they introduce bias or disadvantage into the process.

Feedback on how the AI tool works should be easily accessible to all candidates: transparency is key to generating trust in the process. Organisations should also evaluate their use of AI by asking candidates and hiring managers for feedback and responding accordingly, and should monitor the impact AI has on the diversity of candidates.
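
As one illustration of what such routine monitoring might look like, the hypothetical Python sketch below compares the selection rates a screening tool produces for different groups and flags any group falling below the "four-fifths" rule of thumb drawn from US employment guidance (not a UK legal test). The group names and figures are invented.

    # Hypothetical bias-monitoring sketch: compare per-group selection
    # rates from an AI screening tool against the best-performing group.

    def selection_rate(selected, applicants):
        return selected / applicants

    # Invented screening outcomes for two groups of applicants.
    outcomes = {
        "group_a": {"applicants": 400, "selected": 120},
        "group_b": {"applicants": 380, "selected": 60},
    }

    rates = {group: selection_rate(o["selected"], o["applicants"])
             for group, o in outcomes.items()}
    best = max(rates.values())

    for group, rate in rates.items():
        impact_ratio = rate / best
        # Flag groups selected at under 80% of the best group's rate.
        flag = "REVIEW" if impact_ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, "
              f"impact ratio {impact_ratio:.2f} ({flag})")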

Could you be caught by the new AI regulation? 

On 14 June 2023, the European Parliament adopted its negotiating position on the latest draft of the AI Act, with a final text expected to be agreed over the next year.

The AI Act is a proposed European law on AI, and will apply to:

  • organisations providing or using AI systems in the EU; and
  • providers and users of AI systems located in countries outside the EU (including the UK), if their AI systems affect individuals in the EU (e.g. where their product is used in the EU).

It will be the first law on AI by a major regulator anywhere in the world. The law assigns applications of AI to three risk categories:

  1. Unacceptable risk: AI in this category will be banned. It covers systems considered a clear threat to the safety, livelihoods and rights of people, including systems that manipulate human behaviour or allow ‘social scoring’ by governments.
  2. High risk: This category covers technologies that affect material aspects of people's lives, for example through assessments, evaluations, and access to employment or services.
  3. Low risk: This covers AI technology that is not classified as unacceptable or high risk, and is largely left unregulated.

The AI-assisted employment tools covered in this article (e.g. CV-scanning and voice-recognition tools) are likely to be classed as high risk applications.

High risk AI systems will have to meet strict requirements before they can be put on the market in the EU. The AI Act sets these out in detail; they include continuous risk assessment and mitigation systems, the use of high-quality datasets to minimise the risk of discriminatory outcomes, activity logging to ensure the traceability of results, clear and adequate information for users, and appropriate human oversight.
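
As a purely illustrative sketch of the logging obligation (the AI Act's final technical requirements are still to be settled), the hypothetical Python snippet below records each automated screening decision, together with the inputs the model saw, so that results remain traceable and open to human review. The record fields and names are assumptions for the example.

    # Hypothetical audit trail for automated screening decisions.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="screening_audit.log",
                        level=logging.INFO, format="%(message)s")

    def log_screening_decision(candidate_id, model_version,
                               features, score, outcome):
        """Append one traceable record per automated decision."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "candidate_id": candidate_id,
            "model_version": model_version,
            "features": features,   # the inputs the model actually saw
            "score": score,
            "outcome": outcome,     # e.g. "shortlisted" / "rejected"
        }
        logging.info(json.dumps(record))

    log_screening_decision("cand-0001", "cv-screen-v2.3",
                           {"years_experience": 6}, 0.72, "shortlisted")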

Find out more about the EU’s draft AI Act here. For specific queries about the use of AI in recruiting, or any other employment concerns, please talk to our Employment and HR team.
