
EHRC guidance on discriminatory job adverts highlights risks of using generative AI in recruitment

Posted on 22 November 2024

The Equality and Human Rights Commission (EHRC) recently updated its guidance for those placing or publishing job advertisements, to help employers avoid using discriminatory adverts. The revised guidance is a timely reminder for employers to take care when using generative AI tools to draft job advertisements, as AI-generated output may include biases that affect the diversity of job applicants and can result in discrimination.

The benefits of using generative AI in recruitment 

Generative AI models, such as ChatGPT, are increasingly being used in the recruitment sector to streamline and enhance various aspects of the hiring process, for example, by:

  • generating job descriptions;  
  • helping to create personalised messages to potential candidates; 
  • creating summaries of candidates’ experience and skills; 
  • producing customised interview questions based on the job description and the candidate's CV; and 
  • creating follow-up emails and other communications to keep candidates engaged throughout the recruitment process. 

Generative AI can perform these tasks in a fraction of the time it would take a human, creating real efficiencies for users, while adopting a consistent tone and style (as instructed by the user). 
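
By way of illustration, the sketch below shows how a user might instruct a generative AI model to draft a job advert in a consistent tone. It is a minimal sketch only: it assumes the OpenAI Python client with an API key available in the environment, and the model name and prompt wording are illustrative rather than a recommended configuration.

```python
# Minimal sketch: drafting a job advert with a generative AI model.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. The model name and prompts are
# illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # The system message sets the consistent tone and style that the
        # user instructs the model to adopt.
        {"role": "system",
         "content": "You draft concise, professional job advertisements "
                    "in a consistent house style."},
        {"role": "user",
         "content": "Draft a short job advert for a Customer Support "
                    "Specialist, covering key responsibilities and skills."},
    ],
)

print(response.choices[0].message.content)
```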

Beware the risk of bias in AI systems  

Bias in AI systems can arise in several ways, most commonly from the data used to train the model (including societal and cultural factors embedded within the training data) and the algorithms and techniques employed by the AI. 

For example, if a text query made to an AI model contains biased language or reflects societal stereotypes, the AI model may generate responses that perpetuate or amplify those biases. The resulting output from generative AI is therefore susceptible to bias and misinformation.  

What impact can this have in the recruitment context?  

Several features of a job specification can influence who applies for a role, such as:

  • the job title used;  
  • the skills required; and  
  • the language in the job description, including what is known as "gendered language".  

The impact of gendered language in a recruitment context 

Studies have shown that "gendered language" in job descriptions can influence who applies for roles. Examples of "gendered language" in job descriptions include:  

  • typically female language: 'supporting', 'collaborative', 'nurturing' and 'committed'; and
  • typically male language: 'dominant', 'competitive' and 'ambitious'.
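
By way of illustration, a simple screen for the terms listed above can be written in a few lines of Python. This is a minimal sketch using only the example words from this article; a real screening tool would need a far fuller lexicon and human judgement about context.

```python
# Minimal sketch: flagging the example "gendered language" terms above in a
# draft job description. The word lists are the illustrative examples from
# this article, not a complete lexicon.
import re

FEMALE_CODED = {"supporting", "collaborative", "nurturing", "committed"}
MALE_CODED = {"dominant", "competitive", "ambitious"}

def gendered_terms(text: str) -> dict[str, list[str]]:
    """Return the female- and male-coded example terms found in text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "female_coded": sorted(words & FEMALE_CODED),
        "male_coded": sorted(words & MALE_CODED),
    }

advert = ("We are looking for an ambitious, competitive self-starter to "
          "join our collaborative sales team.")
print(gendered_terms(advert))
# {'female_coded': ['collaborative'], 'male_coded': ['ambitious', 'competitive']}
```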

Research analysing the kinds of jobs men and women apply for indicates that the adjectives used in job adverts matter. For example, studies have shown that: 

  • Job advertisements using more "masculine" wording were perceived by both men and women to have fewer women in the occupation than the same advert using more "feminine" wording, regardless of whether the occupations in question were in fact male- or female-dominated. 
  • Job adverts using more "masculine" wording were perceived by women to be less appealing than the same adverts using more "feminine" wording, again regardless of whether the occupations were in fact male- or female-dominated. 

Using generative AI to draft job specifications  

The ability of ChatGPT (and other publicly available generative AI models) to generate text in a conversational style has made them increasingly popular tools for drafting job specifications, or for providing template text to work from.

However, for the reasons outlined above, a ChatGPT-generated job specification may reproduce societal biases, including "gendered language", and may therefore influence who applies for a role.
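
One partial mitigation, sketched below under the same assumptions as the earlier example, is to instruct the model explicitly to avoid gendered adjectives and then to screen the draft before it reaches a human reviewer. Neither step removes the need for that human review.

```python
# Minimal sketch: pairing an explicit neutral-language instruction with an
# automated screen of the AI draft before human review. Assumes the OpenAI
# Python client with an API key in the environment; prompts are illustrative.
import re
from openai import OpenAI

# The illustrative example terms from this article, not a complete lexicon.
FLAGGED = {"supporting", "collaborative", "nurturing", "committed",
           "dominant", "competitive", "ambitious"}

client = OpenAI()
draft = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Draft job advertisements in inclusive, gender-neutral "
                    "language; avoid stereotyped or gendered adjectives."},
        {"role": "user", "content": "Draft a job advert for a Team Leader."},
    ],
).choices[0].message.content

found = sorted(set(re.findall(r"[a-z]+", draft.lower())) & FLAGGED)
if found:
    print("Gendered terms to review before publishing:", found)
```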

What this means for employers  

Employers should review their job adverts and other recruitment literature to check that they are inclusive and do not unlawfully discriminate. This includes using neutral language and not making assumptions based on stereotypes about who might be suitable for a role.

The recent EHRC guidance highlights that recruitment literature that includes a gendered job title, such as "sales girl", "postman" or "policeman", is likely to be discriminatory under the Equality Act 2010.

The benefits of having a Generative AI policy  

Employers engaging in recruitment should therefore be wary of the risks created by staff using ChatGPT and similar generative AI tools to help them create recruitment literature.  

We recommend that any organisation that uses (or proposes to allow the use of) generative AI tools, in recruitment or more generally, should have an appropriate policy in place for staff. This will help ensure that the employer obtains the advantages of using generative AI while reducing the associated risks.

A good generative AI policy will include guidance on: 

  • how to avoid confidentiality and data breaches; 
  • how to avoid relying on inaccurate or harmful output from a generative AI system, whether that is potentially discriminatory content or an AI "hallucination"; 
  • when to highlight that a generative AI tool has been used to create a piece of content; and 
  • responsible and ethical use of generative AI tools more generally. 

We also recommend that recruiters and related organisations read this recent article on the Information Commissioner's guidance on using AI tools in recruitment.

If you would like to discuss recruitment issues, the use of generative AI in the workplace, or implementing a generative AI policy, please contact Daniel Gray or another member of our Employment team.
