AI in recruitment: Navigating the challenges of inferred special category data

Posted on 17 June 2025

In order to help speed up processes and find the best talent, employers are increasingly turning to Artificial Intelligence (AI) to assist with recruitment. However, concerns can arise under data protection law if the use of some features of these tools leads to discrimination based on the inference of special category data, such as race or religion. Such categories of personal data will often also constitute protected characteristics for the purposes of the Equality Act 2010. The Information Commissioner's Office (ICO), which regulates data protection law, recently found, in audits of AI providers, that recruitment tools may infer these protected characteristics without a lawful basis or without the candidate's knowledge. Where this happens, there could be infringements of the UK GDPR, as well as unlawful discrimination.

It is understandable, and indeed generally to be commended, that recruiters will seek to collect information on gender, race or religion, among other special category data, to promote diversity and inclusion within their organisations. However, by failing to account for the risks, or to consider the appropriate legal basis, involved in the inference of special category data, both the AI provider and the recruiter risk breaching the UK GDPR and equality law, while also facing reputational and financial consequences.

Issues with the inference of special category data 

Appropriate legal basis and condition

Surnames, geolocation information or other types of personal data collected may lead to AI tools estimating or inferring an individual's race or religion. For instance, surnames may indicate ethnic background, and knowing where an individual grew up and went to school may lead to inferences about race or culture. Inference of this kind, whether intentional or not, is likely to be held to be processing of special category data. If it is, the UK GDPR will require that a condition be identified under Article 9 in order to process this data lawfully, in addition to a legal basis under Article 6. A failure to undertake these exercises may render the processing of such data unlawful and constitute an infringement of the UK GDPR.
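
To make the risk concrete, the sketch below illustrates one way an organisation might test whether a field a tool collects acts as a proxy for special category data: if a simple model can predict a protected characteristic from a field such as postcode area well above chance, the tool is effectively capable of inferring that data. This is a minimal, hypothetical example; the field names, values and scikit-learn approach are illustrative assumptions, not drawn from the ICO's audits.

```python
# Hypothetical "proxy audit": can a collected field (postcode area)
# predict a protected characteristic? If so, a tool trained on that
# field may be inferring special category data, even though the
# characteristic itself is never collected. All data is illustrative.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import OneHotEncoder

# (postcode area, self-reported ethnicity from a separate, consented
# diversity-monitoring exercise) - hypothetical records.
records = [("E1", "A"), ("E1", "A"), ("E1", "A"), ("E1", "B"),
           ("SW3", "B"), ("SW3", "B"), ("SW3", "B"), ("SW3", "A")] * 10

X = OneHotEncoder().fit_transform([[postcode] for postcode, _ in records])
y = [ethnicity for _, ethnicity in records]

# Accuracy well above chance (~0.5 here) suggests the field is a proxy
# through which special category data could be inferred.
score = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"Postcode predicts protected attribute with accuracy {score:.2f}")
```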

Transparency

Recruiters may contravene the UK GDPR principle of transparency if they do not accurately and plainly inform candidates that their personal data may be processed by AI and that their special category data may be inferred as a result. Article 13 of the UK GDPR lays down the core requirements for transparent processing, and recruiting organisations will normally use a "candidate privacy notice" to convey the required information. 

Accuracy of data

AI tools may produce inaccurate or incomplete results due to biases or deficiencies in the training data, or flawed algorithms. The UK GDPR requires personal data to be accurate. Such inaccuracies could lead to unfair or discriminatory treatment of candidates, undermining the diversity goals that recruiters aim to achieve and potentially giving rise to claims from unsuccessful candidates. 

System bias

AI tools can perpetuate and amplify existing biases present in the data on which they are trained. This can result in unintended discriminatory practices, where certain groups are unfairly disadvantaged in the recruitment process. AI providers need to be aware of the data sets on which they develop their tools, to ensure that they do not contravene the UK GDPR principle of fairness.

Achieving compliance while pursuing diversity goals

To address these challenges, recruiters should adopt compliant strategies to meet their diversity and inclusion objectives: 

  • Review risk: By conducting Data Protection Impact Assessments (DPIAs), organisations can proactively address data protection concerns, ensuring that AI recruitment tools are used responsibly and in compliance with data protection laws. 
  • Establish a clear lawful basis: To process special category data (as well as other personal data) lawfully, recruiters must have a clear and documented basis under both Article 6 and Article 9 of the UK GDPR. Article 9(2)(g), together with Schedule 1 to the Data Protection Act 2018, may, for instance, provide a condition where processing is for the purposes of keeping under review the existence or absence of equality of opportunity or treatment between groups of people.
  • Improve accuracy: Use diverse and representative training data, regularly audit AI models for bias, and implement robust validation processes (a minimal example of one such audit follows this list). This ensures AI tools are accurate and fair, leading to more reliable recruitment decisions.
  • Ensure transparency: Provide candidates with clear and accessible information, through a comprehensive privacy notice shared during the recruitment process, about how their personal data is used, including any potential AI inferences. This empowers candidates to make informed decisions and exercise their data protection rights. 
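
As a concrete illustration of the bias audit mentioned above, the sketch below compares selection rates across groups and applies the "four-fifths" rule of thumb commonly used in adverse-impact analysis. The group labels, outcomes and threshold are hypothetical assumptions for illustration only; a real audit would use the organisation's own decision data and appropriate statistical testing.

```python
# Hypothetical bias audit: compare shortlisting rates across groups and
# flag a potential adverse impact if the lowest rate falls below 80% of
# the highest (the "four-fifths" rule of thumb). Data is illustrative.
from collections import defaultdict

# (group, shortlisted?) pairs from a tool's historical decisions.
outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]

totals, selected = defaultdict(int), defaultdict(int)
for group, shortlisted in outcomes:
    totals[group] += 1
    selected[group] += shortlisted

rates = {group: selected[group] / totals[group] for group in totals}
impact_ratio = min(rates.values()) / max(rates.values())
print(f"Selection rates: {rates}, impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # below the four-fifths threshold
    print("Potential adverse impact: review training data and model.")
```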

Conclusion 

The use of AI in recruitment offers recruiters the ability to enhance diversity and inclusion within their organisations. However, it also poses risks related to the inference of special category data, such as race or religion, which can lead to breaches of the UK GDPR and the Equality Act 2010.  

To mitigate these risks, recruiters and AI providers must ensure compliance with data protection laws by establishing a lawful basis for processing such data, maintaining transparency with candidates, and addressing potential biases in AI systems. 
