AI in the Workplace: data protection issues

Posted on 13 March 2024

While the use of AI systems in recruitment and during employment continues to grow, it is essential for both the creators of AI systems and the employers who use them to consider carefully what types of data they will be collecting and handling, and the key data protection requirements that apply.

This article introduces the data protection framework that surrounds the use of AI platforms. We summarise the key legal considerations that employers should be aware of when using AI technologies in recruitment and employment.

Controller vs Processor?

When an organisation decides to process personal data for any activity, the first thing it should consider is whether it is a controller or processor of that data. If the organisation decides the purpose and means of processing data (i.e. what personal data is processed, and why – for example an employer obtaining employee or candidate data), then it is likely to be a controller.

If the organisation provides a service to a third party and the third party decides what data is to be processed and why, or the organisation is processing the personal data on the third party's instructions (for example, a company that handles payroll administration for another employer), the organisation is likely to be a processor.

This role-based classification is important as the obligations on an employer depend on whether it is a controller or a processor.
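
Purely as an illustration (the names below are hypothetical, and the legal assessment is always fact-specific), the role test can be thought of as a simple decision rule:

```python
# Illustrative sketch only: the controller/processor test expressed as a
# simple decision rule. The names are hypothetical and the legal assessment
# is always fact-specific.
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    decides_purpose: bool  # does the organisation decide why the data is processed?
    decides_means: bool    # does it decide how, i.e. what data and which systems?

def likely_role(activity: ProcessingActivity) -> str:
    """Return the likely UK GDPR role of the organisation for this activity."""
    if activity.decides_purpose and activity.decides_means:
        return "controller"  # e.g. an employer obtaining employee or candidate data
    return "processor"       # e.g. a payroll provider acting on instructions

print(likely_role(ProcessingActivity(decides_purpose=True, decides_means=True)))
# -> controller
```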

As a controller, the employer is required under the UK GDPR to provide specific information (usually in the form of what is termed a "privacy notice") to employees, job candidates and other individuals. This privacy notice should set out key information about the processing activity, such as how the personal data is going to be used, the lawful basis relied on by the employer, and the rights of individuals in this regard. For example, if an employer uses an AI system to select candidates in a recruitment process, the employer should set this out in its privacy notice, together with the associated lawful basis.

Another key requirement for controllers considering the use of AI systems is to assess the risks associated with the AI system before engaging in the activity: this assessment is likely to take the form of a DPIA, as outlined below.

Data Protection Impact Assessment (DPIA)

When an organisation decides to carry out a "high-risk" processing activity using personal data, it is required to assess the risks associated with the activity by carrying out a DPIA. In an employment context potential "high-risk" activities using personal data include using AI platforms in recruitment, making employment decisions on task allocation, promotion and termination, and monitoring or evaluating employees.

Using AI platforms for activities such as candidate selection or reviewing employee performance is likely to be a "high-risk" activity, both because it involves what the Information Commissioner has classed as "innovative technology" and because these processes can have a significant impact on candidates and employees.
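
As an illustrative sketch only (the criteria below are a simplified, hypothetical distillation, not the ICO's actual screening checklist), a first-pass DPIA trigger check might look like:

```python
# Hypothetical screening helper: flags processing that is likely to be
# "high-risk" and so to require a DPIA. These criteria are a simplified
# illustration, not a restatement of the ICO's screening checklist.
def dpia_required(uses_innovative_technology: bool,
                  significant_effect_on_individuals: bool,
                  systematic_monitoring: bool) -> bool:
    return (uses_innovative_technology
            or significant_effect_on_individuals
            or systematic_monitoring)

# AI-based candidate selection: innovative technology with significant effects.
assert dpia_required(uses_innovative_technology=True,
                     significant_effect_on_individuals=True,
                     systematic_monitoring=False)
```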

Some common risks associated with the use of AI based technologies include:

  • inherent inbuilt bias in the AI platform;
  • lack of transparency;
  • unfair decision making; and
  • accessing personal data without the knowledge or consent of individuals (also known as data scraping).

Consequently, when using AI-based technologies, employers should be aware of their data protection obligations. For instance, in addition to providing the usual information necessary to comply with the UK GDPR, transparency requires employers to inform employees when they are using AI systems to handle their personal data.  

Where the use of AI involves automated decision-making about individuals, those individuals also have the right under the UK GDPR to receive meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing. As a responsible operator of an AI system, an employer must be able to explain to its staff how the system works and how it reaches the decisions it does, in a way that a typical member of the public can understand.
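
As a minimal sketch of what such an explanation might draw on, assuming a simple scoring model with named features (the model, features and data below are entirely hypothetical, and a real AI system will need an explanation method suited to its actual architecture):

```python
# Minimal sketch: turn the factors behind an automated decision into a
# plain-English summary. Model, features and data are hypothetical.
from sklearn.linear_model import LogisticRegression
import numpy as np

feature_names = ["years_experience", "skills_match", "assessment_score"]
model = LogisticRegression().fit(
    np.array([[1, 0.2, 40], [8, 0.9, 85], [3, 0.5, 60], [10, 0.8, 90]]),
    np.array([0, 1, 0, 1]),  # toy training data for illustration only
)

def explain_decision(candidate: np.ndarray) -> str:
    """Summarise which features pushed this candidate's score up or down."""
    contributions = model.coef_[0] * candidate  # crude linear attribution
    ranked = sorted(zip(feature_names, contributions), key=lambda x: -abs(x[1]))
    lines = [f"- {name}: {'raised' if c > 0 else 'lowered'} the score"
             for name, c in ranked]
    return "Main factors in this decision:\n" + "\n".join(lines)

print(explain_decision(np.array([3, 0.5, 60])))
```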

To the extent that employers may be operating in the EU, or otherwise affected by extra-territorial provisions, these overarching principles are also echoed in the EU's incoming AI Act.

Data Subject Access Requests (DSARs)

Explainability requirements are particularly important because employees, candidates and other individuals have the right under the UK GDPR to make a DSAR. This is a formal request made by an individual to an organisation, seeking information about and access to the personal data that the organisation holds about them. This helps individuals be aware of, and verify the lawfulness of, the processing of their personal data.

It is therefore important for creators of AI systems to consider how to develop the AI system to comply with the DSAR right, and for employers as users of an AI system to consider how well the system can respond to these requests.
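
For illustration, the sketch below assumes a hypothetical record store keyed by individual; in practice, a system must be able to locate personal data wherever it sits, including logs, model inputs and inferred data.

```python
# Hypothetical DSAR export: gather every record a system holds about one
# individual. The data store and schema here are illustrative assumptions.
import json

records = {
    "a.lee@example.com": [
        {"source": "application_form", "field": "cv_text", "value": "..."},
        {"source": "ai_screening", "field": "shortlist_score", "value": 0.82},
    ],
}

def export_personal_data(subject_id: str) -> str:
    """Return a machine-readable export of everything held on the subject."""
    held = records.get(subject_id, [])
    return json.dumps({"subject": subject_id, "records": held}, indent=2)

print(export_personal_data("a.lee@example.com"))
```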

Practical Measures

More broadly, creators and employers using AI systems should ensure the following practical measures are implemented where appropriate. This will help ensure compliance with the data protection framework applicable to the use of AI platforms, and also help manage potential risks.

  • Be clear and up front with employees about how and why you are using data (in your privacy notices and relevant policies).
  • If the employer is scraping data to train its AI model (such as extracting information from a website), it will need to complete a DPIA (and there may be legal implications beyond just data protection law).
  • Be prepared to explain how your AI model works. You should consider this a mandatory requirement if you use an AI system in a recruitment or employment context.
  • Build the AI model so that a human is involved in the decision-making process (see the sketch after this list).
  • If relevant, expect questions from investors, and others, around where you acquired your data, and be able to confirm that data was collected lawfully.
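
As a minimal sketch of the human-in-the-loop point above (all names are hypothetical), the model's output can be recorded as a recommendation that a named human reviewer must decide on before it takes effect:

```python
# Minimal human-in-the-loop sketch: the model recommends, a human decides.
# All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    model_score: float      # e.g. an AI screening score in [0, 1]
    suggested_outcome: str  # "shortlist" or "reject"

def final_decision(rec: Recommendation, reviewer: str, outcome: str) -> dict:
    """Record the human reviewer's decision alongside the model's suggestion."""
    return {
        "candidate_id": rec.candidate_id,
        "model_suggestion": rec.suggested_outcome,
        "final_outcome": outcome,  # set by the human, not the model
        "decided_by": reviewer,    # a human is always the decision-maker of record
    }

rec = Recommendation("cand-042", 0.31, "reject")
print(final_decision(rec, reviewer="hr.manager@example.com", outcome="shortlist"))
```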

You can achieve this by using factsheets (a collection of information about how an AI model was developed and deployed), DPIAs (describing the capabilities and limitations of the system) and conformity assessments (a demonstration that the AI system meets legal and regulatory requirements).
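
A factsheet can be as simple as a structured record kept alongside the model; the fields below are an illustrative assumption rather than a prescribed format:

```python
# Illustrative model factsheet: a structured record of how an AI model was
# developed and deployed. The fields are an assumption, not a prescribed format.
from dataclasses import dataclass

@dataclass
class ModelFactsheet:
    name: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str]
    dpia_reference: str  # link the factsheet to the relevant DPIA
    human_oversight: str  # how a human stays in the decision loop

factsheet = ModelFactsheet(
    name="cv-screening-v2",
    intended_use="Rank applications for recruiter review; never auto-reject.",
    training_data_sources=["historic applications (consented)", "role profiles"],
    known_limitations=["trained on English-language CVs only"],
    dpia_reference="DPIA-2024-07",
    human_oversight="Recruiter confirms or overrides every shortlist decision.",
)
```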

AI v Data Protection Compliance

The use of AI is increasingly coming under the regulatory spotlight, and in the UK the Information Commissioner's Office has launched the first of a series of consultations on generative AI, "examining how aspects of data protection law should apply to the development and use of the technology". It will be essential for employers to keep up to date not just with technological and legal developments in this area, but also with developments in regulatory approach and risk.

The effective use of AI in the employment context requires a comprehensive understanding of data protection laws. As AI continues to evolve, staying on top of the legal obligations that apply to it is crucial for both the creators of AI systems and the employers who use them. This helps not only with regulatory compliance, but also fosters trust and transparency in AI technologies.
