
Digital disruption and discrimination in the workplace

Posted on 19 March 2019 by Emily Knight

The arrival of the Fourth Industrial Revolution, or 'Industry 4.0' as it is commonly known, is set to transform the workplace as we know it over the coming decade. The rise of AI, automation and robotics is already disrupting how companies recruit, promote and manage their staff.

It might seem reasonable to assume that in this new digital world of driverless cars, facial recognition and automated decision-making, employers will be less exposed to liability, because technology removes the subjective and often irrational aspects of human decision-making in HR processes.

It is now well established that, even when human beings do not act in a deliberately discriminatory way, our behaviour and decision-making are rife with unconscious bias. This is particularly manifest in recruitment, where selection decisions are prone to being arbitrary and subjective: for example, a scientific study reported in the Guardian found that an interviewer holding a hot drink was likely to judge an interviewee more favourably than an interviewer holding a cold drink.

From Human Resources to 'Algorithmic Resources'

Against this backdrop, it is logical to assume that replacing human decision-making with algorithms and AI in the workplace eliminates the risks of unconscious bias and discriminatory treatment, and the liabilities that come with them. However, biases can be embedded in the datasets and algorithms themselves, giving rise to discrimination allegations, and the exposure to liability is arguably more significant than for discrimination perpetrated by human decision-making. While a discrimination claim arising from human behaviour is likely to be restricted to particular individuals, if an algorithm deployed by an employer were found to be discriminatory, the impact is likely to be wide-reaching, leading to multiple claims. The ready availability of the datasets and algorithms used by an employer also creates transparency around the decision-making process, which means that claims can be easily evidenced.

This has already been shown in recruitment software, which deploys predictive models to match talent to job specifications. It was recently reported that Amazon inadvertently created a virtual hiring tool with gender bias. The tool was designed to sift through thousands of job applications to select appropriate candidates for interview; however, the algorithm taught itself to discriminate against female applicants, based simply on the fact that, historically, more men had applied for and secured jobs at Amazon.

If an employer is recruiting and/or managing performance through algorithms, decisions are also likely to be based on specific measures of competence and qualifications, thereby overlooking less quantifiable measures of talent, such as untapped potential or the elusive 'x-factor'. Unlike a human manager or interviewer, a computer also lacks emotional intelligence: it is unlikely to be programmed to take account of the personal aspects of an individual's circumstances and personality when making decisions, which may lead to arbitrary, unsatisfactory or unfair results.

However, this may not be the case for long, as AI is being developed to recognise and mimic human emotions. Many businesses and particularly start-ups are already using facial-recognition software to screen potential candidates, on the basis of body language, gestures and emotional cues. Again, this presents significant scope for unintended results if not rigorously checked and audited.

Given that the use of AI and algorithms for recruiting and managing staff is already commonplace, it is likely to be a matter of time before it is also used for dismissing staff. For example, it is easy to envisage an employer using an algorithmic tool, rather than line managers, to determine which employees to put at risk and ultimately dismiss in a redundancy exercise.

Again, this could lead to increased efficiencies and streamlined processes for employers, but would need to be used with caution and with appropriate checks and human oversight, to mitigate any potential adverse effects. Notwithstanding the inherent legal risks in devolving such decisions to a computer, an employer would need to consider the cultural impact of doing so, from an employee relations perspective.

Deploying algorithms and AI in the workplace

As with any HR process, employers need to navigate and mitigate the potential legal and reputational ramifications of using algorithms and AI to manage staff. Key points to note are:

  • Any algorithms should be rigorously tested and audited to ensure that the datasets used are representative of diverse populations, mitigating discrimination risks.
  • If algorithmic processes are being outsourced to a technology provider (as is often the case), it is critical to conduct appropriate due diligence on such providers, to ensure the underlying algorithms are not discriminatory or biased. This could include requesting appropriate indemnities from the provider as part of the services agreement, so that the employer is protected from liability in the event that the processes lead to discrimination allegations.
  • Algorithmic tools must also comply with data protection legislation. At a minimum, this requires that any solely automated decision-making is explainable and that, where there is no human oversight, employees have the right to object and to request human intervention.
  • Whilst it is important to ensure that any algorithmic processes are legally sound, they should also reflect the business's needs and cultural values, both from an employee relations and a brand reputation perspective.
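To illustrate the first point above, rigorous testing of algorithms for discriminatory outcomes, the sketch below shows one simple form such an audit can take: comparing a hiring tool's selection rates across groups using the 'four-fifths rule' applied in US employment-discrimination practice, under which a ratio below 0.8 between the lowest and highest group selection rates is a conventional red flag. This is a minimal, hypothetical example; the function names and candidate data are invented for illustration, and a real audit would be far more extensive.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 selection decisions."""
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.
    A value below 0.8 is a conventional indicator of possible adverse impact."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Invented example data: 1 = selected for interview, 0 = rejected
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 selected (rate 0.75)
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3 of 8 selected (rate 0.375)
}

ratio = disparate_impact_ratio(outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50
if ratio < 0.8:
    print("Warning: selection rates suggest possible adverse impact.")
```

A check like this says nothing about *why* the rates differ, so in practice it would be a trigger for further investigation of the model and its training data, not a conclusion in itself.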

There is no doubt that employers should be harnessing the power of technology and embracing digital disruption to streamline their HR processes, stay competitive and ensure growth. However, using AI and automated decision-making in the workplace is not without legal and reputational risk and needs to be managed carefully. Klaus Schwab, Founder and Executive Chairman of the World Economic Forum once said of the Fourth Industrial Revolution that "there has never been a time of greater promise or greater peril". That would appear to be especially true of the workplace.
