
Now & Next: When computers are racist – in partnership with The Economist

Posted on 11 February 2022

Many of us assume that new technology has no racial bias. But without effective regulation and more concerted self-policing, the scourge of racism risks blighting our digital future. As technology races further ahead, are we in danger of automating racism – and if so, what can we do to fix it?

You might think technology is the great leveller and that ones and zeros don’t have racial bias.

Google’s Photos app apparently auto-labelled a black couple as gorillas.

You’d be wrong. 

Pa Manjang
It’s common knowledge that this algorithm is racially biased.

As technology races further and faster ahead, is the world in danger of automating racism?

Meredith Broussard
Research Director
NYU Alliance for Public Interest Technology
All kinds of structural inequality are reflected in our AI systems.

And what can be done to fix it?

NOW&NEXT
When computers are racist

Banbury, UK

Pa Manjang
Sunday I did clean the car; the sanitisation of stuff, you know, is even more important.

For Pa Manjang, keeping his car clean is vital. Working as an Uber driver supported his family and relatives back home in the Gambia until he lost his job. Pa believes he is a victim of racial bias built into Uber’s software. Last April he got a message from Uber.

Pa Manjang
After my shift finished one night, I came home, I was looking through my emails and I saw one that said, like, ‘your permit has been permanently deactivated’. I thought it was just a computer mishap.

Following public concerns about customer safety, Uber introduced a new system in 2020 to verify drivers’ identities. Periodically, drivers are asked to upload selfies, which are checked against photos supplied when they first joined Uber. Despite providing numerous selfies, Pa says Uber’s algorithm failed to match them to his profile.
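A verification system of this kind typically reduces each photo to a numerical embedding and accepts a selfie only if it is sufficiently similar to the photos on file. The Python sketch below is a minimal illustration of that general idea, not Uber’s actual system; the 128-dimensional embeddings, the 0.6 threshold and the verify helper are all assumptions for illustration. The key point is the threshold: if the underlying model produces less reliable embeddings for darker-skinned faces, genuine selfies can score below it and be rejected.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two face-embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(selfie: np.ndarray, enrolled: list[np.ndarray],
               threshold: float = 0.6) -> bool:
        """Accept the selfie if it is close enough to any enrolment photo.

        If the embedding model was trained mostly on lighter-skinned faces,
        genuine matches for darker-skinned users can fall below the
        threshold, producing the kind of false rejection described above.
        """
        return max(cosine_similarity(selfie, e) for e in enrolled) >= threshold

    # Toy usage with random vectors standing in for real embeddings.
    rng = np.random.default_rng(0)
    enrolled = [rng.standard_normal(128) for _ in range(3)]
    selfie = enrolled[0] + 0.1 * rng.standard_normal(128)  # near-duplicate
    print(verify(selfie, enrolled))  # True: a close match passes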

Pa Manjang
I didn’t think much of the whole thing, because I thought if a human being reviews it, it’ll be clear and I’ll be back online next morning.

But Pa says his account remained deactivated. He later discovered he was far from the only ethnic minority driver to have experienced this indignity.

Pa Manjang
I knew from the stories that I came across online that some other black people have had the same experiences with Uber and its algorithm.

Uber employs at least 70,000 drivers in the UK, 52% of whom are from an ethnic minority. Pa and a handful of other drivers are taking Uber to court for unfair dismissal. They allege Uber’s algorithm discriminated against them because of the colour of their skin. The company denies these claims and says it uses a robust system of human review to ensure decisions about livelihoods are not made without oversight.

Many technologies have a history of embedded racism. In the 1970s, Kodak colour film was unable to capture darker skin tones accurately. The company fixed this only after chocolate makers complained its photos weren’t doing their products justice.

The racist history behind facial recognition. Techno-racism.

In recent times, Microsoft, Facebook and Google have all had high-profile problems with their technology.

Just recently Facebook’s AI got it terribly wrong when they put the label ‘primates’ on a video of a black man.

So what is going wrong? The simple answer is that these technologies are trained on publicly available data sets which do not include sufficient data from ethnic minorities.
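One way to see this failure mode concretely is to disaggregate a model’s error rate by demographic group instead of reporting a single headline accuracy. A minimal sketch in Python, assuming labelled evaluation records with a group attribute (the group names and numbers below are invented for illustration):

    from collections import defaultdict

    def error_rate_by_group(records):
        """Per-group error rates from (group, correct) evaluation records."""
        totals, errors = defaultdict(int), defaultdict(int)
        for group, correct in records:
            totals[group] += 1
            errors[group] += 0 if correct else 1
        return {g: errors[g] / totals[g] for g in totals}

    # Invented results: a respectable 82.5% overall accuracy hides a
    # six-fold gap in error rates between the two groups.
    results = ([("group_a", True)] * 95 + [("group_a", False)] * 5
               + [("group_b", True)] * 70 + [("group_b", False)] * 30)
    print(error_rate_by_group(results))  # {'group_a': 0.05, 'group_b': 0.3}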

Meredith Broussard
Research Director
NYU Alliance for Public Interest Technology
The problems of the world include all kinds of structural inequality and those problems are reflected in the data that we are using to train our AI systems.

It’s commonly believed that technology is neutral and free of bias.

Meredith Broussard
Research Director
NYU Alliance for Public Interest Technology
So one of the things that I write about is an idea that I call technochauvinism. It is the idea that technology is superior, that technological solutions are superior.

As the world puts greater faith in technology, embedded biases are affecting black people in all aspects of their lives. During the Covid pandemic, pulse oximeters have been essential in measuring blood oxygen levels, but researchers discovered that they gave flawed readings for black patients.

Tamara Gilkes Borr
US Policy Correspondent
The Economist
Darker-skinned patients were turned away and sent home to self-monitor because medical practitioners thought that they were doing well. So this is one example of how racial bias can really be a matter of life or death.

Meredith Broussard
Research Director
NYU Alliance for Public Interest Technology
I don’t actually think that the creators intended to be racist. I think that they were probably a group of light-skinned developers who tested it on themselves and said, ‘oh, it works for us, it must work for everybody’.

And if you want to buy a house, software may also discriminate on the basis of your skin colour.

Brooklyn native Rachelle Faroul moved to Philadelphia in 2015, hoping to buy a home here.

Research in the US found that older credit-scoring algorithms used by some mortgage lenders favoured particular financial behaviours that are more common among white people. Black applicants for home loans were 80% more likely to be rejected than white applicants from similar backgrounds.
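To make the ‘80% more likely’ figure concrete: it is a relative difference in rejection rates, not an absolute one. A worked illustration with invented numbers:

    # Illustrative numbers only: if comparable white applicants are
    # rejected at a 10% rate, '80% more likely' means a relative risk
    # of 1.8 for black applicants.
    white_rejection_rate = 0.10            # assumed baseline
    relative_risk = 1.8                    # '80% more likely'
    black_rejection_rate = white_rejection_rate * relative_risk
    print(f"{black_rejection_rate:.0%}")   # 18%: an 8-point absolute gap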

So what can be done to fix the problem of racial bias in computers? Data journalist Meredith Broussard and her colleague Thomas Adams are working on a solution.

Meredith Broussard
Research Director
NYU Alliance for Public Interest Technology
Even if people don’t think that race data is being used, the computer may actually be using race data, because AIs are making all these decisions.

They are fighting fire with fire, designing a unique set of software tools to identify bias embedded within technologies.

Thomas Adams
Chief Operating Officer
O’Neil Risk Consulting and Algorithmic Auditing
Once you set the standard, then you can refine the standard, but until you do that you don’t really know what’s going on.

Meredith Broussard
Research Director
NYU Alliance for Public Interest Technology
I’m working with a partner at O’Neil Risk Consulting to come up with what we call a regulatory sandbox, which is a software system that companies are going to be able to use to test their algorithms for bias, in order to confirm that they are not releasing biased algorithms into the world.
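The film does not detail the checks such a sandbox would run, but a standard starting point is a disparate-impact test on a candidate algorithm’s decisions before release. A minimal sketch in Python: the ‘four-fifths rule’ is a real benchmark from US equal-employment guidance, while the function, group names and numbers are illustrative assumptions:

    def disparate_impact_check(decisions, protected, reference, threshold=0.8):
        """Flag a model if the protected group's approval rate falls below
        `threshold` times the reference group's rate (the four-fifths rule).

        `decisions` maps group name -> list of booleans (True = approved).
        """
        rate = {g: sum(d) / len(d) for g, d in decisions.items()}
        ratio = rate[protected] / rate[reference]
        return ratio >= threshold, ratio

    # Auditing a candidate algorithm's decisions before release:
    decisions = {
        "group_a": [True] * 60 + [False] * 40,  # 60% approval rate
        "group_b": [True] * 36 + [False] * 64,  # 36% approval rate
    }
    passed, ratio = disparate_impact_check(decisions, "group_b", "group_a")
    print(passed, round(ratio, 2))  # False 0.6: flagged for human review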

However, observers worry that tech companies cannot be trusted to police themselves.

Tamara Gilkes Borr
US Policy Correspondent
The Economist
For many companies reducing racial bias will align with their bottom line.  However, in the instances when reducing racial bias does not align with the bottom line it might be necessary to regulate those companies.

For many, there is only one way to ensure that systemic racism isn’t built into the digital future: much tighter regulation.

Rashida Richardson
Assistant Professor of Law and Political Science
Northeastern University
We are at an interesting inflection point right now. Some of the changes that would need to happen to combat racial bias include giving government regulatory agencies more resources to take action.

But so far only the EU has started to get serious about policing tech companies, big and small.

A white paper on artificial intelligence.

It’s proposing a ground-breaking principle: the higher the risk posed by an AI system to fundamental rights, the stricter the oversight rules.

Rashida Richardson
Assistant Professor of Law and Political Science
Northeastern University
I do think the EU’s draft regulations are an important step. There are still some concerns, because it assumes that we have a shared understanding of what risk is and how to make those calculations.

Without effective regulation and more concerted self-policing, the old scourge of racism risks blighting the new digital future.

Tamara Gilkes Borr
US Policy Correspondent
The Economist
I am Tamara Gilkes Borr, US policy correspondent at The Economist. If you would like to learn more about AI tech bias, you can read my piece by clicking on the link, and if you would like to watch more of the Now & Next series, you can click on the other link. Thank you for watching, and please don’t forget to subscribe.
