Immigration and artificial intelligence: a 'digital hostile environment'?

Posted on 22 June 2020

The Joint Council for the Welfare of Immigrants (JCWI) has been granted permission to pursue a judicial review of the artificial intelligence system that the Home Office uses to filter and prioritise UK visa applications. JCWI seek to 'turn off' the algorithm to prevent its use.

The JCWI claim that the algorithm used by the Home Office makes decisions based on race, a protected characteristic under section 4 of the Equality Act 2010 (the EqA). JCWI's submission to the High Court further alleges that the operation of the Home Office's algorithm creates a "hostile environment" constituting harassment within the meaning of section 26(1) of the EqA. JCWI assert that the algorithm sorts applicants into three distinct channels, including a "fast lane" that, they allege, enables "speedy boarding for white people" to enter the country.

This fascinating case sees the neutrality and legal implications of an artificial intelligence system tested in the courts in a novel way. As artificial intelligence implementations become more prevalent, it is vitally important that careful consideration is given to matters of legal compliance and digital ethics. The EU in particular has been actively considering such matters – we most recently reported on the European Commission's announcement of the need for AI law here.

Algorithms must be examined carefully to determine whether they might have a discriminatory effect. Discrimination by an algorithm can be intentional, can arise from the data set used to train and test the algorithm, or can simply reflect the unconscious bias of the programming team that developed it.
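By way of illustration, a minimal sketch in Python of how an algorithm's routing outcomes might be audited for disparate impact. The group labels, channel names and figures are all invented for this example and bear no relation to the Home Office's actual tool:

from collections import Counter

def fast_lane_rates(decisions):
    """Share of applications routed to the 'fast' channel, per group.

    decisions: list of (group, channel) pairs, where channel is one of
    'fast', 'standard' or 'slow'.
    """
    totals = Counter(group for group, _ in decisions)
    fast = Counter(group for group, channel in decisions if channel == "fast")
    return {group: fast[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group rate to the highest. A value well below 1
    (e.g. under the 0.8 'four-fifths' rule of thumb borrowed from US
    employment practice) flags a potentially discriminatory effect."""
    return min(rates.values()) / max(rates.values())

# Invented routing outcomes for two applicant groups.
sample = [("A", "fast"), ("A", "fast"), ("A", "slow"),
          ("B", "slow"), ("B", "slow"), ("B", "fast")]
rates = fast_lane_rates(sample)
print(rates)                          # {'A': 0.666..., 'B': 0.333...}
print(disparate_impact_ratio(rates))  # 0.5 -> would warrant scrutiny

An audit of this kind only surfaces a disparity; whether that disparity amounts to unlawful discrimination is then a legal question.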

Emerging guidance from public sector bodies such as the UK's Centre for Data Ethics and Innovation and the European Commission advises that providers and operators of AI tools take active steps to ensure the quality of the underlying data, amongst other things. One of JCWI's arguments is that the AI streaming tool is too opaque and secretive, so whether, and to what extent, the Home Office is able to explain the algorithm's decisions will be of interest to those monitoring this area; the lack of transparency and explainability of decisions is one of the most commonly cited disadvantages of AI. If JCWI are able to demonstrate that a disproportionate number of rejections or "slow lane" designations were made by the Home Office in respect of applicants of certain ethnicities, without a legally sound justification being provided, the Home Office may be found to be in breach of the EqA.
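On that last point, a disproportionate pattern of outcomes would typically be evidenced statistically before the question of justification arises. A minimal sketch using a chi-squared test of independence between applicant group and outcome (the counts are invented and scipy is assumed to be available):

from scipy.stats import chi2_contingency

# Invented counts: rows are applicant groups, columns are
# (granted, refused) outcomes.
table = [[900, 100],   # group A
         [700, 300]]   # group B

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p_value:.3g}")
# A very small p-value suggests the difference in refusal rates is
# unlikely to be chance; it does not by itself establish unlawful
# discrimination, which turns on whether a sound justification exists.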

In the JCWI case, it is interesting to note that the claimants do not appear to have elected to pursue a claim under data protection laws. We see the potential for such a claim, on the grounds that unlawful and unfair processing would infringe the first data protection principle at Article 5(1)(a) GDPR. It remains to be seen whether such claims are pursued as this judicial review develops.
