
LGBT+ History Month: Navigating AI risks and legal frameworks

Posted on 29 February 2024

The rapid proliferation of AI systems carries a host of risks, especially where those systems are vested with decision-making power that can affect marginalised communities, such as LGBTQIA+ individuals. This article explores algorithmic fairness, the unique risks AI poses to LGBTQIA+ communities, and the UK and EU legal frameworks that may come to govern these issues.

The LGBTQIA+ community and algorithmic fairness

Algorithmic fairness aims to address and rectify biases often embedded in machine learning systems. These biases can lead to discrimination in automated decision-making processes. For instance, AI systems might perpetuate existing prejudices, such as stereotypes about LGBTQIA+ people, or exhibit statistical biases, such as favouring cisgender candidates over transgender, non-binary, or intersex individuals due to the overrepresentation of cisgender records in historical data.

To promote algorithmic fairness, it is essential to understand how biases manifest, define what constitutes a fair outcome, and adapt AI systems to ensure they produce equitable results. This pursuit is intrinsically linked to the concept of explainability, which involves comprehending how AI systems process data and the logic behind their conclusions.
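
As a minimal illustration of what measuring one such bias might look like in practice, the sketch below computes a simple demographic parity gap, that is, the difference in selection rates between groups, for a hypothetical screening tool's decisions. The data, group labels and metric choice are placeholders for the purposes of illustration, not a prescribed methodology.

```python
# Illustrative only: a simple demographic parity check on a hypothetical
# screening model's decisions. Data and group labels are placeholders.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the proportion of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical outputs of an automated screening tool (1 = shortlisted).
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["cis", "cis", "cis", "cis", "cis",
          "trans/non-binary", "trans/non-binary", "trans/non-binary",
          "trans/non-binary", "trans/non-binary"]

print(selection_rates(decisions, groups))
print("Parity gap:", demographic_parity_gap(decisions, groups))
```

A persistent gap on a metric like this would prompt further investigation of the training data and model, which is where the explainability work described above becomes relevant.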

The LGBTQIA+ community encounters distinct obstacles in the realm of algorithmic fairness. LGBTQIA+ identities have historically been underrepresented or altogether omitted from datasets for various reasons:

  • Data on sexual and gender identity is often classified as sensitive.
  • The diversity and fluidity of LGBTQIA+ identities make them difficult to categorise.
  • LGBTQIA+ identities are inherently 'unobserved' characteristics; they are difficult to infer and quantify.
  • LGBTQIA+ individuals may be reluctant to disclose their identities out of fear or personal preference.

The dearth of representative data hinders the accurate measurement of biases and their impact on the LGBTQIA+ community. Moreover, collecting such data raises ethical concerns, as it could potentially expose individuals to risks if mishandled. This highlights the tension AI developers face between pursuing data-driven transparency and preserving the privacy of LGBTQIA+ individuals.

AI and LGBTQIA+ specific risks

LGBTQIA+ individuals encounter various risks in connection with AI, including:

Discrimination and bias

AI systems in employment screening and credit assessments may inadvertently discriminate against queer applicants due to biased training data. Transgender, non-binary, and intersex individuals may also encounter discrimination in everyday interactions with automated systems. In the UK, the number of people holding a gender recognition certificate is relatively low compared to the total number of those whose gender identity differs from the sex they were assigned at birth. This discrepancy between the lived experiences of transgender, non-binary and intersex individuals and their recorded data creates a risk of discrimination when interacting with automated tools in the financial, employment, and other sectors.
Such biased decision-making is exacerbated by the 'black box' or 'explainability' problem, whereby it is difficult to ascertain how an AI system arrives at a specific outcome.
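
One way practitioners probe this black box is with post-hoc explanation tooling. The sketch below, which assumes the open-source shap library and a scikit-learn model trained on synthetic placeholder data, shows how per-feature attributions for an individual decision might be inspected. It is illustrative only and does not describe any particular deployed system.

```python
# Illustrative sketch: inspecting which features drive an individual
# automated decision, using SHAP values on a placeholder model and data.
import numpy as np
import shap  # assumes the shap package is installed
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # placeholder applicant features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # placeholder outcomes

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)            # model-agnostic entry point
explanation = explainer(X[:1])                  # explain the first decision

# Per-feature contributions to the model's output for this applicant.
print(explanation.values)
```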

Privacy concerns

The digital footprints of LGBTQIA+ people can be exploited for surveillance and profiling, often carried out for targeted advertising. This raises the risk of sensitive information being leaked or misused, which can have severe real-life consequences, including outing, harassment, or violence. AI technology may infer sexual orientation, gender identity, and even personal information like HIV status from online behaviour and other sources, resulting in the infringement of privacy.

Online safety

AI systems trained on internet data may propagate offensive and anti-LGBTQIA+ language. Additionally, AI-driven online harassment and outing campaigns can target LGBTQIA+ individuals at a far larger scale than was previously possible. Stonewall reported that a considerable percentage of LGBTQIA+ individuals facing online abuse have reduced their online presence, altered their accounts to avoid further abuse, or left social media platforms altogether.

Content moderation

Automatic moderation tools can erroneously remove LGBTQIA+ content by classifying related language as sensitive content or as 'toxic' speech, silencing LGBTQIA+ voices online and restricting their right to freedom of expression.
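
To make this failure mode concrete, one way a platform might test for it is to compare how often a moderation classifier flags otherwise benign sentences that differ only in identity terms. The sketch below uses a deliberately naive placeholder classifier purely to illustrate the test; it does not represent any real moderation system.

```python
# Illustrative only: checking whether a moderation classifier flags benign
# sentences more often when LGBTQIA+ identity terms are present.
# The scoring function is a crude placeholder standing in for a real model.

def placeholder_toxicity_score(text: str) -> float:
    """Stand-in for a real moderation model; over-flags identity terms."""
    flagged_terms = {"lesbian", "gay", "transgender", "queer"}
    return 0.9 if any(t in text.lower() for t in flagged_terms) else 0.1

TEMPLATE = "I am proud to be {}."
identity_terms = ["a lesbian", "gay", "transgender", "queer"]
neutral_terms = ["a teacher", "a runner", "left-handed", "a parent"]

def flag_rate(terms, threshold=0.5):
    scores = [placeholder_toxicity_score(TEMPLATE.format(t)) for t in terms]
    return sum(s >= threshold for s in scores) / len(scores)

print("Flag rate, identity terms:", flag_rate(identity_terms))
print("Flag rate, neutral terms: ", flag_rate(neutral_terms))
# A large gap between these rates would indicate over-moderation of
# benign content that uses LGBTQIA+ related language.
```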

Regulation

To date, the EU has adopted a top-down approach to regulating services that provide or deploy AI, with the text of the proposed AI Act agreed in December 2023. Meanwhile, the UK Government has expressed a preference for a pro-innovation, sector-specific approach in its recent response to the 2023 AI White Paper consultation, eschewing an overarching legislative framework.

The risks to the LGBTQIA+ community outlined above will therefore hopefully be addressed by a combination of compliance with EU regulations and voluntary proactive measures taken by AI developers.

EU regulation

The EU AI Act takes a risk-based approach to AI regulation, designating some AI systems as 'prohibited' or 'high-risk' based on their intended functions and uses.

Prohibited AI systems

Prohibited AI systems include biometric categorisation systems that categorise individuals based on their biometric data to deduce or infer, among other things, their sex life, sexual orientation, and gender identity. In its proposal for the EU AI Act, the European Commission specifically acknowledges the challenges faced by trans people in their interactions with AI systems, for example as a result of biased AI facial recognition.

The EU Digital Services Act, which applies to providers of in-scope online platforms, separately prohibits targeted advertising that profiles consumers based on special categories of personal data, such as sexual orientation and health records (though not gender identity). Together, these EU legislative initiatives disincentivise the profiling of individuals based on protected characteristics. The EU AI Act, however, allows for the processing of special categories of personal data where necessary for bias detection and correction, enabling AI developers to pursue algorithmic fairness.

High-risk AI systems

High-risk systems (including those that pertain to particularly sensitive areas like education, employment, and the provision of essential services) will be subject to a range of obligations relating to risk assessment and mitigation, transparency, and record keeping, as discussed in our previous article on the EU AI Act.

GPAIs

The EU AI Act also imposes specific duties on General Purpose AI systems (GPAIs), and further obligations on GPAIs that present systemic risks such as bias and discrimination. These obligations, while onerous for providers and deployers of AI, may help guide the assessment and mitigation of risks in relation to marginalised communities.

UK regulation

The UK Government has confirmed that it has no plans to introduce specific AI legislation. However, various regulators have been tasked with producing guidance on their approach to AI by 30 April 2024, including an assessment of AI-related risks in their sector.

The Government's announcement was followed by the publication of the House of Lords' report on Large Language Models and Generative AI, which raises concerns about bias and discrimination, AI regurgitating private data, and the difficulty of interpreting black-box processes, all of which are likely to disproportionately affect the LGBTQIA+ community. The report advocates for mandatory safety assessments of high-risk AI systems.

Other legislation may address some of the risks facing the LGBTQIA+ community. For example, the new Online Safety Act imposes varying duties on in-scope online services to assess and mitigate the risks of online harms occurring on their platforms, such as the risk of users encountering discriminatory and hateful content, as well as the risk of online harassment.

This patchwork of legislation may prove difficult to navigate. Nevertheless, international market participants are taking proactive steps to mitigate the risks posed by AI to the LGBTQIA+ community, such as removing gendered language from image tags and vetting training datasets for hate speech. Some developers seek to engage with the LGBTQIA+ community and incorporate their perspectives into the design and implementation of AI systems. This includes developing fairness frameworks that account for the fluid and diverse nature of sexual and gender identities.

For those seeking to proactively review their AI and fairness practices, the ICO has published guidance on explaining decisions made with AI and addressing fairness, bias, and discrimination in relation to AI systems.

We have also designed a general AI risk assessment questionnaire, which can be accessed via our AI Resource Centre.
