
The impact of AI on pro bono

Posted on 7 November 2025

One of the key themes of this year's Pro Bono Week is 'stories of impact'. This article explores the ways in which artificial intelligence (AI) is already shaping pro bono work, and the potential risks of using AI in place of traditional legal services.

How is AI currently being implemented and how does it enhance pro bono work?  

Below are some notable examples of software recently created to further pro bono work in different areas. These projects were all nominated for the AI for Good Impact Award for Pro Bono Collaboration.

  • Defensa+AI - Developed by SidLabs Online LLP in partnership with the Sociedad Peruana de Derecho Ambiental, this tool is designed to combat environmental crime in the Peruvian Amazon, a vulnerable ecosystem. It detects illegal activities such as logging and mining, and sends data and real-time alerts so that authorities can act strategically and swiftly to prevent further crime. 
  • LexAid - This tool was also developed by SidLabs Online LLP, in partnership with AsyLex. It is a platform that generates legally grounded appeal drafts tailored to individuals who have suffered severe human rights violations and are now seeking asylum in Switzerland. The drafts are then reviewed by lawyers, allowing cases to move forward in a streamlined fashion without compromising accuracy. 
  • Chatbot Sophia – Developed by Spring ACT, this is the first "global AI-driven chatbot designed to support people affected by domestic violence". The chatbot offers a free 24/7 service in more than 20 languages, helping survivors understand their legal rights, explore safe options and gather potential evidence. 

Harnessing AI has proven to be not only an exciting new step in furthering pro bono work but also an essential step in maintaining existing levels of humanitarian action. This is exemplified by the UN Refugee Agency, which has had to embrace AI in an attempt to do more with fewer resources following the recent US Government cuts to foreign aid. The organisation has detailed the ways in which it is looking to use AI in its recently published AI Approach statement. These include the development of large language models to support the transcription of interviews in local low-resource languages such as Kurdish (Sorani). The model will also support asylum and resettlement procedures by replacing manual transcription services with automated ones, significantly reducing delays.

Can AI also assist in bridging the access to justice gap, and if so, what are the risks associated with it?  

The access to justice gap is one of the most pressing challenges facing modern legal systems, with millions of individuals unable to afford legal representation or access basic legal services.  

AI has proven to be instrumental for anyone looking to learn more about their rights and their position; the success of Chatbot Sophia, for example, demonstrates this. But users of emerging AI solutions should be aware of the risks associated with using AI in place of traditional legal services. Some of these are listed below:

  • Inaccuracy - AI systems may provide inaccurate or incomplete legal guidance, especially in complex cases requiring nuanced analysis. Individuals without legal training often cannot identify when AI-generated advice is insufficient or incorrect, potentially leading to harmful decisions. 
  • Lack of transparency - Many AI algorithms function as "black boxes," making it difficult for users to understand how conclusions were reached or to challenge errors in the output. This lack of transparency is particularly problematic in a legal context where reasoning is crucial.  
  • Bias - AI systems trained on historical legal data may perpetuate existing biases and inequalities, further disadvantaging marginalised groups. 
  • Lack of privacy - Users of many free or low-cost AI legal tools risk sharing sensitive personal information with platforms that lack adequate data protection or that monetise user data. Legal professional privilege typically does not apply to AI interactions, leaving users vulnerable to disclosure of confidential information. 
  • Potential introduction of a digital divide - The availability of AI legal tools may justify reductions in traditional legal aid funding, potentially worsening outcomes for vulnerable populations. Those most in need often lack the technological literacy or access to devices and internet connectivity required to use AI tools effectively.  

AI is clearly making a meaningful impact in pro bono work, with strides made where technologists and lawyers have collaborated to solve specific challenges. Increasingly, lawyers at firms like Mishcon with access to secure legal AI applications are also able to use them effectively to streamline their work. However, lawyer review is still needed, as seen with the LexAid product and the risks outlined above. Looking to the future, having a lawyer in the mix remains valuable, not least because lawyers are best placed to call out where the law needs to change because it no longer meets the needs of society. This is why the work of lawyers will always be a crucial aspect of pro bono work; AI should be used to increase efficiency, lower costs and bridge some of the gap in access to justice.

 
