In the recent case of Harber v The Commissioners for His Majesty’s Revenue and Customs, Ms Harber presented the tribunal with nine purported previous rulings to support her case, all of which were found to be 'hallucinations' produced by a Generative AI programme.
This highlights the need for employers to implement effective Generative AI policies to avoid, among other risks, the embarrassment and reputational damage caused by employees relying on fabricated information produced by Generative AI.
Facing a tax penalty in the First-tier Tribunal, the appellant, Ms Harber, argued that she had a 'reasonable excuse' for failing to notify HMRC of her capital gains liability. In support, she presented nine cases to the tribunal which she believed to be decisions supporting her claim.
At a reconvened hearing, it was found that the cases cited by Ms Harber were similar on their facts to real-life cases, but the parties and dates could not be identified. Moreover, while the outcomes of the cited cases all supported her position, the outcomes of the comparable real-life cases did not.
The Tribunal accepted that the cases relied on were fabricated, and that Ms Harber was unaware of this. It found as a fact that the cases had been created 'by an AI system such as ChatGPT'.
This follows the New York case of Mata v Avianca in July 2023, in which a New York lawyer used ChatGPT to identify cases in support of his client's claims; those cases similarly turned out to be 'bogus' hallucinations generated by ChatGPT.
The US lawyer told the court that he was unaware that ChatGPT's output could be false.
What this means
The above cases are indicative of the challenges presented by the growing use of Generative AI solutions across all sectors of work and life.
As of July 2023, Deloitte reported that four million people in the UK had used ChatGPT for work purposes, with 28% of those using it weekly. Globally, the largest demographic of ChatGPT users is those aged between 25 and 34.
The benefits of Generative AI tools are clear to anyone who has experimented with them. It is increasingly tempting for employees to use Generative AI's ability to produce human-like text for tasks such as content creation, research projects and succinctly summarising long articles, among many other potential uses.
The above cases should serve as a reminder that there are significant risks when employees use Generative AI tools without guidance from their employer on appropriate use. In addition to reputational damage and embarrassment, risks to an employer include copyright infringement, breaches of data protection law and discrimination.
While generated output may sound plausible, employees should be reminded that responses may be inaccurate, and any generated information used for business purposes should be thoroughly fact-checked.
We recommend that all employers put a Generative AI policy in place and provide accompanying training. The policy should guide employees on appropriate uses of Generative AI in the workplace, supported by additional safeguards.
If you would like to discuss implementing a generative AI policy in your workplace or additional training around the use of Generative AI, please contact the Employment team.