
AI in the workplace: Generative AI

Posted on 10 October 2023

Generative AI can create diverse outputs from user prompts, offering efficiency benefits for employees. However, the technology presents risks in relation to data protection, intellectual property, discrimination, and inaccurate outputs. We recommend that employers implement a generative AI policy to guide employees on how to avoid these legal and reputational risks.  

What is generative AI?

Generative AI is the process of AI algorithms generating or creating an output from user instructions or prompts, based on the datasets they are trained on. This could take the form of, for example, text, images, video, code, data or 3D renderings.

ChatGPT has propelled generative AI into the mainstream, becoming the fastest-growing consumer app in history by reaching 100 million users within two months of its launch.

Trained on billions of data points from across the internet, ChatGPT is able to interact with users in a simple conversational way, reacting to prompts and understanding and generating text in a human-like fashion. Other popular examples of generative AI include DALL-E, Midjourney, GitHub Copilot and Stable Diffusion.

Practical uses of generative AI

Generative AI tools can perform a broad range of functions, including:

  • contributing a viewpoint in response to a user prompt
  • creating textual output based on a given subject
  • summarising long documents or articles (in a specified number of words if required)
  • providing answers of a specific length (e.g. maximum 500 words), or style (e.g. "corporate", "empathetic", "light hearted" etc.)
  • writing code from a written description of the required function

Generative AI can perform these tasks in a fraction of the time it would take a human, delivering significant time and efficiency gains for users, while adopting a consistent tone and style (as instructed by the user).
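To make the length and style options above concrete, the following is a minimal, purely illustrative sketch of how such constraints might be composed into a prompt before it is sent to a generative AI tool. The build_prompt helper is hypothetical, invented for this example; it is not any vendor's API.

```python
def build_prompt(task, max_words=None, style=None):
    """Assemble a prompt with optional length and tone constraints.

    Illustrative only: generative AI tools simply take the full
    instruction as free text; this helper just composes that text.
    """
    parts = [task]
    if max_words is not None:
        parts.append(f"Answer in no more than {max_words} words.")
    if style is not None:
        parts.append(f"Use a {style} tone.")
    return " ".join(parts)

# e.g. a 500-word, corporate-toned summary request
prompt = build_prompt(
    "Summarise the attached consultation note.",
    max_words=500,
    style="corporate",
)
```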

Use of generative AI tools in the workplace therefore carries significant potential benefits for businesses. However, before permitting staff to use them, employers should consider the risks associated with such publicly available tools, and the ways in which such risks might be mitigated.

What are the risks of using generative AI?

In theory, the output that a generative AI solution produces in response to prompts and user inputs constitutes new content, capable of being owned by the user.

However, using generative AI comes with a degree of risk across a range of legal disciplines, such as data protection, intellectual property and discrimination. Careless or inappropriate use can also cause reputational embarrassment resulting from "hallucinations", as explained further below.

Data protection

User data is often stored and reused for model training. There is a concern that this could lead to possible exposure of confidential/proprietary information to the public.

For example, if an organisation were to ask a generative AI tool: "what is the best approach for Company X Ltd to make three redundancies?", another user asking "is Company X Ltd a good place to work?" could conceivably receive a response stating "Company X Ltd recently considered making three employees redundant".
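One practical mitigation, sketched here purely for illustration, is to strip identifying details from prompts before they leave the organisation. The term list and placeholder below are invented for this example; a real deployment would need a centrally maintained data-classification process rather than a hard-coded list.

```python
import re

# Invented examples of terms an organisation would not want sent to an
# external AI tool; in practice this list would be centrally maintained.
SENSITIVE_TERMS = ["Company X Ltd", "Jane Smith"]

def redact(prompt):
    """Replace each sensitive term with a neutral placeholder (case-insensitive)."""
    for term in SENSITIVE_TERMS:
        prompt = re.sub(re.escape(term), "[REDACTED]", prompt, flags=re.IGNORECASE)
    return prompt

safe = redact("What is the best approach for Company X Ltd to make three redundancies?")
```

Redaction of this kind reduces, but does not remove, the risk: surrounding context can still identify an organisation, so policy and training remain essential.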

Intellectual Property

To create output, generative AI solutions must first be trained on billions of images, text files and videos from the internet.

The raw data being processed was created by an author and is then reimagined and repurposed by the AI to generate the output. Although the output is not the same as the raw data, it may contain elements of the author's original work, which carries IP infringement risks and raises questions as to who owns the output.

For more information, download our Generative AI & Intellectual Property guide here.

Explore our Generative AI – Intellectual property cases and policy tracker here.

Discrimination

Bias in AI systems can arise from the data used to train the model, the algorithms and techniques employed, and the societal and cultural factors embedded within the training data.

Since AI models like ChatGPT learn from large amounts of text, they can inadvertently learn biases present in the training data. For example, if text contains biased language or reflects societal stereotypes, the model may generate responses that perpetuate or amplify those biases. The resulting output from generative AI is therefore susceptible to bias and misinformation.

Hallucinations

Generative AI tools sometimes produce plausible-sounding but factually inaccurate output, known as "hallucinations". It is therefore critical that users check the output of such tools and do not simply take it at face value.
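As a purely illustrative sketch of the "do not take output at face value" principle, a workflow could refuse to treat AI-generated text as final until a person has marked it as checked. All names below are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class AIDraft:
    text: str
    human_verified: bool = False  # set by a person after fact-checking

def publishable(draft):
    """AI-generated text is only releasable once a human has verified it."""
    return draft.human_verified

draft = AIDraft("Summary of case law produced by a generative AI tool.")
# publishable(draft) stays False until a reviewer sets draft.human_verified = True
```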

General principles for using generative AI

Employees should be advised always to exercise caution when using generative AI models, carefully considering what rights and restrictions apply to information and whether it can or should be shared.

An employer has no oversight of how data entered into web-based generative AI tools is used. For example, if employees input confidential or sensitive information or personal data into such tools, this could not only amount to a breach of legislation, but could also have damaging consequences for individuals, groups of individuals or the organisation more generally.

Liability and risk, particularly for generative AI solutions that have been mass-released free of charge, are typically passed on to the user through broad contractual disclaimers. Generative AI outputs should therefore not be used as an employee's only source of information on any given topic.

Why it is important to implement a generative AI policy

We recommend that any organisation which uses (or proposes to allow the use of) generative AI tools should have an appropriate policy in place for staff. This will help ensure that the organisation obtains the advantages of generative AI, while reducing the associated risks.

A good generative AI policy will therefore provide guidance for employees on:

  • how to avoid confidentiality and data breaches;
  • how to avoid using inaccurate information created by a generative AI system, whether that be a hallucination, or potentially discriminatory output;
  • how to avoid infringing the intellectual property rights of others;
  • when to flag to colleagues that a generative AI tool has been used to create a piece of content; and
  • responsible and ethical use of generative AI tools more generally.

If you would like to discuss generative AI in the workplace, and implementing a generative AI policy in more detail, please contact Daniel Gray or another member of our Employment team.
