As many people are all too well aware, recruitment can be a long and drawn-out process. To maximise efficiency and find the best talent, AI and machine learning are increasingly being relied upon to assist with recruitment. In this fast-evolving world, buying or selling a recruitment business powered by AI presents unique challenges and opportunities. Whether you're an AI-powered recruitment business looking to sell, or a traditional recruiter looking to acquire AI capabilities, you'll need to understand what warranties to look for, or ask for, in a transaction. In this blog, we explore the key areas to consider.
Risk categorisation under the EU AI Act
The EU AI Act (the Act) governs the development and use of AI. It becomes fully applicable from 2 August 2026, although certain requirements, including those on prohibited practices and AI literacy, are already in force. Non-EU businesses will be caught by the Act if they place AI systems on the market in the EU or if the output of their AI systems is used by people within the EU.
The cornerstone of the Act is its risk-based approach, which categorises AI systems (the way AI models are integrated into functional solutions to address specific needs) into unacceptable risk, high risk, limited risk and minimal risk. In many instances, the key element is not the underlying AI system itself, but the use case for which the AI system is deployed. Providers of AI systems often do not control how their AI system and its components are adapted or used downstream, and this can create difficulties when allocating responsibility. It is therefore important to understand how the AI system is being deployed, and what the associated risk is, before working out how to mitigate that risk.

The use of an AI system in recruitment or job selection is an example of a deployment that could be considered high risk, because it has the potential to materially influence the outcome of decision-making and could pose a significant risk to an individual's fundamental rights. AI models which may be integrated into downstream systems or applications, such as foundation models (large-scale models that provide a base for more specialised applications) and generative AI models (models capable of generating content), are also subject to risk-based obligations under the Act, and appropriate due diligence should be done on these too.
Given the use cases for which AI is deployed in the recruitment industry, Sellers and Buyers alike need to be keenly aware of the potentially heightened associated risk. Buyers in particular will want to ensure that the Target carried out appropriate due diligence on any third-party AI tools before integrating them into its business, and has appropriate policies and procedures in place governing the use of those tools.
AI regulations: Deployer or provider?
The Act identifies a number of different roles in connection with AI systems and AI models, but the key ones likely to apply are 'provider' and 'deployer'. Different responsibilities and liabilities attach to each role, and this will affect what warranties are needed, so it is important to first identify which role the user is playing. This can be a complex question to answer and will depend on the particular context: for example, who initially developed the AI model, who developed the AI system, and who integrated the model into that system. It is also possible for a party to be both a provider and a deployer. For example, a business that develops an AI system integrating an AI model will be a provider of that AI system; if it then uses the system it developed, it will also be a deployer.
Parameters, hyperparameters and weights: AI and Intellectual Property (IP)
IP plays a pivotal role in any business transaction where AI is involved, serving as a key asset that can significantly influence the deal's value. As well as demonstrating clear ownership of traditional IP (including patents, copyright, trademarks and trade secrets), free from any pending disputes or infringements, Targets should be able to identify the ownership of AI-specific IP, including parameters, hyperparameters, weights and related artefacts such as training or evaluation datasets, code (preprocessing, inference and training scripts), documentation, metadata and supporting tools. These elements together define how a model learns from data and makes predictions, and can significantly affect a model's performance and accuracy. An IP audit will enable the parties to verify the scope and validity of these rights, as well as any licensing agreements (e.g. datasets licensed from a third-party supplier) or collaborations that could affect IP ownership or usage rights. The potential for future innovation should also be considered, including how the existing IP and any AI-generated assets might be leveraged or expanded. Sellers benefit from a well-documented IP strategy, which can enhance the business's value, while Buyers should evaluate how the IP aligns with their strategic objectives.
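To make the distinction concrete, the sketch below (illustrative only, using scikit-learn and entirely hypothetical data) shows how hyperparameters are choices made before training, while the parameters or weights are learned from the training data. Each element, the training code, the data and the trained weights, can sit with a different owner, which is exactly what an IP audit needs to untangle.

```python
# Illustrative sketch only: the difference between hyperparameters
# (chosen before training) and parameters/weights (learned from data),
# using scikit-learn's LogisticRegression as a stand-in for a
# candidate-screening model. All names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows are candidates, columns are features
# (e.g. years of experience, skills-test score).
X = np.array([[1, 55], [3, 70], [5, 82], [8, 90]])
y = np.array([0, 0, 1, 1])  # 1 = progressed to interview

# Hyperparameters: set by the developer before training. These choices,
# and the code embodying them, are part of the AI-specific IP estate.
model = LogisticRegression(C=1.0, max_iter=1000)

# Parameters/weights: learned from the training data. Ownership of the
# trained weights can differ from ownership of the data or the code.
model.fit(X, y)
print("Hyperparameters:", model.get_params())
print("Learned weights:", model.coef_, model.intercept_)
```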
For a detailed overview of navigating AI and IP, please download our guide: Generative AI and IP: Navigating opportunities and risk.
Data Protection
AI models are trained on large datasets, which may include proprietary, user-generated or publicly available data. Where a business owns proprietary AI technology, a key question is whether it legally obtained, or has the right to use, the content used to train its AI models. If the Target lacks clear rights to the training data, the resulting AI models could be legally vulnerable. Central to this consideration is compliance with data protection laws, including the GDPR and the Data Protection Act 2018. For example, a Target may use data collected from customers or users well before it contemplated developing its AI technology, such that the privacy policies in place at the time of collection did not notify users of the Target's intention to use the data for that purpose. Any use by the Target of data for a purpose for which it has not obtained appropriate consent (or established another lawful basis) may breach data protection laws.
In a recruitment context, data flows are complex, with multiple parties involved, and working out whether you are a "controller", a "processor" or both will be imperative.
Sellers and Buyers alike should also be aware of the collection and use of special category data, such as racial or ethnic origin, religious beliefs, health data or sexual orientation. While such data can be used to promote diversity and inclusion within an organisation, a recruiter that fails to account for the risks of inferring special category data, or to identify an appropriate legal basis for doing so, risks breaching data protection laws and facing reputational and financial consequences.
Learn more about how to navigate the challenges of inferred special category data within recruitment.
Discrimination
Using AI in recruitment can also give rise to significant discrimination risks under the Equality Act 2010. If an algorithm (intentionally or not) filters out candidates with certain protected characteristics such as race, sex or age, it may well lead to unlawful direct or indirect discrimination. Historical data tainted by bias could perpetuate unfair hiring practices, potentially leaving recruiters vulnerable to legal claims and significant reputational damage. Ensuring that AI tools are regularly tested, audited for bias and calibrated to treat all candidates fairly can help to mitigate these risks.
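As a purely illustrative example of what such testing might involve (not a test prescribed by the Equality Act 2010), a basic bias audit could compare selection rates across candidate groups. The sketch below uses hypothetical data and borrows the US EEOC "four-fifths" ratio solely as a rough screening benchmark for flagging disparities worth investigating.

```python
# Illustrative sketch only: a minimal adverse-impact check comparing
# selection rates across groups. The 0.8 threshold is the US EEOC
# 'four-fifths' heuristic, used here purely as a rough benchmark; it is
# not a legal test under the Equality Act 2010. Data are hypothetical.
from collections import Counter

# (group, selected?) outcomes from a hypothetical screening tool
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]

applied = Counter(group for group, _ in outcomes)
selected = Counter(group for group, chosen in outcomes if chosen)
rates = {group: selected[group] / applied[group] for group in applied}

# Compare each group's selection rate to the highest-rate group.
baseline = max(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rate:.0%}, ratio {ratio:.2f} ({flag})")
```

A disparity flagged by a check like this is a prompt for investigation, not proof of discrimination; the appropriate response will depend on the tool, the data and the context.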
For more detailed guidance, please reach out to a member of the team.