The UK's AI White Paper is perhaps most neatly summarised by the following statement: "rushed attempts to regulate AI too early would risk stifling innovation. Our approach aligns with this perspective." There is no new legislation and no new regulator.
Instead, the Government has offered a flexible approach intended to keep pace with rapidly changing technology. Its proposal is to implement a framework underpinned by five principles to guide and inform the responsible development and use of AI across all sectors of the economy. These principles would initially be issued on a non-statutory basis and implemented by existing regulators through guidance. The framework would be supported by centralised functions in Government, which would monitor and assess its impact and facilitate co-operation between regulators and with the international community.
For many, this approach will stand in stark contrast to the EU's rule-based approach being proposed in the EU AI Act. In comparison, the UK approach has some upsides (flexibility and a pragmatic approach) but a number of downsides (most notably a continuing lack of certainty).
The role of the central functions in Government would be critical and would require substantial investment in resource and expertise. Some regulators may be under-resourced and lack the AI expertise to deliver effectively, while others could be too heavy-handed in their approach without a clear steer on how they should implement the framework.
Whilst the Government's initial plans do not involve new legislation, it may in the future implement a statutory duty requiring regulators to have due regard to the principles, although it will not pursue this if the initial framework approach is successful. It may also consider appointing an independent body in the future to oversee the central functions.
The five principles are set out below.
Safety, security and robustness
AI has the capacity to autonomously develop new capabilities and functions. Regulators would therefore need to assess safety and risk management in their sector and provide guidance on good cybersecurity and privacy practices.
Appropriate transparency and explainability
Information about the AI system, such as its purpose, would need to be communicated appropriately and decision-making processes would need to be explainable. This may involve the regulator collecting information about the nature and purpose of the AI system, the data being used, the logic and process used for the AI system, as well as setting out requirements such as product labelling. Due to the nature of AI systems, explainability may be difficult in some cases.
Fairness
Regulators would need to ensure that AI systems are designed, deployed and used in a fair way. They should not undermine legal rights, discriminate or create unfair market outcomes.
Accountability and governance
AI has the capacity to make decisions autonomously, so it is important to establish ownership and accountability. Regulators may need to provide guidance on how to demonstrate accountability and governance, for example through impact assessments or audits.
Contestability and redress
AI systems may reproduce biases or cause harm, so it is important that outcomes can be contested. Regulators would need to provide guidance, including relevant information on where those affected by AI harms can direct a complaint or dispute.
The Government suggests using practical tools to aid compliance with the principles. It will launch a portfolio of AI assurance techniques in Spring 2023 and thinks regulators should encourage adoption of technical standards as part of the framework.
The white paper identifies that some technologies (most notably, major emerging technologies such as autonomous vehicles and the Large Language Models underpinning tools like ChatGPT) are unlikely to fall directly within the remit of any single regulator. It recognises that there is further work to be done to identify these gaps and address them through the centralised functions. It also identifies that there will be risks which cut across sectors, and that joint guidance from regulators may be required.
Monitoring, assessment and feedback
In order to successfully adapt and develop the framework, monitoring and feedback would be needed to spot issues quickly. This would mean gathering data, providing advice on improvement and supporting regulators to undertake internal monitoring and evaluation.
Support coherent implementation of the principles
As the approach involves multiple regulators, oversight over how the principles are being applied would be needed to ensure consistency. This would involve developing central guidance for regulators, identifying any problems for regulators such as insufficient powers and spotting conflicts between regulators.
Risk assessment across sectors
To support a coherent approach, risk assessment would be needed centrally across sectors. This would involve developing an AI risk register, monitoring known risks, identifying and prioritising new risks, supporting joint guidance from regulators and identifying risks that may fall in gaps between regulators.
Support for innovators
The Government proposes to establish a multi-sector AI sandbox. This would help innovators navigate regulatory complexity and get their products to market more quickly.
Education and awareness
This would involve running awareness campaigns for consumers, businesses and the general public and encouraging regulators to promote them.
Horizon scanning
To anticipate opportunities and risks, horizon scanning would be needed to monitor developments in AI so that the framework can respond. This may involve gathering insight from stakeholders in industry and academia in order to maximise opportunities.
Align with international rules
For businesses operating overseas, it is important for the UK to align internationally on approaches to risk management, regulation and technical standards. This would involve monitoring developments and supporting cross-border collaboration.
In addition, the white paper acknowledges the complexity of allocating responsibility across existing supply chain actors within the AI life cycle and therefore proposes not to intervene at this stage. Contracts will need to continue to do the heavy lifting of allocating responsibility here.
The Government has launched a consultation to address questions set out in the white paper on its proposals. The consultation closes on 21 June 2023.
Think Digital Partners