Last month (December 2023), the International Organization for Standardization (ISO) published a new international standard (ISO/IEC 42001) which it dubs the world's first AI management system (AIMS) standard. It provides a voluntary framework for providers and users of AI solutions to help manage risk and demonstrate responsible use of AI.
Tech suppliers have become very familiar with customers asking for ISO 27001 certification (the international standard for information security), and AI companies should expect to see similar requests in respect of this new standard in due course.
If you would like any support complying with the new standard, our technology team and our MDR Cyber service can help.
What kind of organisation should consider adhering to the Standard?
Any organisation of any size, in any industry, that uses, provides or develops AI products or services.
What will adherence with the Standard provide?
It can be used to reassure customers, suppliers, regulators and other interested parties that the organisation has understood the risks and put in place an effective AIMS to ensure that AI is used, developed and deployed responsibly.
Is the Standard prescriptive?
No. Subscribing organisations will audit themselves against their own AIMS, and the Standard's flexibility (similar to that of ISO 27001) allows each organisation to design an AIMS appropriate to its context, strategic direction, AI-specific role and responsibilities.
What are the key points to note?
Context and objectives
Organisations should:
- understand how the technology is used, developed and sold;
- consider the technology's unique features and environmental impact; and
- determine their strategic direction.
Internal and external factors (such as policies, personnel, finances and regulatory restrictions) should be factored into the above.
Objectives for the development, use or provision of AI should:
- be achievable;
- be capable of being measured, monitored, communicated and updated; and
- take account of resource availability, deadlines and roles.
Risk analysis and assessment
Scoping and assessment of risk should:
- refer to policies and objectives; and
- consider the impact on the rights and freedoms of individuals, groups, society, culture and the environment in the territory.
Systems and protocols should then be applied to treat and mitigate the identified risks.
Policies and processes
Existing policies will need to be updated, and new policies put in place, to govern:
- how the AI can be used, developed and sold;
- restrictions on its use, development or sale;
- approvals and controls required throughout the lifecycle;
- consequences of failure to comply; and
- timelines for periodic reviews.
Interested parties
The needs and expectations of interested parties must be understood and addressed:
- Employees: must be aware of the AIMS and understand both their role in it and the need to comply with it;
- Suppliers: must be aware of the organisation's strategic objectives so that they can align their roles and responsibilities with them;
- Customers: needs, expectations and concerns need to be understood and appropriately addressed; and
- Supervisory authorities: may need to monitor compliance with law, regulation and the standard.
Documentation
Organisations should ensure that:
- decisions, actions and considerations are documented;
- documented actions are readable; and
- documents are made available to interested parties where required.
Performance and data management
The technology's success will depend on data quality, so details of the data's provenance, reliability, categories and intended use should be recorded. Resources should then be directed to correcting identified data risks and mitigating unintended performance.
Review and revision
The AIMS should be continuously improved, with both the system itself and the implemented policies and controls reviewed at defined stages, and at least annually. Impact assessments should also be conducted periodically and in response to any problems that arise.