The European Commission established the High-Level Expert Group on Artificial Intelligence to prepare a set of AI ethics guidelines and investment recommendations. These voluntary guidelines are addressed to all stakeholders designing, developing, deploying, implementing, using or being affected by AI.
The Group concludes that Trustworthy AI has three components, which should be met throughout an AI system's entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, adhering to ethical principles and values; and (3) it should be robust, performing in a safe, secure and reliable manner so that it does not cause unintentional harm.
The following ethical principles should be respected in the development, deployment and use of AI systems:
- develop, deploy and use AI systems in a way that adheres to the ethical principles of respect for human autonomy, prevention of harm, fairness and explicability, principles which AI practitioners should always strive to uphold,
- pay particular attention to vulnerable groups and to asymmetries of power or information, including by ensuring that individuals are free from unfair bias, discrimination and stigmatisation, and that they can contest and seek effective redress against decisions made by the AI system, and
- acknowledge that AI systems pose certain risks and may have negative impacts that can be difficult to anticipate, identify or measure, and adopt measures to mitigate these risks where appropriate and proportionate to the magnitude of the risk.
The development, deployment and use of AI systems should meet the following seven non-exhaustive requirements for Trustworthy AI:
- human agency and oversight (including giving users the knowledge to comprehend AI systems and to override a decision made by a system),
- technical robustness and safety (including resilience to attack, a fallback plan, accuracy and reproducibility),
- privacy and data governance (including respect for privacy and the integrity of data),
- transparency (including traceability, explainability and identification as AI systems),
- diversity, non-discrimination and fairness (including the avoidance of unfair bias, accessibility and stakeholder participation),
- environmental and societal wellbeing (including sustainability, social impact and democracy), and
- accountability (including auditability, minimisation and reporting of negative impacts, trade-offs and redress).
The Guidelines provide detailed explanations of each requirement, and suggest technical and non-technical methods to achieve each of them.
The Group also emphasises the importance of communicating, in a clear and proactive manner, an AI system's capabilities and limitations, and of facilitating the traceability and auditability of such systems. The Guidelines include a draft, non-exhaustive list of steps that AI practitioners should take to achieve Trustworthy AI; the revised list will be published in early 2020.
The Guidelines should be considered throughout the life cycle of an AI system, with ethics as a core pillar for developing AI, "one that aims to benefit, empower and protect both individual human flourishing and the common good of society".
The Group has also published its second deliverable, the AI Policy and Investment Recommendations; additional sectoral guidance complementing the Guidelines will be explored in the future.