(75)
|
Technical robustness is a key requirement for high-risk AI systems. They
should be resilient against harmful or otherwise undesirable behaviour that may result
from limitations within the systems or the environment in which the systems operate (e.g.
errors, faults, inconsistencies, unexpected situations). Therefore, technical and organisational
measures should be taken to ensure robustness of high-risk AI systems, for example by designing
and developing appropriate technical solutions to prevent or minimise harmful or otherwise
undesirable behaviour. Those technical solutions may include, for instance, mechanisms enabling the
system to safely interrupt its operation (fail-safe plans) in the presence of certain anomalies
or when operation takes place outside certain predetermined boundaries. Failure to protect
against these risks could lead to safety impacts or negatively affect fundamental rights,
for example due to erroneous decisions or wrong or biased outputs generated by the AI system.
|
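The fail-safe mechanism the recital refers to, safely interrupting operation in the presence of anomalies or outside predetermined boundaries, can be sketched in a few lines. This is a purely illustrative example, not part of the legal text; all class names, parameters, and thresholds below are invented for illustration:

```python
from dataclasses import dataclass
import math


@dataclass
class OperatingBounds:
    """Hypothetical predetermined boundaries for safe operation."""
    min_value: float
    max_value: float


class FailSafeController:
    """Illustrative fail-safe wrapper: interrupts operation when an input
    reading is anomalous (NaN) or falls outside the predetermined bounds."""

    def __init__(self, bounds: OperatingBounds):
        self.bounds = bounds
        self.halted = False

    def step(self, reading: float) -> str:
        # Once halted, the system stays interrupted until a safe restart.
        if self.halted:
            return "halted"
        # Anomaly check (NaN reading) or boundary check on the safe envelope.
        if math.isnan(reading) or not (
            self.bounds.min_value <= reading <= self.bounds.max_value
        ):
            self.halted = True  # fail-safe: safely interrupt operation
            return "halted"
        return "ok"


controller = FailSafeController(OperatingBounds(0.0, 100.0))
print(controller.step(42.0))   # within bounds
print(controller.step(250.0))  # outside bounds, triggers the fail-safe
print(controller.step(50.0))   # remains interrupted
```

The design choice of latching the halted state (rather than resuming automatically) reflects the recital's emphasis on preventing harmful behaviour: a deliberate, verified restart is required after an anomaly.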