(155)
|
In order to ensure that providers of high-risk AI systems can take into
account experience in the use of high-risk AI systems to improve their systems and the
design and development process, or can take any possible corrective action in a timely
manner, all providers should have a post-market monitoring system in place. Where relevant,
post-market monitoring should include an analysis of the interaction with other AI systems
including other devices and software. Post-market monitoring should not cover sensitive
operational data of deployers which are law enforcement authorities. This system is also key to
ensuring that the possible risks emerging from AI systems which continue to ‘learn’ after being
placed on the market or put into service can be addressed more efficiently and in a timely manner. In this
context, providers should also be required to have a system in place to report to the
relevant authorities any serious incident resulting from the use of their AI systems, meaning
an incident or malfunctioning leading to death or serious damage to health, serious and
irreversible disruption of the management and operation of critical infrastructure,
infringements of obligations under Union law intended to protect fundamental rights or serious
damage to property or the environment.
|