(44)
There are serious concerns about the scientific basis of AI systems aiming to
identify or infer emotions, particularly as the expression of emotions varies considerably
across cultures and situations, and even within a single individual. Among the key
shortcomings of such systems are their limited reliability, lack of specificity and limited
generalisability. Therefore, AI systems identifying or inferring emotions or intentions of
natural persons on the basis of their biometric data may lead to discriminatory outcomes and can
be intrusive to the rights and freedoms of the persons concerned. Considering the imbalance of
power in the context of work or education, combined with the intrusive nature of these systems,
such systems could lead to detrimental or unfavourable treatment of certain natural persons or
whole groups thereof. Therefore, the placing on the market, the putting into service, or the use
of AI systems intended to be used to detect the emotional state of individuals in situations
related to the workplace and education should be prohibited. That prohibition should not cover
AI systems placed on the market strictly for medical or safety reasons, such as systems intended
for therapeutical use.