Article 5(1)(f) AI Act provides that “the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and educational institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons” is prohibited. This blogpost focuses on the third element: the use of AI systems to infer emotions in educational institutions. The choice is deliberate, as it is in these contexts that legal and ethical questions arise most saliently – probabilistic claims about a student’s inner state directly affect their daily lives and are especially prone to foreseeable misuse.
An article by Mateusz Kupiec
Why a prohibition at all?
Inferring emotions involves using signals such as facial expressions, tone of voice, or gaze to generate probabilistic statements about how a person is feeling. Typically produced by systems in affective computing – an area focused on designing technology to detect and respond to human emotions – these inferences serve as tools to support teachers and maximize their students’ potential. However, emotions are complex, context-dependent, and culturally specific – a smile may signal joy, embarrassment, or irony – so translating subtle cues into fixed categories like “attentive” or “frustrated” carries an inherent risk of misinterpretation.
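For readers less familiar with the technology, the minimal sketch below (illustrative Python only; the feature names, emotion labels, and weights are hypothetical and not taken from any real product or the sources discussed here) shows the basic mechanic the legal analysis presupposes: numerical features extracted from biometric signals are mapped onto a small, fixed set of emotion categories with probability scores.

```python
# Illustrative sketch of a toy emotion classifier of the kind discussed above.
# Labels, features, and weights are hypothetical, not drawn from any real system.
import math

EMOTION_LABELS = ["attentive", "bored", "frustrated", "confused"]

def softmax(scores):
    """Convert raw scores into probabilities that always sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def infer_emotion(features):
    """Map facial-expression features (e.g. smile intensity, gaze on screen,
    brow furrow) to a probability distribution over fixed emotion categories.

    Note: whatever the input, the output is a confident-looking distribution
    over these four labels - the ambiguity of a smile (joy? embarrassment?
    irony?) is erased by the fixed category set."""
    # Hypothetical linear weights per label, one weight per input feature.
    weights = {
        "attentive":  [0.9, 0.4, -0.2],
        "bored":      [-0.6, -0.8, 0.1],
        "frustrated": [-0.3, 0.2, 0.7],
        "confused":   [0.1, -0.1, 0.5],
    }
    scores = [sum(w * f for w, f in zip(weights[label], features))
              for label in EMOTION_LABELS]
    return dict(zip(EMOTION_LABELS, softmax(scores)))

# Example input: [smile_intensity, gaze_on_screen, brow_furrow]
print(infer_emotion([0.7, 0.9, 0.3]))
```

The structural point is that such a system always returns a tidy probability distribution over predefined labels, however ambiguous the underlying expression – precisely the kind of output that can quietly harden into a governance signal.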
The AI Act does not attempt to define a level of “good enough” emotion recognition. Instead, it prohibits ex ante practices considered intrinsically intrusive in contexts characterized by structural power imbalances. Inferring inner states in educational institutions is treated as such a practice. Recital 44 AI Act highlights that emotion inferences are context-dependent and error-prone, and that, once turned into governance signals, they are especially difficult for the individuals concerned to meaningfully resist.
Some commentators criticize the EU legislator for grounding the ban in Article 5(1)(f) AI Act too narrowly on the weak scientific basis of emotion-recognition systems. According to this view, once technical flaws are resolved, the normative foundation could disappear, resulting in a loophole. This interpretation is not shared here. The prohibition should not be reduced to current technological limitations. Its primary rationale lies in the structural imbalance of power between institutions and individuals. Turning emotions into governance signals entrenches asymmetry, enabling invisible and difficult-to-contest surveillance. Even the most accurate systems would not change this. The law therefore establishes a boundary not only because current systems lack reliability, but also because mediating inferences about emotions through AI in workplaces and educational institutions undermines dignity and increases dependency – a concern explicitly noted in Recital 44 AI Act.
The scope of Article 5(1)(f) AI Act
The prohibition, as analysed here, concerns the use of AI systems to infer emotions of a natural person in educational institutions. Five elements are particularly demanding in practice, and the European Commission has sought to clarify them in its guidelines on prohibited AI practices: “use”, “inference”, “biometric data”, “emotions”, and “educational institutions”.
First, “use” must be read functionally. What matters is how the system is relied upon once embedded in institutional routines. The concept extends to misuse and reasonably foreseeable misuse.
Second, “inference” must be read together with related functions such as “detection” and “recognition.” The AI Act distinguishes these three, but the wording of Article 5(1)(f) AI Act refers only to inference. At first glance, this could suggest a narrow scope. Yet both Recital 44 AI Act and the Commission’s guidance emphasize that recognition – matching biometric patterns, such as facial expressions or voice, to predefined categories – forms the first step in a broader process of machine inference. Detection, recognition, and inference are not easily separable once the output is used to draw conclusions about a person’s inner state. This means that Article 5(1)(f) may apply whenever an AI system processes biometric data in a way that leads to automated assessments of emotions or intentions in educational institutions.
Third, the data that enables such inferences must be biometric in character; otherwise, the practice does not fall under Article 5(1)(f). Inferring emotions from written text alone therefore falls outside the prohibition. The AI Act defines biometric data broadly, encompassing personal data derived from the technical processing of physical, physiological, or behavioural characteristics, without requiring unique identification. The GDPR definition is narrower, linking biometric data to identification or authentication. In educational contexts, embeddings derived from face, voice, or gaze that categorize a person suffice to bring the use of a system within the prohibition, even if deployers argue that no identification is performed in the strict sense. By contrast, so-called soft biometrics such as clothing or hairstyle usually do not qualify on their own as biometric data under either the AI Act or the GDPR.
Fourth, the scope of inference extends beyond emotions. Although the operative wording of Article 5(1)(f) AI Act refers only to emotions, the Act’s definition of an emotion recognition system covers both emotions and intentions, and the prohibition is read accordingly. Yet while the Act lists examples of emotions and attitudes, it does not clarify what constitutes “intentions”. This ambiguity is particularly relevant for remote-assessment tools that purport to detect cheating or rule evasion through eye-tracking, micro-expressions, or other biometric cues. When outputs derived from such data assign to a person a particular aim or disposition – for instance, “likely intent to glance off-screen” – a protective interpretation places them within the scope of Article 5(1)(f). The Commission’s own examples in educational assessment contexts reinforce the need for caution and weigh against treating such systems as innocuous “behavioural analytics”.
Finally, “educational institutions” should be read broadly, covering both public and private entities, usually operating under a licence or accreditation of a competent national authority. Academic commentary emphasizes that the notion extends across all stages of learning – from nurseries and schools to universities, vocational training centres, and lifelong learning programs. The decisive criterion is the existence of institutional authority over learners, reflected in the imbalance of power and subjection to the institution’s rules and requirements. At the same time, Article 5(1)(f) AI Act applies only where AI outputs on emotions or intentions can affect the learner’s legal or factual situation. Purely didactic or training uses – such as role-play exercises for actors or teachers – fall outside its scope, provided the inferences do not influence institutional decision-making. The boundary, however, may be blurred whenever outputs, though generated for practice, are likely to be repurposed in ways that shape a learner’s standing within the institution.
Exceptions and their burdens
Article 5(1)(f) contains two narrow gateways: medical and safety reasons. The medical exception refers to activities centred on healthcare functions for the person concerned, such as prevention, diagnosis, or treatment. A general well-being dashboard for a class does not meet that standard. The safety exception requires a concrete and real risk to life or health. General institutional security interests or the protection of property are insufficient. Even when a deployer invokes an exception, the operative standards are necessity and proportionality. This means demonstrating why the practice is necessary to mitigate the specific risk, why it is limited in time and scope, and why no less intrusive alternative would achieve the same goal. These are questions of proof, not assertion.
The accuracy challenge under the GDPR
Even if the use of emotion-recognition systems in educational institutions were permitted under the medical or safety exceptions of Article 5(1)(f) AI Act, the inferences they generate may themselves constitute personal data and thus fall within the scope of the GDPR. The AI Act does not replace data protection law. Article 2(7) AI Act makes clear that whenever personal data is processed, the GDPR continues to apply. In other words, obligations arising from both regulations must be satisfied. Two principles are decisive here. Fairness in Article 5(1)(a) GDPR requires that processing respects the reasonable expectations of individuals and avoids exploiting power imbalances. Accuracy in Article 5(1)(d) GDPR requires that personal data reflects reality sufficiently for the purpose of processing. Emotion-recognition systems must satisfy both.
Accuracy does not mean perfection, but it does mean that data must be good enough for the controller’s purpose at hand. Emotion recognition struggles with this standard. Systems can capture outward expressions such as a smile or a frown. Converting those into claims about inner emotional states – boredom, frustration, contempt – is difficult to verify against any “ground truth.” The data accuracy principle protects against the harms of misrepresentation. An incorrect grade or unfair disciplinary action, resulting from a misread facial expression, can be as damaging as an error in factual records. Because inner feelings are inherently unverifiable, the margin of error when using emotion inference for safety or medical reasons is simply too wide, making compliance with Article 5(1)(d) GDPR very hard to defend.
Conclusion
Through Article 5(1)(f) AI Act, EU law establishes a clear boundary: the inner states of students must not be transformed into inputs for institutional decision-making. The rule is grounded in the protection of dignity and the limitation of institutional power, with only narrow exceptions for medical and safety purposes. Despite the Commission’s extensive guidance, stakeholder feedback indicates that questions remain regarding the prohibition’s scope, application, and exceptions.
This post is based on a more detailed academic analysis of Article 5(1)(f) AI Act in educational institutions, which I develop in a book chapter published in Polish: