Prohibited AI Practices (Article 5 EU AI Act)

8 AI practices prohibited EU-wide since 02.02.2025

Practitioner's note: This article is practice-oriented compliance documentation, not legal advice. We are compliance specialists, not a law firm. For legally binding advice, please consult a licensed lawyer.

TL;DR

Article 5 EU AI Act lists 8 prohibited AI practices that have been unlawful EU-wide since 02.02.2025: manipulation through subliminal or purposefully manipulative techniques, exploitation of vulnerabilities, social scoring, predictive policing based solely on profiling, untargeted scraping of facial images to build facial recognition databases, emotion recognition in the workplace and in education, biometric categorisation based on sensitive characteristics, and real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions). Fine: up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
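The fine cap works on a "whichever is higher" basis (Article 99(3) AI Act): EUR 35 million or 7% of total worldwide annual turnover of the preceding financial year. A minimal sketch of that calculation (the function name `max_fine_eur` is our own illustrative choice, not a term from the Act):

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper limit of the administrative fine for an Article 5 violation.

    Article 99(3) AI Act: up to EUR 35,000,000 or, for undertakings,
    up to 7% of total worldwide annual turnover of the preceding
    financial year, whichever is higher.
    """
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A company with EUR 1 billion turnover: the 7% figure dominates.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# A company with EUR 100 million turnover: the fixed EUR 35 million cap applies.
print(max_fine_eur(100_000_000))    # 35000000
```

For undertakings below EUR 500 million turnover, the fixed EUR 35 million figure is therefore always the relevant ceiling.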

What is a Prohibited AI Practice (Article 5 EU AI Act)?

The 8 prohibitions in detail (Article 5(1)(a)-(h)):

1. Subliminal or purposefully manipulative techniques that materially distort behaviour and cause significant harm
2. Exploitation of vulnerabilities due to age, disability or social or economic situation
3. Social scoring leading to detrimental or unfavourable treatment
4. Predicting the risk of criminal offences based solely on profiling or personality traits
5. Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
6. Emotion recognition in the workplace and in education institutions
7. Biometric categorisation to infer sensitive characteristics such as race, political opinions, religion or sexual orientation
8. Real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions

Practical example

The Commission guidelines C(2025) 884 final (04.02.2025) specify the prohibitions with concrete examples. Clearly prohibited: an HR tool that infers employees' emotional state under stress; an insurer that uses biometric data to categorise customers by 'origin'; a school tool that manipulates pupils' behaviour.

Frequently asked questions

Do the prohibitions also apply outside the EU?
Yes, whenever an AI system is placed on the EU market or its output is used in the EU (Art. 2 AI Act). This mirrors the market-location principle familiar from the GDPR (Art. 3(2) GDPR).
What about ChatGPT-based employee sentiment analysis?
If it infers employees' emotions in the workplace, it is prohibited. Purely text-based sentiment analysis of work outputs remains permissible, subject to GDPR compliance.
Law enforcement exceptions?
To be interpreted narrowly: only for an exhaustively listed set of serious offences and with prior authorisation by a judicial or independent administrative authority. In practice, police authorities must document and report each use.

See also