High-Risk AI (Annex III)
AI systems under Annex III of the EU AI Act — obligations apply from 2 August 2026 (the Digital Omnibus proposal of 19 November 2025 would postpone this to 2 December 2027; not yet adopted)
TL;DR
High-risk AI under Annex III of the AI Act covers AI systems in eight domains: biometric identification, critical infrastructure, education, employment/HR, essential private and public services, law enforcement, migration, and the administration of justice. Obligations become legally binding on 2 August 2026. The Digital Omnibus proposal of 19 November 2025 would postpone this to 2 December 2027, but the trilogue is ongoing and the proposal has not yet been adopted.
What is high-risk AI (Annex III)?
The eight high-risk domains under Annex III:
- Biometric identification and categorisation
- Critical infrastructure (transport, water, gas, electricity)
- Education and vocational training
- Employment, HR management (recruiting, performance, promotion)
- Essential private and public services (credit scoring, insurance scoring, emergency triage)
- Law enforcement
- Migration, asylum, border control
- Administration of justice and democratic processes
Providers and deployers have different obligations:
- Providers: full obligations — risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness (Articles 9-15), a quality management system (QMS), and conformity assessment
- Deployers: use in accordance with the intended purpose, review of input-data relevance, retention of automatically generated logs, and a fundamental rights impact assessment (FRIA) where applicable (Article 27)
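The deployer logging duty above can be sketched as a structured log record. This is an illustrative assumption only: the AI Act requires deployers to keep the logs a high-risk system generates automatically (Article 26(6)), but it does not prescribe a schema, and all field and function names below are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical schema for retaining decision logs of a high-risk AI
# system. The AI Act mandates log retention, not this structure.
@dataclass
class DecisionLogEntry:
    system_name: str       # identifier of the high-risk AI system
    timestamp: str         # ISO 8601, UTC
    input_reference: str   # pointer to the input data that was used
    output_summary: str    # what the system recommended
    human_reviewer: str    # person exercising human oversight

def make_entry(system_name: str, input_reference: str,
               output_summary: str, human_reviewer: str) -> DecisionLogEntry:
    """Create a timestamped log entry for one automated decision."""
    return DecisionLogEntry(
        system_name=system_name,
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_reference=input_reference,
        output_summary=output_summary,
        human_reviewer=human_reviewer,
    )

entry = make_entry("cv-screener-v2", "applicant-4711", "shortlist", "j.doe")
print(entry.system_name, entry.output_summary)
```

In practice such records would be written to append-only storage and kept for at least the retention period the Act foresees for deployer logs; the point here is only that each automated decision stays traceable to its inputs and to a human overseer.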
Practical example
Examples of high-risk AI under Annex III No. 4(a) (recruiting):
- HR screening tool that analyses applicant CVs
- Algorithm for predicting employee performance
- AI system for promotion recommendations
The reversal of the burden of proof under Section 22 of the German General Equal Treatment Act (AGG) applies where there are indications of algorithmic discrimination, irrespective of the AI Act deadline. Damages under Section 15 AGG typically amount to one to three gross monthly salaries per claimant.