High-Risk AI under Annex III: Classification Workbook
TL;DR
- 8 high-risk areas under Annex III of the EU AI Act
- HR/recruiting (No. 4) is the most common SME use case — the Section 22 AGG reversal of the burden of proof applies NOW, regardless of the AI Act deadline
- Credit scoring + insurance scoring (No. 5) — FRIA obligation from 02 August 2026 unchanged
- Annex III obligations legally binding from 02 August 2026 (Digital Omnibus proposal of 19 November 2025: postponement to 02 December 2027 — trilogue ongoing, not yet adopted)
- Article 6(3) exceptions to be interpreted narrowly — when in doubt, classify as high-risk
1. 8 high-risk areas under Annex III
Short answer: Annex III of the EU AI Act lists eight high-risk areas: (1) biometric identification and categorisation, (2) critical infrastructure, (3) education and vocational training, (4) employment and personnel management, (5) essential private and public services (e.g. credit scoring), (6) law enforcement, (7) migration/asylum/border control and (8) administration of justice and democratic processes.
| No. | Area | Typical examples |
|---|---|---|
| 1 | Biometric identification + categorisation | Facial recognition, voice recognition, biometric classification |
| 2 | Critical infrastructure | Traffic management, water/electricity/gas supply |
| 3 | Education + vocational training | Assessment of pupils/students, plagiarism detectors with grading effect |
| 4 | Employment, personnel management | HR screening, performance management, promotion recommendations |
| 5 | Essential private + public services | Credit scoring, insurance scoring, emergency triage |
| 6 | Law enforcement | Offender risk assessment, predictive policing |
| 7 | Migration, asylum, border control | Visa-assessment algorithms, asylum routing |
| 8 | Administration of justice + democratic processes | Case-law research algorithms, election influencing |
2. Decision tree for classification
Short answer: Classification as high-risk AI follows six steps: describe the use case, check whether it falls within an Annex III area, narrowly review the Article 6(3) exception where applicable, confirm the high-risk classification if no exception applies, clarify your role as provider (Article 3(3)) or deployer (Article 3(4)), and document the respective obligations: providers with full Annex IV documentation and conformity assessment, deployers with input-data relevance review, logs and, where applicable, a FRIA.
- Describe the use case: What exactly does the AI system do? Which data are input, which are output?
- Check Annex III: Does the use case fall under one of the 8 areas?
- If yes → review the Article 6(3) exception: does the system exclusively perform a 'specific, narrowly defined preparatory task'?
- If no exception applies → high-risk confirmed
- Clarify role: are you a provider (Article 3(3)) or deployer (Article 3(4))?
- Document obligations: provider (full Annex IV documentation, conformity assessment) vs. deployer (input-data relevance, logs, possibly FRIA)
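The decision tree above can be sketched as a small classification helper. A minimal sketch: the field names and return strings below are illustrative assumptions for internal documentation purposes, not terms from the Act.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class UseCase:
    description: str
    annex_iii_area: Optional[int]          # 1-8, or None if no Annex III area applies
    exclusively_preparatory: bool          # Article 6(3): only a narrow preparatory/supporting task
    materially_influences_decisions: bool  # recommendations/assessments shaping human decisions


def classify(uc: UseCase) -> str:
    """Walk the decision tree for one documented use case."""
    if uc.annex_iii_area is None:
        return "not high-risk (no Annex III area)"
    # Article 6(3) is interpreted narrowly: material influence on human
    # decisions defeats the exception, so that flag is checked first.
    if uc.exclusively_preparatory and not uc.materially_influences_decisions:
        return "Article 6(3) exception candidate (document the reasoning)"
    return f"high-risk (Annex III No. {uc.annex_iii_area})"
```

For example, a CV match scoring tool (area 4, materially influencing hiring decisions) classifies as high-risk, while an appointment-booking bot for applicants lands in the Article 6(3) branch.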
3. Article 6(3) exceptions
Article 6(3) must be interpreted narrowly. The high-risk classification does not apply if the system EXCLUSIVELY:
- performs a narrowly defined preparatory task (e.g. sorting)
- or fulfils a limited task within the use case
- or performs a specific activity that only supports (does not replace) human action
IMPORTANT: If the system generates recommendations or assessments that may 'materially influence' human decisions, it remains high-risk despite Article 6(3).
4. Special case HR/recruiting (Annex III No. 4)
Short answer: In the HR area, evaluative AI systems (CV match scoring, performance scoring, workforce planning with promotion/termination recommendations) are high-risk AI under Annex III No. 4. Pure information systems such as HR FAQ chatbots, newsletter personalisation or specific appointment-booking bots do not fall under this (potentially Article 6(3) exception). In addition, the Section 22 AGG reversal of the burden of proof already applies now where there are indications of discrimination.
| System | High-risk? | Reasoning |
|---|---|---|
| CV match scoring tool | Yes | Assessment of applicants |
| HR chatbot for FAQ | No | Pure information |
| Performance scoring system | Yes | Employee assessment |
| Workforce planning algorithm | Yes | Promotion/termination recommendations |
| HR newsletter personalisation | No | Marketing context |
| Appointment-booking bot for applicants | No (Article 6(3)) | Specific, narrowly defined task |
5. Special case credit/insurance scoring
Annex III No. 5 lit. b covers creditworthiness assessment, lit. c life and health insurance scoring. The FRIA obligation (Article 27) applies from 02 August 2026 and is NOT affected by the Digital Omnibus proposal (19 November 2025). The full Annex III obligations likewise apply from 02 August 2026 (DO proposal: postponement to 02 December 2027, not yet adopted).
Practice as of 05/2026: banks are already actively building FRIA methodology; BaFin audits expected in 2026.
6. Provider vs. deployer obligations
Short answer: Providers (Article 3(3)) bear the main burden: risk management (Article 9), technical documentation under Annex IV (Article 11), conformity assessment (Article 43) and EU database entry (Article 49). Deployers (Article 3(4)) must review the relevance of input data (Article 26(4)), retain logs for at least 6 months (Article 26(6)), inform employees (Article 26(7)) and, in the case of public bodies or Annex III No. 5 lit. b/c, carry out a FRIA (Article 27).
| Obligation | Provider (Article 3(3)) | Deployer (Article 3(4)) |
|---|---|---|
| Risk management (Article 9) | Mandatory | — |
| Technical documentation Annex IV (Article 11) | Mandatory | — |
| Conformity assessment (Article 43) | Mandatory | — |
| EU database entry (Article 49) | Mandatory | Mandatory (public bodies) |
| Input-data relevance review (Article 26(4)) | — | Mandatory |
| Logs (Article 26(6)) | — | Mandatory (≥6 months) |
| FRIA (Article 27) | — | Mandatory (public bodies + No. 5b/c) |
| Employee information (Article 26(7)) | — | Mandatory |
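The obligations matrix above can also be kept in machine-readable form, e.g. to generate per-system checklists. This is a simplified sketch of the table, not a complete restatement of the Act; the role names and the FRIA trigger condition are assumptions made for illustration.

```python
# Obligations per role, mirroring the matrix above.
OBLIGATIONS = {
    "provider": [
        "Risk management (Article 9)",
        "Technical documentation under Annex IV (Article 11)",
        "Conformity assessment (Article 43)",
        "EU database entry (Article 49)",
    ],
    "deployer": [
        "Input-data relevance review (Article 26(4))",
        "Log retention >= 6 months (Article 26(6))",
        "Employee information (Article 26(7))",
    ],
}


def duties(role: str, public_body: bool = False, annex_iii_no: int = 0) -> list:
    """Return the obligations for one role, adding FRIA where triggered."""
    result = list(OBLIGATIONS[role])
    # FRIA (Article 27): deployers that are public bodies or that use
    # Annex III No. 5 lit. b/c systems (credit/insurance scoring).
    if role == "deployer" and (public_body or annex_iii_no == 5):
        result.append("FRIA (Article 27)")
    return result
```

A deployer running a credit-scoring system (`duties("deployer", annex_iii_no=5)`) thus picks up the FRIA obligation on top of the standard deployer duties.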
7. Classification checklist
Short answer: A complete high-risk classification comprises ten steps: inventory the AI system, describe the use case and output, identify the Annex III area (1-8), narrowly review Article 6(3), determine the role (provider/deployer), draw up an obligations matrix, carry out a FRIA where required, prepare technical documentation under Annex IV, configure log retention and inform employees.
- Document the AI system (inventory)
- Describe use case + output
- Identify Annex III area (1-8)
- Review Article 6(3) (interpret narrowly)
- Determine role (provider / deployer)
- Draw up obligations matrix
- Carry out FRIA (where required)
- Prepare technical documentation under Annex IV (where provider)
- Configure log retention (where deployer)
- Employee information (where deployer)
Sources
- Regulation (EU) 2024/1689 (AI Act) — Annex III (EUR-Lex DE) (as of: 02 May 2026)
- AI Act Annex III — 8 high-risk areas (Service Desk)
- AI Act Article 27 — FRIA (applies from 02 August 2026)
- European Commission — Digital Omnibus proposal (as of: 02 May 2026; trilogue ongoing)
- EU AI Office (Commission)