High-Risk AI under Annex III: Classification Workbook

Practitioner note: This article is practice-oriented compliance documentation, not legal advice. We are a compliance specialist, not a law firm. For legally binding advice, please consult a licensed attorney.

TL;DR

  • 8 high-risk areas under Annex III of the EU AI Act
  • HR/recruiting (No. 4) is the most common SME use case — the Section 22 AGG reversal of the burden of proof applies NOW, regardless of the AI Act deadline
  • Credit scoring + insurance scoring (No. 5) — FRIA obligation from 02 August 2026 unchanged
  • Annex III obligations legally binding from 02 August 2026 (Digital Omnibus proposal of 19 November 2025: postponement to 02 December 2027 — trilogue ongoing, not yet adopted)
  • Article 6(3) exceptions to be interpreted narrowly — when in doubt, classify as high-risk

1. 8 high-risk areas under Annex III

Short answer: Annex III of the EU AI Act lists eight high-risk areas: (1) biometric identification and categorisation, (2) critical infrastructure, (3) education and vocational training, (4) employment and personnel management, (5) essential private and public services (e.g. credit scoring), (6) law enforcement, (7) migration/asylum/border control and (8) administration of justice and democratic processes.

| No. | Area | Typical examples |
| --- | --- | --- |
| 1 | Biometric identification + categorisation | Facial recognition, voice recognition, biometric classification |
| 2 | Critical infrastructure | Traffic management, water/electricity/gas supply |
| 3 | Education + vocational training | Assessment of pupils/students, plagiarism detectors with grading effect |
| 4 | Employment, personnel management | HR screening, performance management, promotion recommendations |
| 5 | Essential private + public services | Credit scoring, insurance scoring, emergency triage |
| 6 | Law enforcement | Offender risk assessment, predictive policing |
| 7 | Migration, asylum, border control | Visa-assessment algorithms, asylum routing |
| 8 | Administration of justice + democratic processes | Case-law research algorithms, election influencing |

Those who do not want to build their own technical documentation under Annex IV will find a risk-classification decision tree, high-risk inventory and FRIA methodology in the EU AI Act Kit.

2. Decision tree for classification

Short answer: Classification as high-risk AI follows six steps: describe the use case, check the Annex III area, where applicable narrowly review the Article 6(3) exception, clarify your role as provider (Article 3(3)) or deployer (Article 3(4)) and document the respective obligations — providers with full Annex IV documentation and conformity assessment, deployers with input-data relevance review, logs and, where applicable, FRIA.

  1. Describe the use case: What exactly does the AI system do? Which data are input, which are output?
  2. Check Annex III: Does the use case fall under one of the 8 areas?
  3. If yes → review the Article 6(3) exception narrowly: does the system only perform a specific, narrowly defined preparatory task?
  4. If the exception does not apply → high-risk confirmed
  5. Clarify role: are you a provider (Article 3(3)) or deployer (Article 3(4))?
  6. Document obligations: provider (full Annex IV documentation, conformity assessment) vs. deployer (input-data relevance, logs, possibly FRIA)

3. Article 6(3) exceptions

Article 6(3) must be interpreted narrowly. High-risk classification does not apply if the system EXCLUSIVELY:

  • performs a narrow procedural task,
  • improves the result of a previously completed human activity,
  • detects decision-making patterns or deviations from prior patterns, without replacing or influencing the completed human assessment without proper human review, or
  • performs a preparatory task to an assessment relevant for an Annex III use case.

The exception never applies where the system performs profiling of natural persons (Article 6(3) subparagraph 2).

IMPORTANT: If the system generates recommendations or assessments that may 'materially influence' human decisions, it remains high-risk despite Article 6(3).

4. Special case HR/recruiting (Annex III No. 4)

Short answer: In the HR area, evaluative AI systems (CV match scoring, performance scoring, workforce planning with promotion/termination recommendations) are high-risk AI under Annex III No. 4. Pure information systems such as HR FAQ chatbots, newsletter personalisation or specific appointment-booking bots do not fall under this (potentially Article 6(3) exception). In addition, the Section 22 AGG reversal of the burden of proof already applies now where there are indications of discrimination.

| System | High-risk? | Reasoning |
| --- | --- | --- |
| CV match scoring tool | Yes | Assessment of applicants |
| HR chatbot for FAQ | No | Pure information |
| Performance scoring system | Yes | Employee assessment |
| Workforce planning algorithm | Yes | Promotion/termination recommendations |
| HR newsletter personalisation | No | Marketing context |
| Appointment-booking bot for applicants | No (Article 6(3)) | Specific, narrowly defined task |

5. Special case credit/insurance scoring

Annex III No. 5 lit. b covers creditworthiness assessment; lit. c covers life and health insurance scoring. The FRIA obligation (Article 27) applies from 02 August 2026 and is NOT affected by the Digital Omnibus proposal (19 November 2025). The full Annex III obligations likewise apply from 02 August 2026; the proposal's postponement to 02 December 2027 has not yet been adopted.

Practice as of 05/2026: banks are already actively building FRIA methodology; BaFin audits expected in 2026.

6. Provider vs. deployer obligations

Short answer: Providers (Article 3(3)) bear the main burden: risk management (Article 9), technical documentation under Annex IV (Article 11), conformity assessment (Article 43) and EU database entry (Article 49). Deployers (Article 3(4)) must review the relevance of input data (Article 26(4)), retain logs for at least 6 months (Article 26(6)), inform employees (Article 26(7)) and, in the case of public bodies or Annex III No. 5 lit. b/c, carry out a FRIA (Article 27).

| Obligation | Provider (Article 3(3)) | Deployer (Article 3(4)) |
| --- | --- | --- |
| Risk management (Article 9) | Mandatory | – |
| Technical documentation Annex IV (Article 11) | Mandatory | – |
| Conformity assessment (Article 43) | Mandatory | – |
| EU database entry (Article 49) | Mandatory (Article 49(1)) | Mandatory for public bodies (Article 49(3)) |
| Input-data relevance review (Article 26(4)) | – | Mandatory |
| Logs (Article 26(6)) | – | Mandatory (≥ 6 months) |
| FRIA (Article 27) | – | Mandatory (public bodies + No. 5 lit. b/c) |
| Employee information (Article 26(7)) | – | Mandatory |
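The deployer side of the obligations matrix can also be expressed as a small lookup, e.g. for generating a per-system duties list in an AI inventory. The conditions follow the table above (public body, Annex III No. 5 lit. b/c triggering the FRIA); the function name and parameters are illustrative assumptions, not terms from the Act.

```python
# Provider duties are unconditional for high-risk systems (table above).
PROVIDER_OBLIGATIONS = [
    "Risk management (Article 9)",
    "Technical documentation Annex IV (Article 11)",
    "Conformity assessment (Article 43)",
    "EU database entry (Article 49)",
]

def deployer_obligations(public_body: bool, annex_area: int, item: str = "") -> list[str]:
    """Deployer duties per Articles 26-27; FRIA trigger per Article 27."""
    duties = [
        "Input-data relevance review (Article 26(4))",
        "Logs, retained at least 6 months (Article 26(6))",
        "Employee information (Article 26(7))",
    ]
    # FRIA: public bodies, plus credit scoring (No. 5 lit. b) and
    # life/health insurance scoring (No. 5 lit. c).
    if public_body or (annex_area == 5 and item in ("b", "c")):
        duties.append("FRIA (Article 27)")
    # Deployers that are public bodies must also register (Article 49(3)).
    if public_body:
        duties.append("EU database entry (Article 49)")
    return duties

# Example: a bank deploying a credit-scoring system (Annex III No. 5 lit. b)
print(deployer_obligations(public_body=False, annex_area=5, item="b"))
# includes "FRIA (Article 27)" alongside the Article 26 duties
```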

7. Classification checklist

Short answer: A complete high-risk classification comprises ten steps: inventory the AI system, describe the use case and output, identify the Annex III area (1-8), narrowly review Article 6(3), determine the role (provider/deployer), draw up an obligations matrix, carry out a FRIA where required, prepare technical documentation under Annex IV, configure log retention and inform employees.

  1. Document the AI system (inventory)
  2. Describe use case + output
  3. Identify Annex III area (1-8)
  4. Review Article 6(3) (interpret narrowly)
  5. Determine role (provider / deployer)
  6. Draw up obligations matrix
  7. Carry out FRIA (where required)
  8. Prepare technical documentation under Annex IV (where provider)
  9. Configure log retention (where deployer)
  10. Employee information (where deployer)

Sources

As of: 02 May 2026

  1. Regulation (EU) 2024/1689 (AI Act) — Annex III (EUR-Lex DE) (as of: 02 May 2026)
  2. AI Act Annex III — 8 high-risk areas (Service Desk)
  3. AI Act Article 27 — FRIA (applies from 02 August 2026)
  4. European Commission — Digital Omnibus proposal (as of: 02 May 2026; trilogue ongoing)
  5. EU AI Office (Commission)
