FRIA Template for Annex III, 4 (Employment AI)

Practitioner note: This is not legal advice. For specific situations, consult a qualified attorney or compliance officer.

TL;DR

  • FRIA = Fundamental Rights Impact Assessment under Art. 27 EU AI Act
  • Mandatory from Aug 2, 2026 for Deployers of Annex III, 4 (employment) high-risk AI that fall under Art. 27(1), i.e. public bodies and private entities providing public services (Digital Omnibus proposal of Nov 19, 2025 would postpone this to Dec 2, 2027; trilogue ongoing, NOT adopted)
  • 7 sections: deployment description, frequency, affected categories, fundamental-rights risks, oversight, mitigation, re-evaluation
  • Re-evaluation required quarterly and on any material model or use-case change
  • Common risks: CFR Art. 21 (non-discrimination), Art. 8 (data protection), Art. 31 (fair and just working conditions)

1. Description of the deployment

Specify the use case (recruiting, performance evaluation, promotion ranking), Provider, model version, integration with existing HR systems, and deployment frequency. Reference the AI inventory entry.

2. Time period and frequency

Pilot vs. production status, estimated number of use cases per month, expected lifecycle. Include planned model updates and re-training cadence.

3. Affected categories of natural persons

Candidates, employees, supervisors. Numbers per year. Identify vulnerable groups: severely disabled persons, pregnant employees, older workers, employees on parental leave. Document estimated counts per group.

4. Specific fundamental-rights risks

Charter of Fundamental Rights references: Art. 21 (non-discrimination), Art. 8 (data protection), Art. 31 (fair and just working conditions), Art. 30 (protection in the event of unjustified dismissal). For each risk, assess probability of occurrence and severity of harm. Use a 3x3 or 5x5 risk matrix.
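The probability-times-severity scoring can be sketched as a small helper. This is a minimal 5x5 example; the band thresholds and follow-up actions are illustrative conventions, not anything prescribed by Art. 27:

```python
# Minimal 5x5 risk-matrix sketch (illustrative; thresholds are assumptions).
# Probability and severity are ordinal scores from 1 (low) to 5 (high).

def risk_score(probability: int, severity: int) -> int:
    """Return the matrix cell value: probability x severity (1..25)."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("scores must be in 1..5")
    return probability * severity

def risk_band(score: int) -> str:
    """Map a 5x5 score to a traffic-light band (cut-offs are assumptions)."""
    if score >= 15:
        return "high"    # e.g. mitigate before go-live
    if score >= 8:
        return "medium"  # e.g. mitigate within a documented deadline
    return "low"         # e.g. accept and monitor

# Example: a CFR Art. 21 non-discrimination risk rated probability 3, severity 4
score = risk_score(3, 4)  # 12
band = risk_band(score)   # "medium"
```

Whatever scale is chosen, document the scoring rubric so that re-evaluations in section 7 are comparable across quarters.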

5. Human oversight measures

Who reviews each output? Reviewer training and authority to override? Escalation mechanism for adverse decisions? Document reviewer-to-AI ratios and review-time budget per case.
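The reviewer-to-AI ratio and review-time budget imply a simple capacity calculation. The sketch below is a planning aid with hypothetical numbers, not a mandated formula:

```python
# Reviewer-capacity sketch: how many trained reviewers does the planned
# volume require? All inputs below are hypothetical planning figures.

def reviewers_needed(cases_per_month: int,
                     minutes_per_case: int,
                     reviewer_minutes_per_month: int) -> int:
    """Ceiling of total review minutes over one reviewer's monthly budget."""
    total_minutes = cases_per_month * minutes_per_case
    return -(-total_minutes // reviewer_minutes_per_month)  # integer ceiling

# Example: 400 cases/month at 15 min each, reviewer budget 40 h/month (2400 min)
n = reviewers_needed(400, 15, 2400)  # 6000 / 2400 -> 3 reviewers
```

Recording this calculation in the FRIA makes the oversight claim auditable: if volume grows past the documented capacity, that is itself a re-evaluation trigger.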

6. Risk mitigation measures

Bias tests with frequency (e.g., quarterly per protected characteristic), anonymized first-stage screening, candidate complaint mechanism, transparency notices, retraining triggers. Tie each measure to a specific risk identified in section 4.
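One widely used metric for the quarterly bias test is the selection-rate ratio between groups (the US "four-fifths rule"). The sketch below is illustrative: the 0.8 threshold is a convention from US employment practice, not an EU AI Act or AGG requirement, and the group data is hypothetical:

```python
# Selection-rate ratio ("four-fifths rule") sketch for a quarterly bias test.
# The 0.8 threshold is a US convention, used here only as an example trigger.

def selection_rate(selected: int, applicants: int) -> float:
    if applicants == 0:
        raise ValueError("no applicants in this group")
    return selected / applicants

def impact_ratio(group: tuple[int, int], reference: tuple[int, int]) -> float:
    """Ratio of a group's selection rate to the reference group's rate."""
    return selection_rate(*group) / selection_rate(*reference)

# Hypothetical quarterly screening data: (selected, applicants)
reference = (60, 200)  # reference group: 30% selection rate
group_a = (20, 100)    # comparison group: 20% selection rate

ratio = impact_ratio(group_a, reference)  # 0.2 / 0.3, about 0.667
flagged = ratio < 0.8                     # below the 4/5ths convention
```

A flagged ratio does not prove discrimination, but it should feed back into the section 4 risk matrix and, if persistent, trigger the retraining and re-evaluation steps in section 7.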

7. Re-evaluation

Quarterly re-evaluation. Trigger a new FRIA on any material change to the model, intended purpose, or scope of affected persons. Archive prior versions for at least five years.

Summary

Deployers of Annex III, 4 employment AI that fall under Art. 27(1), i.e. public bodies and private entities providing public services, must produce a FRIA before going live; for other employers it remains documented best practice. The 7-section structure above maps directly onto the Art. 27(1) requirements. Combined with bias testing (strongly advisable given the burden-of-proof shift in Section 22 AGG and the Deployer obligations in Article 26 EU AI Act), anonymized screening, and a mandatory human final decision, the FRIA forms an audit-defensible package for HR AI by Aug 2, 2026 (Digital Omnibus proposal of Nov 19, 2025 would postpone this to Dec 2, 2027; trilogue ongoing, NOT adopted).


Frequently Asked Questions

Mandatory from when?
2 August 2026 for high-risk employment AI whose Deployers fall under Art. 27(1), i.e. public bodies and private entities providing public services (Digital Omnibus proposal of 19 November 2025: postponement to 2 December 2027; trilogue ongoing, not yet adopted).
Who performs it?
The Deployer (i.e. you as the employer). External legal or compliance support is recommended.
Retention?
As long as the AI is in use + 5 years thereafter.

Sources