AGG/BAG-Compliant AI Recruiting: 8 Safeguards
TL;DR
- Section 22 AGG reverses the burden of proof: statistically significant disparities in algorithmic selection outputs suffice as indicia, and the employer then bears the full burden of proving non-discrimination
- 120,000 EUR damages awarded to a candidate filtered out by an age-biased AI tool (3 gross monthly salaries)
- 8 safeguards for AGG/BAG-compliant AI recruiting — from bias testing to FRIA
- Human final decision is mandatory (Art. 22(3) GDPR + Art. 14 EU AI Act human oversight)
- FRIA required from Aug 2, 2026 for Annex III, 4 (employment AI) (Digital Omnibus proposal of Nov 19, 2025: postponement to Dec 2, 2027 — trilogue ongoing, NOT adopted)
1. Bias test per protected characteristic
Statistical evaluation across age, gender, and ethnicity. A disparity above the 5% threshold (the BAG indication standard) triggers the Section 22 AGG burden-of-proof reversal. Repeat the test quarterly with a sample of at least 500 candidates per protected characteristic. Tools: AIF360 (IBM), Aequitas, Fairlearn (Microsoft).
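A minimal sketch of such a disparity check, in plain Python. The field names (`age_group`, `selected`) and the sample data are illustrative, and the 5% cutoff mirrors the threshold named above:

```python
# Sketch: selection-rate disparity per protected characteristic.
# Field names and sample data are illustrative; the 5% threshold follows the text.

def selection_rates(candidates, attribute):
    """Selection rate per group of the given protected attribute."""
    totals, selected = {}, {}
    for c in candidates:
        group = c[attribute]
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if c["selected"] else 0)
    return {g: selected[g] / totals[g] for g in totals}

def parity_gap(candidates, attribute):
    """Statistical parity difference: max minus min group selection rate."""
    rates = selection_rates(candidates, attribute)
    return max(rates.values()) - min(rates.values())

THRESHOLD = 0.05  # 5% indication threshold from the text

candidates = [
    {"age_group": "<40", "selected": True},
    {"age_group": "<40", "selected": True},
    {"age_group": "<40", "selected": False},
    {"age_group": "40+", "selected": False},
    {"age_group": "40+", "selected": True},
    {"age_group": "40+", "selected": False},
]

gap = parity_gap(candidates, "age_group")
print(f"parity gap: {gap:.2f}, flagged: {gap > THRESHOLD}")
```

In production, libraries such as Fairlearn or AIF360 provide audited implementations of this and related metrics; the point here is only that the check itself is a simple rate comparison per group.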
2. Anonymized first selection
Hide name, photo, date of birth, address, and gender markers during initial screening. Reveal identity only at the final-interview stage. A 2024 University of Bonn study found this reduces Section 22 indication risk by approximately 70%.
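The redaction step can be sketched as a simple field filter over an applicant record. The field names are hypothetical and would need to match your ATS export:

```python
# Sketch: strip identity markers before first-round screening.
# Field names are hypothetical; adapt to your ATS data model.

IDENTITY_FIELDS = {"name", "photo", "date_of_birth", "address", "gender"}

def anonymize(application: dict) -> dict:
    """Return a copy with identity markers removed for initial screening."""
    return {k: v for k, v in application.items() if k not in IDENTITY_FIELDS}

application = {
    "name": "Jane Doe",
    "date_of_birth": "1980-05-01",
    "gender": "f",
    "skills": ["Python", "SQL"],
    "years_experience": 12,
}

screened = anonymize(application)
print(screened)  # only skills and years_experience remain
```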
3. Human final decision
The AI delivers a recommendation only — a human decides. Required under Art. 22(3) GDPR and Art. 14 EU AI Act (human oversight). The AI score alone may not justify rejection.
4. Transparency for candidates
Privacy notice: "We use AI-supported pre-selection." Right to object: "You may object to a fully automated assessment." Plain language, prominent placement.
5. Reasoning obligation
For each candidate: a written justification of the selection or rejection. The AI score alone is insufficient under both AGG and GDPR Art. 22.
6. FRIA from Aug 2, 2026 (Digital Omnibus proposal of Nov 19, 2025: postponement to Dec 2, 2027 — trilogue ongoing, NOT adopted)
Art. 27 EU AI Act requires a Fundamental Rights Impact Assessment (FRIA) before high-risk recruiting AI (Annex III, 4) is put into use. Use the 7-step template covering deployment description, frequency, affected categories, fundamental-rights risks, oversight measures, mitigation, and re-evaluation.
7. Complaint mechanism
Clearly communicated complaint channel for candidates. Response within one month (GDPR Art. 12). Document outcomes for audit defense.
8. Re-train and update cycle
On adverse bias findings: retrain or replace the AI model. Document update frequency per provider. Keep test methodology, sample, results, and corrective actions for at least five years.
Summary
Section 22 AGG and Article 26 EU AI Act together turn algorithmic hiring into a real liability risk. The 8 safeguards above — bias testing, anonymized screening, human final decision, transparency, reasoning, FRIA, complaint mechanism, and update cycle — form a defensible package against both AGG damages and EU AI Act fines (up to 15M EUR / 3% global revenue under Art. 99).
Frequently Asked Questions
What does Section 22 AGG concretely mean for my recruiting tool?
Section 22 AGG (German General Equal Treatment Act) only requires applicants to present indications of discrimination. Statistically significant anomalies in algorithmic selection outputs (e.g. systematically higher rejection rates for a protected group) qualify as such an indication. Consequence: the employer must furnish full proof of non-discrimination. AGG damages are typically 1–3 gross monthly salaries per claimant under Section 15 AGG; in cases of systematic AI discrimination, the number of claimants can multiply. In practice, every AI recruiting tool must therefore be bias-tested before deployment, with a sample of at least 500 applicants per AGG characteristic.
How do I conduct a bias test correctly?
Six-step methodology: 1) Define the sample — at least 500 applicants per protected characteristic (age, gender, ethnicity). 2) Document the demographic distribution. 3) Capture the AI outputs (score / pass-fail / ranking). 4) Measure statistical disparity using Statistical Parity Difference or Equalized Odds. 5) Apply a 5% threshold — if exceeded: re-train or reject the tool. 6) Documentation: methodology, sample, results, measures — retain for 5 years. Tools: open-source AIF360 (IBM), Aequitas, Fairlearn (Microsoft). Effort: 5-10 person-days initially, then 2-3 person-days per quarter for re-testing.
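Step 4 names Equalized Odds as an alternative to Statistical Parity. A minimal sketch of that metric, assuming a ground-truth "qualified" label exists per candidate (e.g. from structured post-hoc review); all field names and data are illustrative:

```python
# Sketch: equalized-odds gap between two groups (TPR and FPR differences).
# Assumes a ground-truth "qualified" label per candidate; data is illustrative.

def group_rates(records, group_value):
    """True-positive and false-positive rates of the AI selection for one group."""
    tp = fn = fp = tn = 0
    for r in records:
        if r["group"] != group_value:
            continue
        if r["qualified"]:
            tp += r["selected"]
            fn += not r["selected"]
        else:
            fp += r["selected"]
            tn += not r["selected"]
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

def equalized_odds_gap(records, group_a, group_b):
    """Larger of the TPR gap and the FPR gap between the two groups."""
    tpr_a, fpr_a = group_rates(records, group_a)
    tpr_b, fpr_b = group_rates(records, group_b)
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))

records = [
    {"group": "m", "qualified": True,  "selected": True},
    {"group": "m", "qualified": True,  "selected": True},
    {"group": "m", "qualified": False, "selected": False},
    {"group": "f", "qualified": True,  "selected": True},
    {"group": "f", "qualified": True,  "selected": False},
    {"group": "f", "qualified": False, "selected": False},
]

gap = equalized_odds_gap(records, "m", "f")
print(f"equalized-odds gap: {gap:.2f}")  # compare against the 5% threshold
```

Fairlearn and AIF360 ship production implementations of both metrics; this sketch only shows what the step-4 measurement actually computes.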
Which HR AI tool is 'BAG-compliant' out of the box?
As of 05/2026: none is fully compliant out of the box. Even certified providers (Talent Scout, HrFlow.ai, Eightfold) provide only the capability for bias tests, not a finished test for YOUR data. As the deployer, you must: 1) review provider bias reports, 2) conduct your own bias test on your data, 3) document a FRIA from 2 August 2026 onwards (Digital Omnibus proposal of 19 November 2025: postponement to 2 December 2027 — not yet adopted). Tools with good bias documentation: Personio Recruiting (Germany, transparent algorithms), HrFlow.ai (France, EU AI Act self-certification). Tools with risk: older HireVue versions (emotion detection, prohibited under Article 5), Pymetrics games (documented generational bias).
How does anonymized initial screening affect discrimination risk?
Strongly positive: anonymized initial screening reduces the Section 22 indication risk by approximately 70% (University of Bonn study, 2024). Methodology: before the selection phase, remove photo, name, date of birth, address, gender markers, and identifying language markers in cover letters from applications. What remains: qualifications, experience, skills. ATS tools with anonymization: Personio, HRWorks, and Workday include the feature in their standard tier. Anonymization ends at the interview stage, which makes a structured interview, a score sheet, and the four-eyes principle all the more important. Documentation: anchor anonymization as a protective measure in the DPIA.
Sources
- Regulation (EU) 2024/1689 — EU AI Act (Annex III(4), Art. 14, 26, 27, 99) (As of: 2026-05-02)
- General Equal Treatment Act (AGG) — Section 22 (burden of proof) (As of: 2026-05-02)
- Regulation (EU) 2016/679 (GDPR) — Art. 22 automated decisions (As of: 2026-05-02)
- Commission Digital Omnibus (proposal, 19 Nov 2025) (As of: 2026-05-02)