Algorithmic Discrimination in Recruiting — Legal Status 2026

Practice note: This article is practice-oriented compliance documentation, not legal advice. We are compliance specialists, not a law firm. For legally binding advice, consult a qualified lawyer admitted in the relevant jurisdiction.

TL;DR

  • Section 22 AGG burden-of-proof reversal: applicants only need to prove indicia of discrimination — the employer then bears the full burden of proving non-discrimination. Algorithmic outputs qualify as indicia.
  • Annex III point 4 EU AI Act (Regulation 2024/1689): AI systems for recruitment, selection and evaluation are classified as high-risk AI from 2 August 2026 (Commission Digital Omnibus proposal of 19 November 2025: postponement to 2 December 2027 — trilogue ongoing, not yet adopted).
  • Article 26 EU AI Act deployer obligations: logging (Art. 26(6) in conjunction with Art. 12), human oversight (Art. 26(2) with Art. 14), informing affected workers (Art. 26(11)); in addition, a Fundamental Rights Impact Assessment for deployers within the scope of Art. 27 (public bodies and private providers of public services).
  • ECJ C-203/22 (27 February 2025, Dun & Bradstreet): data subjects have a right to information on the logic of automated decisions — a blanket trade-secrets refusal is not sufficient.
  • AGG damages typically 1–3 gross monthly salaries per claimant under Section 15 AGG; amount depends on the individual case. Where a single algorithm systematically discriminates, the number of claimants can multiply.

1. What is algorithmic discrimination?

Algorithmic discrimination in recruiting arises when AI or rule-based selection systems systematically disadvantage applicants — directly or indirectly — on a ground protected by Section 1 AGG (age, gender, ethnic origin, religion/belief, disability, sexual orientation). Typical sources include biased or unrepresentative training data, proxy variables that correlate with protected attributes (such as postcode or career gaps), and feedback loops that reproduce past selection patterns.

The employer is liable as deployer under the EU AI Act regardless of whether the model was trained in-house or sourced from a provider.

2. Section 22 AGG burden-of-proof reversal applied to algorithms

Section 22 AGG: "If, in a dispute, one party proves indicia which suggest a disadvantage on grounds named in Section 1, the other party bears the burden of proving that no breach of the anti-discrimination provisions occurred."

For algorithmic decisions this means:

  1. Low indicia threshold: statistical irregularities in selection rates, documented bias findings, or a purely algorithmic rejection without human review can already constitute indicia.
  2. High counter-proof threshold: the employer must positively show that the decision was non-discriminatory. Where the algorithm's logic cannot be audited, this proof cannot be furnished.

The compliance imperative: anyone deploying AI in recruiting must keep it auditable — logs, bias tests, per-applicant selection documentation, documented human oversight.
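
A minimal audit sketch of what "auditable" can mean in practice, assuming past decisions sit in a table with one row per applicant: it computes selection rates per group and flags groups whose rate falls below four fifths of the best-placed group (the US "four-fifths rule", used here purely as a screening heuristic, not as an AGG legal standard). The column names and the threshold are illustrative assumptions.

```python
# Minimal bias-audit sketch: selection rates by protected attribute.
# Column names ("gender", "selected") and the 0.8 screening threshold are
# illustrative assumptions, not requirements of the AGG or the AI Act.
import pandas as pd

def selection_rate_report(df: pd.DataFrame, attribute: str,
                          outcome: str = "selected") -> pd.DataFrame:
    """Selection rate per group plus the ratio to the best-placed group."""
    rates = df.groupby(attribute)[outcome].mean().rename("selection_rate").to_frame()
    rates["impact_ratio"] = rates["selection_rate"] / rates["selection_rate"].max()
    # Groups under 0.8 warrant documented review (heuristic threshold only).
    rates["review_needed"] = rates["impact_ratio"] < 0.8
    return rates.sort_values("impact_ratio")

decisions = pd.DataFrame({
    "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
    "selected": [0,    0,   1,   0,   1,   1,   0,   1],
})
print(selection_rate_report(decisions, "gender"))
```

A finding like this does not prove discrimination on its own, but it is exactly the kind of statistical irregularity that can serve as indicia under Section 22 AGG, and the documented mitigation is what supports the employer's counter-proof.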

For a structured anti-discrimination toolkit (job-ad scanner, complaint workflow, training, pay reporting), see our Anti-Discrimination Kit.

3. EU AI Act: HR recruiting as high-risk AI (Annex III point 4)

Annex III point 4 of Regulation (EU) 2024/1689 lists as high-risk:

"Employment, workers management and access to self-employment: (a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates; (b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships."

The high-risk regime becomes directly applicable on 2 August 2026 (Art. 113(b) AI Act). The Commission's Digital Omnibus proposal of 19 November 2025 envisages postponing the application of Annex III to 2 December 2027; as of 2 May 2026 the trilogue is ongoing and the proposal has not been adopted. Until formal adoption, 2 August 2026 remains the legally binding date.

Once high-risk applies, deployers (employers) must among other things:

  • operate the system with documented human oversight by trained staff (Art. 26(2) in conjunction with Art. 14);
  • keep the automatically generated logs (Art. 26(6) in conjunction with Art. 12);
  • inform affected applicants and workers that they are subject to a high-risk AI system (Art. 26(11));
  • carry out a Fundamental Rights Impact Assessment where the deployer falls within the scope of Art. 27 (public bodies and private entities providing public services).

4. ECJ C-203/22 (Dun & Bradstreet, 27 February 2025) — right to information on the logic

On 27 February 2025 the ECJ clarified the scope of the right to information under Article 15(1)(h) GDPR for automated individual decisions within the meaning of Article 22 GDPR: data subjects are entitled to "meaningful information about the logic involved" — sufficiently meaningful to enable the decision to be understood and challenged. A blanket invocation of trade secrets is not enough; where there is a conflict, the information is to be disclosed to a competent authority or court, which then balances secrecy and access.

Consequence for recruiting: where AI-driven pre-selection is used, an employer must be able to explain the central logic of the decision to a rejected applicant on request. Vendor contracts without audit/disclosure rights are a compliance risk.
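
One way to make that explainability operational is sketched below: a per-decision explanation record capturing the main criteria and their contribution to the outcome, so the central logic can be reproduced for a rejected applicant without handing over the model itself. All field names are hypothetical assumptions, not terms of the judgment.

```python
# Hypothetical per-decision explanation record, sketched so that "meaningful
# information about the logic involved" (Art. 15(1)(h) GDPR, ECJ C-203/22)
# can be reproduced on request; every field name is an assumption.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionExplanation:
    applicant_id: str
    system: str                       # which recruiting component decided
    outcome: str                      # e.g. "rejected_pre_screening"
    main_criteria: dict[str, float]   # criterion -> contribution to the score
    human_reviewer: str | None        # documented human oversight, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionExplanation(
    applicant_id="A-2026-0042",
    system="cv-prescreening-v3",
    outcome="rejected_pre_screening",
    main_criteria={"required_certification": -0.6, "years_experience": -0.3},
    human_reviewer="hr.mueller",
)
print(json.dumps(asdict(record), indent=2))
```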

5. Reality check on damages

Section 15 AGG provides two heads of claim: compensation for material damage under Section 15(1) and compensation for non-material harm under Section 15(2). For applicants who would not have been hired even in a discrimination-free selection, Section 15(2) caps the award at three months' salary.

In BAG case-law, awards typically sit at 1–3 gross monthly salaries per claimant. Higher amounts are conceivable in individual cases but require concrete justification (severity, absence of compliance measures, repetition).

The multiplication effect of systematic algorithmic discrimination matters: a single biased model running for months across an applicant cohort can generate many parallel proceedings, as the rough estimate below illustrates. Out-of-court settlements (works council, union, supervisory authority) also become more likely in such cases.
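
A back-of-the-envelope sketch of that multiplication effect; every input figure is a hypothetical assumption, not a statement about actual award levels in any case:

```python
# Purely illustrative exposure estimate for a systematically biased model;
# all figures are hypothetical assumptions.
cohort_rejected = 300          # applicants processed while the bias was live
claim_rate = 0.05              # share who actually bring a Section 15 AGG claim
award_monthly_salaries = 2     # within the typical 1-3 range
gross_monthly_salary_eur = 4_500

exposure_eur = (cohort_rejected * claim_rate
                * award_monthly_salaries * gross_monthly_salary_eur)
print(f"Indicative AGG exposure: EUR {exposure_eur:,.0f}")  # EUR 135,000
```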

Add — depending on the facts — GDPR damages under Art. 82 GDPR, EU AI Act fines under Art. 99 (deployer breaches up to EUR 15 m or 3 % of worldwide annual turnover, applicable from 2 August 2026), GDPR fines under Art. 83, and reputational/follow-on costs.

6. BAG trend: BAG 8 AZR 300/24 (23 October 2025, Daimler Truck)

BAG 8 AZR 300/24 is an equal-pay decision, not an algorithm decision — but it signals a case-law trend that matters for algorithmic discrimination: the BAG held that a single pair-wise comparison with one highest-paid male colleague is enough to trigger the discrimination presumption under Section 22 AGG / Section 7 EntgTranspG. A median comparison is not required.

Transferability to algorithmic discrimination is to be expected as a trend (not a 1:1 citation): if a single comparator already suffices as indicia, then a documented bias finding from a selection algorithm should a fortiori suffice as indicia under Section 22 AGG. The indicia threshold is being lowered — increasing the compliance pressure to audit and document proactively.

7. What you must do — compliance path

Step | Content | Deadline / trigger
HR AI inventory | Full list of all AI/algorithmic components in the recruiting stack (job-ad targeting, ATS filters, pre-screening, video analysis, skill matching). | Now
Risk classification | Per system: Annex III check (point 4, employment); high-risk vs. non-high-risk. | Now
Bias audit | Statistical analysis of past algorithmic decisions by protected attribute; documented mitigation. | Before go-live; annually thereafter
Human oversight | Every algorithmic pre-decision must be reviewable by a trained person before final rejection. | Now (AGG); duty from 2 Aug 2026 (Art. 14, 26 AI Act)
Logging | Selection logs per applicant, retained for at least six months (schema sketch below). | Now (recommended); duty from 2 Aug 2026 (Art. 12, 26 AI Act)
Transparency / right to information | Applicant notice on AI use; logic disclosure on request (Art. 22, 15 GDPR; ECJ C-203/22). | Now
FRIA | Fundamental Rights Impact Assessment under Art. 27 AI Act, where the deployer falls within its scope (public bodies, private providers of public services). | Before high-risk deployment
Vendor contract | Audit rights, logic disclosure, bias-test obligations, liability allocation. | On contract signing / renewal
Training | HR staff training in detecting algorithmic discrimination and in AI literacy under Art. 4 AI Act (in force since 2 Feb 2025). | Ongoing
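
A minimal logging sketch matching the Logging step above, assuming an append-only JSON Lines file with one entry per algorithmic selection event. The field names and file layout are assumptions; Art. 26(6) AI Act fixes only the minimum retention period, not a format.

```python
# Sketch of an append-only, per-applicant selection log (JSON Lines).
# Field names are assumptions; Art. 26(6) AI Act prescribes at least six
# months' retention of logs, not any particular structure.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("selection_log.jsonl")

def log_selection_event(applicant_id: str, system: str, stage: str,
                        outcome: str, score: float,
                        reviewer: str | None) -> None:
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "system": system,           # model/component version, for auditability
        "stage": stage,             # e.g. "pre_screening", "interview_ranking"
        "outcome": outcome,
        "score": score,
        "human_reviewer": reviewer, # None would flag a purely algorithmic step
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_selection_event("A-2026-0042", "cv-prescreening-v3",
                    "pre_screening", "advanced", 0.81, "hr.mueller")
```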

Sources

  1. German General Equal Treatment Act (AGG), Sections 1, 3, 7, 15, 22
  2. Regulation (EU) 2024/1689 (AI Act), Annex III point 4, Articles 9–15, 26, 27, 50, 99, 113
  3. ECJ, judgment of 27 February 2025, C-203/22 (Dun & Bradstreet Austria) — right to information on the logic of automated decisions under Art. 15(1)(h) GDPR
  4. BAG, judgment of 23 October 2025 — 8 AZR 300/24 (pair-wise comparison equal pay, Daimler Truck) as example of lowered indicia threshold
  5. European Commission, Digital Omnibus on AI, proposal of 19 November 2025 (trilogue ongoing, not adopted as of 2 May 2026)
  6. German Federal Data Protection Act (BDSG), Section 26 (employee data protection)