Human Oversight (Article 14 EU AI Act)

Human oversight for high-risk AI

Practitioner's note: This article is practice-oriented compliance documentation, not legal advice. We are a compliance specialist, not a law firm. For legally binding advice, please consult a licensed lawyer.

TL;DR

Human Oversight under Article 14 EU AI Act is the obligation to design high-risk AI systems in such a way that natural persons can effectively oversee them. Provider obligation (design): stop button, control mechanisms, clear outputs. Deployer obligation: persons with competence, training and authority (Article 26(2)).

What is Human Oversight (Article 14 EU AI Act)?

Providers must build human oversight mechanisms into the system's design (Article 14(4)). The oversight measures must enable the persons charged with oversight to:

- properly understand the system's relevant capacities and limitations and monitor its operation
- remain aware of automation bias (Article 14(4)(b))
- correctly interpret the system's output
- disregard, override or reverse the output
- intervene in the operation or interrupt the system via a 'stop' button or a similar procedure

Deployers must designate persons with the competence, training and authority to exercise oversight (Article 26(2)). Section 22 of the German General Equal Treatment Act (AGG) reverses the burden of proof in discrimination cases; in combination with Articles 14 and 26 EU AI Act, a lack of human oversight can establish AGG liability even before Annex III applies.
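The design-side obligation can be illustrated with a minimal sketch: a controller that lets a human operator interrupt the system (in the spirit of the 'stop' button in Article 14(4)(e)) and logs every intervention. All class, method and operator names here are illustrative assumptions, not terms from the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OversightController:
    """Illustrative provider-side control: a human operator can
    interrupt the system, and every intervention is logged."""
    stopped: bool = False
    log: list = field(default_factory=list)

    def stop(self, operator: str, reason: str) -> None:
        # Record who interrupted the system, when, and why.
        self.stopped = True
        self.log.append((datetime.now(timezone.utc).isoformat(), operator, reason))

    def run_inference(self, model, features):
        # Refuse to produce output while a human has interrupted the system.
        if self.stopped:
            raise RuntimeError("System interrupted by human operator")
        return model(features)

controller = OversightController()
result = controller.run_inference(lambda x: "score: 0.87", {"years_experience": 4})
controller.stop(operator="jane.doe", reason="unexpected rejection pattern")
```

The point of the sketch is that the interrupt is enforced at the system boundary: once triggered, no further output is produced until the interruption is reviewed and lifted.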

Practical example

Practical requirements for an HR recruiting tool:

- HR staff trained in algorithmic decision-making
- Override option for every automatic rejection
- Bias indicator display
- Audit trail of all decisions
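Three of these requirements (override option, bias indicator display, audit trail) can be sketched in code. This is a minimal illustration under assumed names (`Decision`, `ReviewQueue`, the `bias_indicator` field); it is not a reference implementation of the Act's requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Decision:
    candidate_id: str
    ai_outcome: str            # e.g. "reject", proposed by the AI system
    bias_indicator: float      # illustrative score surfaced to the human reviewer
    final_outcome: Optional[str] = None
    reviewer: Optional[str] = None

@dataclass
class ReviewQueue:
    audit_trail: list = field(default_factory=list)

    def submit(self, decision: Decision) -> Decision:
        # An automatic rejection is only a proposal until a human reviews it.
        self._record("ai_proposed", decision)
        return decision

    def review(self, decision: Decision, reviewer: str, override: bool) -> Decision:
        # The reviewer sees the bias indicator and can override the AI outcome.
        decision.reviewer = reviewer
        decision.final_outcome = "advance" if override else decision.ai_outcome
        self._record("human_reviewed", decision)
        return decision

    def _record(self, event: str, d: Decision) -> None:
        # Audit trail: every step is logged with a timestamp.
        self.audit_trail.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "candidate": d.candidate_id,
            "ai_outcome": d.ai_outcome,
            "final_outcome": d.final_outcome,
            "reviewer": d.reviewer,
            "bias_indicator": d.bias_indicator,
        })

queue = ReviewQueue()
d = queue.submit(Decision("c-102", ai_outcome="reject", bias_indicator=0.72))
d = queue.review(d, reviewer="hr.lead", override=True)  # human overrides the rejection
```

The design choice worth noting: the AI outcome and the final outcome are separate fields, so the audit trail always shows whether a human confirmed or overrode the machine proposal.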

Frequently asked questions

Is a 'stop button' sufficient?
No, that is only the minimum baseline. Human oversight requires understanding of the system, the ability to intervene, and the avoidance of automation bias.
What is automation bias?
The tendency to follow automated recommendations uncritically. The human relies on the algorithm without questioning it — Article 14(4)(b) requires awareness of this.
Do all employees have to understand HR AI?
No, only those with an oversight role. In practice, this often means a dedicated 'AI oversight role' within the HR team.

See also