High-impact / systemic risk (AI Regulation)

Article 51 EU AI Act — high-performance GPAI classification

Practitioner's note: This article is practice-oriented compliance documentation, not legal advice. We are compliance specialists, not a law firm. For legally binding information, please consult a licensed lawyer.

TL;DR

The systemic-risk classification (Article 51 EU AI Act) applies to general-purpose AI (GPAI) models of particularly high capability. A model is presumed to pose systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations (FLOPs).

What is high-impact / systemic risk (AI Regulation)?

As of April 2026, approximately 25 models worldwide are classified as posing systemic risk. Examples: GPT-4o, Claude 3 Opus / 3.5 Sonnet, Gemini 1.5 / 2 Pro, Llama 3.1 405B, Mistral Large 2.

Practical example

OpenAI's GPT-4, with training compute estimated at roughly 2×10^25 FLOPs, exceeds the 10^25 threshold and is therefore classified as posing systemic risk. Resulting obligations include adversarial testing, systemic-risk assessment and mitigation, and reporting to the AI Office.
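The threshold check itself is simple arithmetic. A minimal sketch, using the common 6·N·D rule of thumb for estimating dense-transformer training compute (FLOPs ≈ 6 × parameters × training tokens); the parameter and token counts below use Meta's publicly reported figures for Llama 3.1 405B and are illustrative, not a regulatory calculation method:

```python
# Illustrative check against the Article 51(2) compute threshold.
# The 6*N*D heuristic is a rough estimate for dense transformer
# training compute; it is NOT prescribed by the EU AI Act.

THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold


def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the 6*N*D rule of thumb."""
    return 6 * params * tokens


def presumed_systemic_risk(flops: float) -> bool:
    """True if cumulative training compute exceeds the threshold."""
    return flops > THRESHOLD_FLOPS


# Example: 405e9 parameters, ~15e12 training tokens (publicly
# reported for Llama 3.1 405B) -> about 3.6e25 FLOPs, above 1e25.
flops = training_flops(405e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(flops)}")
```

Note that the Act attaches the presumption to cumulative training compute as reported by the provider; the heuristic above merely shows why current frontier models land well above the line.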

Frequently asked questions

Who decides?
The EU AI Office, based on information submitted by the provider and its own technical assessment.
Does this affect SMEs?
As a rule, no. The obligations apply only to providers of such frontier-scale models (OpenAI, Anthropic, etc.).

See also