GPAI (General Purpose AI)

General-purpose AI models under Articles 53–55 EU AI Act

Practitioner's note: This article is practice-oriented compliance documentation, not legal advice. We are a compliance specialist, not a law firm. For legally binding information please consult a licensed lawyer.

TL;DR

General Purpose AI (GPAI) within the meaning of Article 3(63) EU AI Act refers to AI models that, by virtue of their significant general capabilities, can be used for a wide range of tasks. Examples: ChatGPT, Claude, Gemini, LLaMA. Provider obligations under Articles 53–55 have applied since 02 August 2025. Where systemic risk is present (≥10²⁵ FLOPs of training compute): additional obligations under Article 55.

What is GPAI (General Purpose AI)?

GPAI models are subject to three pillars of obligations under the GPAI Code of Practice (finalised 10 July 2025): transparency, copyright, and safety and security (the last pillar applying only to models with systemic risk).

Threshold for 'systemic risk' (Article 51): ≥10²⁵ FLOPs of cumulative training compute. As of 2026, only frontier models (GPT-4-class, Claude Opus, Gemini Ultra) reach this threshold.
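The Article 51 threshold is a raw compute figure, so a rough back-of-the-envelope check is possible. The sketch below uses the common scaling-law heuristic of roughly 6 FLOPs per parameter per training token (an assumption for illustration; the AI Act does not prescribe any estimation method, and the model sizes below are made up):

```python
# Rough check against the Article 51 systemic-risk presumption (1e25 FLOPs).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate cumulative training compute via the 6 * N * D heuristic."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the Article 51 threshold."""
    return estimate_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical examples, not real model disclosures:
# a 7B-parameter model on 2T tokens lands around 8.4e22 FLOPs, far below 1e25.
print(presumed_systemic_risk(7e9, 2e12))   # False
# a 1T-parameter model on 2e13 tokens lands around 1.2e26 FLOPs, above 1e25.
print(presumed_systemic_risk(1e12, 2e13))  # True
```

Note that the legal threshold counts *cumulative* training compute (including, for example, fine-tuning runs), so any single-run estimate like this is a lower bound.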

Practical example

Practical consequences for SMEs acting as deployers of GPAI:

- Using ChatGPT: you are a deployer, not a provider, so no direct GPAI obligations apply.
- However, AI literacy under Article 4 has been mandatory since 02/2025.
- For Annex III use cases (HR, recruiting): additional high-risk obligations apply from 12/2027.

Anyone who builds their own RAG system on top of a GPAI API and distributes it under their own name makes a substantial modification and thereby becomes a provider.
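The provider/deployer distinction above can be sketched as a small decision helper. This is an illustration of the two triggers named in the text only, not a legal test; the function name and flags are hypothetical, and real classification under the AI Act depends on further factors:

```python
def ai_act_role(rebranded: bool, substantially_modified: bool) -> str:
    """Simplified sketch of the provider/deployer logic described above.

    Mirrors only the two triggers named in the text: distributing under
    your own name/brand, or substantially modifying the model.
    """
    if rebranded or substantially_modified:
        return "provider"  # own brand or substantial modification (e.g. own RAG product)
    return "deployer"      # using the model unchanged, e.g. ChatGPT via API

print(ai_act_role(rebranded=False, substantially_modified=False))  # deployer
print(ai_act_role(rebranded=True, substantially_modified=False))   # provider
```

In practice this check would sit at the start of any compliance assessment, because the applicable obligation set (Articles 53–55 versus deployer duties) follows from the role.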

Frequently asked questions

Am I a GPAI provider if I integrate ChatGPT into our software?
No, provided that you retain the OpenAI brand and use the model unchanged. You remain a deployer. In the case of re-branding or substantial modification, you may become a provider.
What is the GPAI Code of Practice?
A voluntary compliance framework recognised by the European Commission. Joining creates a presumption of conformity for Articles 53–55. Finalised on 10 July 2025 by the AI Office.
What obligations do I have as a GPAI user?
As a deployer: AI literacy (Article 4) plus prohibited-practices screening; from 12/2027, Annex III use cases additionally trigger the high-risk deployer obligations (input-data relevance review, log retention, FRIA where applicable).

See also