Real-World Testing (Article 60)
Testing in real-world conditions outside a sandbox
Practitioner's note: This article is practice-oriented compliance documentation, not legal advice. It was written by compliance specialists, not a law firm; for legally binding advice, please consult a licensed lawyer.
TL;DR
Article 60 EU AI Act permits real-world testing of high-risk AI systems outside regulatory sandboxes — subject to conditions including approval of a testing plan, notification of the market surveillance authority, risk management, and protection of the people involved in the test.
What is Real-World Testing under Article 60?
Real-world testing means temporarily operating a high-risk AI system under real conditions — with real users and real data — before it is placed on the market. Article 60 attaches the following requirements:
- Approval of the real-world testing plan by the competent market surveillance authority
- Risk management system (Article 9)
- Data protection safeguards for data subjects
- A stop mechanism to halt testing in the event of serious incidents
- Documentation of all test data and results
- Maximum duration: 6 months, extendable once by a further 6 months
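Internally, the conditions above are often tracked as a pre-flight checklist before a test goes live. A minimal sketch of such a gate (all names are illustrative, not taken from the Act):

```python
# Hypothetical pre-launch checklist mirroring the Article 60 conditions above.
# The keys and the gating logic are illustrative assumptions, not legal terms.
REQUIREMENTS = {
    "testing_plan_approved": True,      # approval by market surveillance authority
    "risk_management_in_place": True,   # risk management system per Article 9
    "data_protection_safeguards": True, # safeguards for affected persons
    "stop_mechanism_ready": True,       # emergency stop for serious incidents
    "documentation_process": True,      # logging of all test data and results
}

def unmet_conditions(checks: dict) -> list:
    """Return the names of unmet conditions; an empty list means the test may start."""
    return [name for name, met in checks.items() if not met]

missing = unmet_conditions(REQUIREMENTS)
print("ready" if not missing else "blocked: " + ", ".join(missing))
```

Keeping the checklist as data rather than scattered if-statements makes it easy to log the gate's state alongside the test documentation that Article 60 requires anyway.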
Practical example
An HR tool provider tests a new candidate scoring model with five customers in a live recruiting workflow. It files a real-world testing notification with the BfDI and the competent market surveillance authority. A stop mechanism automatically halts the test whenever measured bias exceeds a 5% threshold.
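The stop mechanism in the example can be sketched as a simple latched monitor: once the bias metric crosses the threshold, scoring stays halted until humans review the incident. This is a minimal illustration — the 5% threshold comes from the example above, and the metric name and latching behavior are assumptions, not prescriptions from the Act:

```python
from dataclasses import dataclass

@dataclass
class BiasStopMechanism:
    """Hypothetical stop mechanism: latches into a stopped state as soon as a
    measured bias metric (e.g. a selection-rate gap between demographic groups)
    exceeds the configured threshold."""
    threshold: float = 0.05  # 5% threshold from the worked example (assumption)
    stopped: bool = False

    def record(self, bias_metric: float) -> bool:
        """Record one measurement; return True while testing may continue."""
        if bias_metric > self.threshold:
            self.stopped = True  # latch: stays stopped until manual review
        return not self.stopped

monitor = BiasStopMechanism()
print(monitor.record(0.03))  # within tolerance -> True
print(monitor.record(0.07))  # exceeds 5% -> stop -> False
print(monitor.record(0.01))  # remains stopped (latched) -> False
```

Latching the stop (rather than resuming automatically when the metric drops) matches the documentation duty above: every triggering incident leaves a reviewable trace before testing restarts.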
Frequently asked questions
Why not a sandbox?
A sandbox provides neither real data nor real users. For high-risk AI that scores real candidate profiles, sandbox results are therefore of limited value.
What fines apply to breaches?
Under Article 99, breaches of provider and deployer obligations such as those in Article 60 carry fines of up to EUR 15 million or 3% of worldwide annual turnover; the top tier of EUR 35 million / 7% is reserved for prohibited practices under Article 5.
Who can carry out real-world testing?
Both providers and deployers can carry out real-world testing, each filing its own notification.