ChatGPT in the Enterprise: AUP Template + 8 Prohibition Clauses

Practitioner note: This is not legal advice. For specific situations, consult a qualified attorney or compliance officer.

TL;DR

  • Acceptable Use Policy (AUP) with 8 mandatory prohibition clauses for GPAI tools (ChatGPT, Copilot, Claude, Gemini)
  • No personal data in prompts — pseudonymize or use generic examples; violations equal a data breach under Art. 33 GDPR
  • Tenanted accounts only: personal ChatGPT accounts banned, only enterprise tenants with EU data boundary
  • Training plus quiz plus signed AUP required before access — track completion
  • Quarterly audit of tenant logs and annual AUP review

1. Prohibition: Personal Data

No real names, addresses, dates of birth, health records, or other personal data in prompts. Use pseudonymization or generic examples. A violation constitutes a personal data breach (Art. 4(12) GDPR) and starts the 72-hour notification clock under Art. 33.
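The pseudonymization rule can be enforced with a thin pre-processing step before any text leaves the company network. A minimal sketch, assuming a locally maintained mapping of known identifiers to placeholders; the names, the `[EMAIL]` placeholder, and the regex are illustrative, and a regex is not a substitute for a proper PII scanner:

```python
import re

def pseudonymize(text: str, mapping: dict[str, str]) -> str:
    """Replace known personal identifiers with stable placeholders.

    `mapping` (e.g. {"Anna Schmidt": "EMPLOYEE_1"}) is kept locally;
    the reverse mapping never reaches the AI tool.
    """
    for real, placeholder in mapping.items():
        text = text.replace(real, placeholder)
    # Catch e-mail addresses missing from the mapping (simple heuristic,
    # not a full PII scan).
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)

prompt = pseudonymize(
    "Draft a reply to Anna Schmidt (anna.schmidt@example.com) about her claim.",
    {"Anna Schmidt": "EMPLOYEE_1"},
)
```

The mapping stays on the company side, so a reviewer can re-identify the placeholders in the AI's answer without the vendor ever seeing the real data.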

2. Prohibition: Trade Secrets

No patents, strategies, confidential contracts, internal financials or unreleased product specifications in prompts. Even with enterprise tenants, the residual leakage risk is non-zero.

3. Prohibition: Unverified Adoption

AI output must always be reviewed before use. For factual claims and citations, require second-source verification, and document the verification step in the work product.

4. Prohibition: AI for High-Risk Decisions

No automated decisions in HR, credit, insurance or similar high-risk areas without human review. EU AI Act Annex III categories trigger Art. 14 human oversight obligations.

5. Mandatory External Transparency

AI-generated content delivered to customers must be clearly labeled (Art. 50 EU AI Act). Internal use does not require labeling but should be tracked.

6. Tenanted Accounts Only

Personal ChatGPT accounts are prohibited. Use only enterprise tenants (ChatGPT Enterprise, Copilot for Microsoft 365, Claude for Work, Gemini Enterprise) with a documented data boundary and a data processing agreement (DPA). Block consumer endpoints at the firewall.
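Blocking consumer endpoints typically happens at the proxy or firewall, and the decision logic amounts to a hostname blocklist. A minimal sketch of that check; the listed hostnames are examples only, so verify your vendors' current consumer domains before deploying:

```python
from urllib.parse import urlparse

# Example consumer endpoints of common GPAI tools (illustrative;
# check vendor documentation for the current list).
BLOCKED_HOSTS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def is_allowed(url: str) -> bool:
    """Deny a request if its host matches a blocked consumer domain
    or any subdomain of one; enterprise tenant domains pass through."""
    host = urlparse(url).hostname or ""
    return not any(host == b or host.endswith("." + b) for b in BLOCKED_HOSTS)
```

In practice the same list would be pushed to the proxy or DNS filter; the function just shows the matching rule, including the subdomain case.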

7. Prohibition: Manipulative Use

No deepfakes, manipulated sources, phishing templates, or content designed to deceive. Such use violates the AUP and likely external law (German Criminal Code, copyright, defamation).

8. Mandatory Training and Quiz

Before access is granted, employees must complete modules 4-6 of the AI Literacy curriculum (Art. 4 EU AI Act) and pass the accompanying quiz with a score of at least 70%. Track completion in the LMS.
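The access gate above reduces to three checks: required modules completed, quiz passed, AUP signed. A minimal sketch; the record fields and module numbers follow the text, while the LMS integration itself is assumed:

```python
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    modules_completed: set[int]  # module numbers finished in the LMS
    quiz_score: int              # percent
    aup_signed: bool             # signed acknowledgment on file

REQUIRED_MODULES = {4, 5, 6}
PASS_MARK = 70

def may_grant_access(rec: TrainingRecord) -> bool:
    """Grant tool access only if all three AUP preconditions hold."""
    return (REQUIRED_MODULES <= rec.modules_completed
            and rec.quiz_score >= PASS_MARK
            and rec.aup_signed)
```

Wiring this into provisioning (rather than checking manually) keeps the "training before access" rule enforceable and auditable.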

Summary

An Acceptable Use Policy is the cheapest, fastest control for GPAI tools and a baseline expectation under Art. 4 EU AI Act (effective 2 February 2025). Combine the eight prohibition clauses with a data classification reminder, quarterly tenant audit logs, and an annual AUP review. Personal-data leakage in prompts is the most common AUP violation and a notifiable breach; train accordingly.
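The tenant-log audit can start with a cheap first-pass filter before manual review. A sketch that flags exported log lines containing e-mail addresses; the log format is a hypothetical example, and a regex is no replacement for a full PII scan:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def flag_prompts(log_lines: list[str]) -> list[str]:
    """Return log lines whose prompt text contains an e-mail address,
    as candidates for manual review in the quarterly audit."""
    return [line for line in log_lines if EMAIL_RE.search(line)]

logs = [
    "user=a1 prompt='summarize q3 figures'",
    "user=b2 prompt='write to max@example.com about his contract'",
]
flagged = flag_prompts(logs)
```

Each flagged line is a potential Clause 1 violation and, after confirmation, feeds the Art. 33 breach process described above.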


Frequently Asked Questions

What if an employee enters personal data?
It constitutes a personal data breach with notification duties under Art. 33 GDPR: report immediately and log the AUP violation.
Is training alone sufficient?
No. Require training plus a quiz plus signed acknowledgment of the AUP.
How do you verify compliance?
Tenant audit logs plus annual AUP reviews.
