Transparency Obligations under Art. 50 EU AI Act: Chatbots, Deepfakes, AI Content

Practitioner note: This is not legal advice. For specific situations, consult a qualified attorney or compliance officer.

TL;DR

  • Art. 50 applies from Aug 2, 2026 — not affected by the Digital Omnibus proposal (Nov 19, 2025; trilogue ongoing, NOT adopted)
  • Chatbot disclosure: "You are chatting with an AI" at the start of the conversation
  • Deepfake labeling: realistic synthetic persons or scenes must be clearly marked
  • Watermarking: Art. 50(2) requires synthetic content to be machine-readable as artificial (C2PA standard dominates)
  • Fine risk: up to EUR 15 million or 3% of global annual turnover, whichever is higher

1. What Art. 50 requires

Art. 50 imposes transparency duties for four categories: AI systems that interact directly with people (chatbots), emotion-recognition and biometric-categorization systems, deepfakes (synthetic or manipulated audio, video, and images), and AI-generated or manipulated text published to inform the public on matters of public interest. Each category has its own disclosure rule, and the duty falls partly on providers and partly on deployers.

2. Chatbot labeling

Minimum requirement: a notice such as "You are speaking with an AI / a bot. We use AI to provide faster answers." Position: the bot's first message, visible in the UI. Exception under Art. 50(1): disclosure can be skipped if the AI nature is obvious from the circumstances and context of use (e.g., a recognizably synthetic voice). In practice, always disclose explicitly; the exception is narrow.
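
One minimal way to enforce the first-message rule in code is to prepend the notice to the first bot turn of every conversation. This sketch is illustrative only; the `DISCLOSURE` wording and the `Conversation` wrapper are assumptions, not prescribed by the Act:

```python
# Sketch: guarantee an Art. 50(1)-style notice in the first bot message.
# Wording and class names are illustrative assumptions.

DISCLOSURE = "You are chatting with an AI. We use AI to provide faster answers."

class Conversation:
    def __init__(self):
        self.messages = []          # list of (role, text) tuples
        self._disclosed = False     # has the AI notice been shown yet?

    def add_bot_message(self, text: str) -> None:
        # Prepend the disclosure exactly once, on the first bot turn.
        if not self._disclosed:
            text = f"{DISCLOSURE}\n\n{text}"
            self._disclosed = True
        self.messages.append(("assistant", text))

conv = Conversation()
conv.add_bot_message("How can I help you today?")
conv.add_bot_message("Here is the answer.")
print(conv.messages[0][1].startswith(DISCLOSURE))  # True: first turn carries the notice
print(DISCLOSURE in conv.messages[1][1])           # False: later turns do not repeat it
```

Centralizing the check in one method means no UI code path can emit a bot message without the disclosure having been shown first.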

3. Deepfake labeling

A deepfake is AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, or events and would falsely appear to a person to be authentic or truthful. Obligations: a visible "Created with AI" indication or equivalent, in or on the content (not hidden). For video: an indication at the start or a watermark. For images: a visual marker or caption. Artistic and satirical exception: discreet but still recognizable labeling.
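
For web-published images, the visible indication can be generated together with the markup so the label cannot be dropped by accident. A sketch using only the standard library; the `<figure>`/`<figcaption>` pattern and the label wording are assumptions (the Act requires a clear notice, not a fixed phrase):

```python
from html import escape

LABEL = "Created with AI"  # illustrative wording, not mandated by the Act

def ai_figure(src: str, alt: str, label: str = LABEL) -> str:
    """Render an image with a visible, non-hidden AI label as its caption."""
    return (
        f'<figure class="ai-generated">'
        f'<img src="{escape(src, quote=True)}" alt="{escape(alt, quote=True)}">'
        f'<figcaption>{escape(label)}</figcaption>'
        f'</figure>'
    )

print(ai_figure("hero.png", "Synthetic product scene"))
```

Rendering the caption inside the `<figure>` keeps the label "in or on the content" rather than in detached page text.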

4. Watermarking under Art. 50(2)

Under Art. 50(2), synthetic audio, video, image, and text output must be marked in a machine-readable format and be detectable as artificially generated or manipulated. 2026 standards: C2PA (Coalition for Content Provenance and Authenticity) is dominant; ISO/IEC 22376 emerging. Implement the marking at the generation-pipeline level so downstream platforms can detect AI provenance automatically.

5. Exceptions and hardship clause

Art. 50(4) narrows the deepfake duty for evidently artistic, satirical, or fictional works: the label may be discreet and must not hamper the display or enjoyment of the work, but it cannot be omitted. The chatbot duty under Art. 50(1) falls away only where the AI nature is obvious from the circumstances, and Art. 50 carves out AI systems authorized by law for the detection, prevention, or investigation of criminal offences. Beyond these, there is no general hardship clause; SMEs get no blanket exemption.

6. Practical checklist

  • Chatbots: explicit "You are chatting with an AI" notice in the first bot message
  • Deepfakes: visible "Created with AI" indication in or on the content, not hidden
  • Generative pipelines: machine-readable marking (e.g., C2PA) at the point of generation
  • Artistic/satirical content: discreet but still recognizable labeling
  • Deadline: disclosures and watermarking in place by Aug 2, 2026

Summary

Art. 50 is a small set of rules with broad reach: every chatbot, every AI-generated image, every deepfake. It applies from Aug 2, 2026 and is not affected by the Digital Omnibus proposal (Nov 19, 2025; trilogue ongoing, NOT adopted). Implement disclosures and C2PA watermarking now: this is the most visible compliance signal to customers and supervisors.

Frequently Asked Questions

Do we have to label our chatbot?
Yes, from Aug 2, 2026. Art. 50(1): the user must be informed that they are interacting with an AI system. A notice such as 'You are chatting with an AI' at the start of the conversation is sufficient.
Do AI-generated marketing images have to be labeled?
Yes, if they depict an 'artificial person' (deepfake-like) or a 'realistic scene'. For purely illustrative AI images without persons: not mandatory, but recommended.
What must the labeling contain?
Clearly recognizable wording such as 'Generated with AI' or a similar notice. Position: visible on the image/text, not hidden.
Which watermarking standards exist?
C2PA Content Credentials (Adobe/Microsoft), SynthID (Google DeepMind), Merkle-tree methods (OpenAI). As of April 2026, C2PA is dominant.
Exception for artistic freedom?
Art. 50(4): for artistic/satirical/fictional use, the labeling may be discreet, but cannot be omitted.
Fines for violations?
Art. 99(4)(b): up to EUR 15 million or 3% of global annual turnover, whichever is higher, for Art. 50 violations. In practice, supervisory-authority fines will likely start at around EUR 5,000-50,000.