EU AI Act Article 50 — Transparency & Watermarking
Article 50 is the EU AI Act provision that forces every consumer-facing AI product into provable transparency. It applies from 2 August 2026, with fines of up to €15M or 3% of global turnover. The Code of Practice on marking AI-generated content (final draft expected June 2026) adds a two-layer technical specification most current tools do not meet.
What the article actually says
Article 50(1) — natural-person interaction disclosure
Providers of AI systems intended to interact directly with natural persons (chatbots, voice agents, virtual assistants) must design + develop them so users are informed they are interacting with an AI system, unless this is obvious to a reasonably well-informed, observant, and circumspect person.
Article 50(2) — synthetic content marking
Providers of AI systems generating synthetic audio, image, video or text content must ensure outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. Technical solutions must be effective, interoperable, robust, and reliable to the extent technically feasible.
Article 50(4) — deployer disclosure for deepfakes + public-interest text
Deployers of AI systems that produce or manipulate image/audio/video constituting a deep fake must visibly disclose the content has been artificially generated or manipulated. Deployers using AI-generated text on matters of public interest must disclose unless the output has undergone human review or editorial control with editorial responsibility.
Common gaps — how teams fail Article 50 audit
- Single-layer C2PA only. Code of Practice draft requires two layers. Most current "Content Credentials" implementations fail.
- No watermark survival testing. If your watermark dies on a screenshot, JPEG re-save, or platform re-encode, it doesn't satisfy "robust + reliable to the extent technically feasible."
- Deployer disclosure missing on deepfakes. Article 50(4) is about visible disclosure — not just metadata. Forgetting the visible "AI-generated" badge on user-facing surfaces is a common audit fail.
- No audit trail for the marking decision. An auditor will ask: "show me the decision log for which outputs were marked vs not." If you don't have it, you fail Article 12 record-keeping at the same time.
- Text watermarking deferred. Article 50(2) explicitly covers text. SynthID-style invisible watermarks for text are technically feasible — "we don't watermark text" is no longer a defensible answer.
- No fallback fingerprinting. Code of Practice says fingerprinting is the documented fallback. No fingerprint DB = no fallback evidence.
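The audit-trail gap above is cheap to close. A minimal sketch of a marking decision record, assuming a JSON log with a tamper-evident hash (the field names are illustrative, not taken from the Code of Practice):

```python
import datetime
import hashlib
import json

def marking_decision_record(output_id, content_type, marked, layers, reason):
    """Log why an output was (or was not) marked -- an Article 12-style record."""
    record = {
        "output_id": output_id,
        "content_type": content_type,  # "image" | "audio" | "video" | "text"
        "marked": marked,
        "layers": layers,              # e.g. ["c2pa", "invisible_watermark", "fingerprint"]
        "reason": reason,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Hashing the sorted record makes later tampering detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record
```

One append-only line per generated output is enough to answer the auditor's "marked vs not" question.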
How MEOK covers Article 50
- meok-watermark-attest-mcp — C2PA manifest signing + SynthID-class invisible watermark + perceptual fingerprint. All three layers, signed cert per batch.
- /article-50-kit (£999) — turnkey 7-day deployment of all three layers in your GenAI pipeline. Includes the signed conformity attestation.
- /audit-prep-bundle (£4,950) — Article 50 + Articles 9/10/14/26 wrapped in a 14-day engagement with full evidence pack.
- Verification — every signed cert has a public verify URL at meok-attestation-api.vercel.app/verify that your auditor can curl independently.
Frequently asked
When does EU AI Act Article 50 apply?
Article 50 transparency obligations apply from 2 August 2026 (24 months after entry into force on 1 August 2024). Providers of AI systems generating synthetic content must mark outputs in a machine-readable format as artificially generated. Deployers must visibly disclose AI generation to people exposed to it.
What does 'machine-readable marking' actually mean?
Per the second draft of the EU Code of Practice on marking AI-generated content (3 March 2026), machine-readable marking means a two-layer approach: secured C2PA Content Credentials metadata embedded in the file PLUS an imperceptible watermark in the content itself. Fingerprinting/logging is the documented fallback when neither layer survives.
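The layered scheme can be sketched as a verification-side fallback chain (a simplification; the per-layer checks themselves are assumed, not specified here):

```python
def classify_provenance(has_c2pa: bool, watermark_detected: bool,
                        fingerprint_match: bool) -> str:
    """Map which layers survived to the evidence class a verifier would report."""
    if has_c2pa and watermark_detected:
        return "both-layers-intact"
    if has_c2pa:
        return "metadata-only"         # watermark lost, e.g. heavy re-encode
    if watermark_detected:
        return "watermark-only"        # metadata stripped, e.g. screenshot
    if fingerprint_match:
        return "fingerprint-fallback"  # re-identified via the logged fingerprint
    return "no-evidence"
```

The fallback branch only exists if a fingerprint was logged at generation time, which is why the draft treats fingerprinting as documented evidence rather than an optional extra.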
Does C2PA Content Credentials alone satisfy Article 50?
No. The Code of Practice draft explicitly says single-layer C2PA metadata is not sufficient because it strips on screenshot, recompression, or platform re-encoding. The two-layer requirement (C2PA + invisible watermark) is what survives common content workflows.
What about text outputs?
Article 50(2) covers text-generating AI. The Code of Practice draft prefers imperceptible watermarking for text where technically feasible (e.g. SynthID-class watermarks at the model output layer). Where watermarking text is not feasible, the deployer disclosure obligation in Article 50(4) alone applies — the visible 'this is AI-generated' notice.
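As a toy illustration of how statistical text watermarking works (a green-list scheme in the spirit of SynthID-class approaches, not an implementation of any of them): each generation step deterministically splits the vocabulary into "green" and "red" halves using a secret key, generation prefers green tokens, and detection counts the green fraction against the 50% expected for unwatermarked text.

```python
import hashlib
import math

KEY = "demo-key"  # illustrative secret; real schemes protect the key

def is_green(position: int, prev_token: str, token: str, key: str = KEY) -> bool:
    """Pseudo-randomly assign ~half the vocabulary to the green list per step."""
    digest = hashlib.sha256(f"{key}|{position}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def generate_watermarked(vocab, length, start="<s>"):
    """Toy 'model' that always prefers a green continuation."""
    out, prev = [], start
    for i in range(length):
        nxt = next((t for t in vocab if is_green(i, prev, t)), vocab[0])
        out.append(nxt)
        prev = nxt
    return out

def detection_z_score(tokens, start="<s>"):
    """z-score of the green count vs. the 50% null for unwatermarked text."""
    prevs = [start] + tokens[:-1]
    greens = sum(is_green(i, p, t) for i, (p, t) in enumerate(zip(prevs, tokens)))
    n = len(tokens)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)
```

Watermarked output scores far above the roughly standard-normal null distribution; ordinary text does not, which is what makes the mark detectable without storing the text itself.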
Who's exempt from Article 50?
Article 50(2) exempts AI systems performing 'an assistive function for standard editing or which do not substantially alter the input data' (typewriter mode). Article 50(4) exempts where AI use is 'authorised by law to detect, prevent, investigate or prosecute criminal offences'. Both exemptions are narrow — most consumer-facing GenAI products are NOT exempt.
What's the maximum fine for Article 50 non-compliance?
Article 99(4) caps the fine at €15 million or 3% of global annual turnover, whichever is higher. Per-infringement basis — multiple non-compliant outputs can compound. Use our /fine-calculator to model your exposure.
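The Article 99(4) ceiling itself is a one-line calculation (a sketch of the cap only; per-infringement compounding and mitigating factors are out of scope here):

```python
def article_99_4_cap_eur(global_annual_turnover_eur: float) -> float:
    """Article 99(4): up to EUR 15 million or 3% of total worldwide
    annual turnover for the preceding financial year, whichever is higher."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)
```

For a company with €2B turnover the 3% branch dominates (€60M); below €500M turnover the flat €15M cap applies.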
How long does it take to implement Article 50 compliance?
MEOK's Article 50 Watermarking Kit (£999) ships in 7 days and includes: C2PA manifest signing with DigiCert cert, SynthID-class invisible watermark embedding, perceptual fingerprint database, signed Article 50 conformity attestation. The Audit-Prep Bundle (£4,950) wraps Article 50 plus Articles 9/10/14/26 in a 14-day engagement.
Will the watermark survive screenshots, recompression, or light editing?
Yes for SynthID-class invisible watermarks at the model output layer (~98% survival on common edits). C2PA metadata strips on screenshot but the perceptual fingerprint database lets you re-identify the artefact later. The three-layer redundancy (C2PA + watermark + fingerprint) is exactly why single-layer tools fail audit.
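The fingerprint layer can be sketched in a few lines with an average hash (a deliberately simple stand-in for production perceptual hashing): uniform brightness shifts of the kind recompression introduces leave the hash close in Hamming distance, so the artefact can still be re-identified even after both marking layers are stripped.

```python
def average_hash(pixels):
    """64-bit aHash of an 8x8 grayscale grid: bit i is set iff pixel i >= mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, v in enumerate(flat) if v >= mean)

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

Matching is then a nearest-neighbour lookup over stored fingerprints with a small Hamming threshold, rather than an exact byte comparison.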
2 Aug 2026 is the only real EU AI Act cliff left
The Digital Omnibus delayed the Annex III high-risk obligations to December 2027. Article 50 was not delayed.
Sources: EU AI Act Regulation (EU) 2024/1689 · Second draft Code of Practice on marking AI-generated content (3 March 2026)
MEOK AI Labs · CSOAI LTD · UK Companies House 16939677