TheAlgoBrief

AI Ad Disclosure Laws by Country: EU, US, South Korea 2026

The Short Answer

What countries require AI disclosure in advertising? As of 2026, five major jurisdictions mandate AI disclosure in advertising: the European Union (EU AI Act), the United States (FTC Endorsement Guidelines), New York State (Synthetic Performer Law, effective June 2026), California (SB 942, effective August 2026), and South Korea (Telecoms Business Act, effective early 2026). Requirements range from machine-readable provenance metadata and on-screen labels to full performer consent documentation.

What Countries Require AI Disclosure in Advertising in 2026?

The list of countries requiring AI disclosure in advertising is growing rapidly in 2026. Until recently, "AI advertising disclosure" was a theoretical concern. Today, it is an operational requirement. The wave of legislation that compliance teams have been tracking is now breaking onshore — South Korea's rules are live, New York's clock is counting down to June, and California's SB 942 follows two months later. Meanwhile, the EU AI Act is building to full enforcement in 2027, and the US Federal Trade Commission has already used its existing deceptive-practices authority to pursue undisclosed AI content.

For global agencies managing multi-market campaigns, the risk is not that any one law is hard to follow in isolation — it is that the five frameworks are meaningfully different from each other. A disclosure label that satisfies California's human-readable requirement may still fail South Korea's placement rules. A consent workflow designed for New York's synthetic performer standard will not cover every touchpoint the EU cares about.

This complete 2026 guide explains what each framework requires, compares them side-by-side, and maps how the IAB AI Transparency and Disclosure Framework fits on top of all five.

Enforcement Timeline at a Glance

For agencies distributing ad creative globally, here is the immediate enforcement timeline:

  • South Korea (Telecoms Business Act): live now, effective early 2026
  • US Federal (FTC Endorsement Guidelines): enforceable now, no future deadline
  • New York (Synthetic Performer Law): June 2026
  • California (SB 942): August 1, 2026
  • EU AI Act: deployer obligations August 2026; full content labelling 2027

Jurisdiction Deep-Dives

🇪🇺 European Union — EU AI Act (Phased Rollout)

Key Definition: The EU AI Act is a comprehensive regulatory framework that governs the sale and use of artificial intelligence in the EU, mandating that providers and operators of AI systems label synthetic media so consumers know it is machine-generated.

Core obligation: Article 50 of the EU AI Act requires providers of AI systems that generate synthetic media — including images, video, audio, and text that a "reasonable person" would mistake for authentic human-created content — to ensure that output is marked as AI-generated in a machine-readable format. Operators (i.e., agencies and advertisers deploying those systems) must make that information available to end users in a "clear and distinguishable manner."

Advertising-specific trigger: Any AI-generated or AI-altered image, video, or audio in a paid commercial context is covered. AI-assisted copy (where a human substantially edits AI output) sits in a grey zone, but the IAB Framework's materiality test provides a useful bright-line guide for practical compliance decisions.

Enforcement timeline: GPAI model transparency requirements took effect in August 2025. Obligations for deployers — the agencies and brands configuring AI for their campaigns — take full effect in August 2026. Content labelling for all synthetic media reaches full force in 2027.

Penalty exposure: Up to €15M or 3% of global revenue for violations by operators, whichever is higher. National competent authorities in each member state enforce, meaning enforcement intensity will vary across the 27-country bloc in early years.

Key nuance: The EU Act includes a narrow exception for creative, satirical, or parody content — but it still requires an "appropriate disclosure" even in those cases, unless the disclosure would "interfere with the display or enjoyment" of the content. Do not read this as a blanket art-direction exemption.

🇺🇸 United States (Federal) — FTC Guidelines (Enforceable Now)

Key Definition: The FTC Endorsement Guidelines are federal rules enforcing truth-in-advertising that classify undisclosed or deceptive AI-generated endorsements, reviews, or testimonials as unfair or deceptive acts under Section 5 of the FTC Act.

Core obligation: The FTC does not have a single "AI disclosure law." Instead, it applies its long-standing Section 5 authority over unfair or deceptive acts and practices, combined with its 2023 revised Guides Concerning the Use of Endorsements and Testimonials, to AI-generated content. The FTC's position is clear: if AI is used to generate or fabricate an endorsement, review, or testimonial that a consumer might believe is from a real person, that is a deceptive practice. Disclosure is required.

Advertising-specific trigger: The trigger is consumer deception, not AI use per se. A visually obvious AI illustration does not require a label under FTC theory; a photorealistic AI spokesperson who appears to be a real customer does. The IAB's "materiality" test — would a reasonable consumer act differently if they knew this content was AI-generated? — closely mirrors FTC logic.

Enforcement timeline: No future date to wait for. The FTC has issued warning letters and initiated investigations into AI-generated testimonials. The agency is actively watching.

Penalty exposure: Up to $51,744/day per violation for knowing violations of FTC rules. More practically, consent decrees and remediation orders can impose sweeping operational changes on advertisers.

Key nuance: The FTC's guidance explicitly calls out negative reviews suppressed by AI as a deceptive practice — disclosure is not just about labelling generated content, it's also about ensuring AI curation doesn't create a misleading picture of consumer sentiment.

🗽 New York State — Synthetic Performer Law (Effective June 2026)

Key Definition: The NY Synthetic Performer Law is a state civil rights statute that prohibits the commercial use of a living or deceased person's voice, face, or performance replicated by AI without their explicit written consent.

Core obligation: New York's statute targets a specific and high-value use case: the AI-generated replication of a real person's voice, face, likeness, or performance in a commercial advertisement. Any advertiser or agency doing this without explicit written consent from the living individual — or from the estate of a deceased performer — is in violation. On-screen or audible disclosure is also required.

Advertising-specific trigger: The law is narrowly scoped to commercial advertising. It does not apply to news, editorial, satire, or political speech. If your ad features an AI-generated voice that sounds like a recognisable celebrity, a reanimated historical figure, or a synthetic replica of a real person — and it runs in New York — you need consent documentation before June 2026.

Enforcement timeline: Effective June 2026. New York's Attorney General has enforcement authority, and private right of action is available to affected individuals or their estates.

Penalty exposure: $1,000 to $5,000 per violation. Given that a single national ad campaign can generate thousands of "impressions" in New York, this can compound rapidly. Class actions are a real risk given the private right of action.

Key nuance: The consent requirement extends to deceased performers for 40 years post-death, and the estate right is descendible. Agencies repurposing archival audio or building AI voice models from licensed recordings of deceased artists need to verify estate consent is in scope for the NY statute specifically.

🌴 California — SB 942 AI Transparency Act (Effective Aug 1, 2026)

Key Definition: California SB 942 (The AI Transparency Act) is a state-level AI regulation that requires large AI providers to embed detectable provenance signals (C2PA) in AI-generated content and offer a free AI detection tool to users.

Core obligation: California SB 942 targets the AI providers (the companies making the tools) more than the advertisers themselves — but its downstream effects are significant for ad teams. Any covered AI provider must: (1) embed detectable watermarks or provenance metadata in AI-generated content, (2) offer a free tool allowing users to detect AI-generated content, and (3) retain provenance records for five years. Advertisers distributing AI creative through covered platforms inherit the labelling obligations.

Advertising-specific trigger: Any AI-generated image, video, or audio distributed commercially in California through a covered platform. "Covered platform" means a provider deploying a generative AI system with over one million monthly California users — which includes virtually every major ad-tech platform.

Enforcement timeline: August 1, 2026. The California Attorney General enforces.

Penalty exposure: Up to $5,000 per unlabelled piece of content. Because SB 942 also mandates machine-readable provenance (C2PA-compatible), the penalty clock starts when content is distributed without valid metadata — not just when a consumer complains.

Key nuance: SB 942 is the only US state law that explicitly requires machine-readable provenance, not just a human-readable label. This makes it the US law most technically aligned with the IAB Framework's C2PA assertion requirements. If you build a C2PA-compliant workflow for California, you have a head start on IAB compliance for every other market.
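To make the machine-readable requirement concrete, the sketch below shows the kind of data a C2PA-aligned workflow attaches to AI creative. The IPTC digitalSourceType URI is a real controlled-vocabulary term (`trainedAlgorithmicMedia`); the dictionary shape and helper function are our simplification for illustration, not the full C2PA manifest schema.

```python
# Simplified sketch of a C2PA-style assertion marking ad creative as
# AI-generated. The IPTC digitalSourceType URI is a real controlled-
# vocabulary term; the surrounding structure is illustrative only.
ai_generated_assertion = {
    "label": "stds.iptc",
    "data": {
        "Iptc4xmpExt:DigitalSourceType":
            "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    },
}

def is_marked_ai_generated(assertion: dict) -> bool:
    """Check whether an assertion carries an AI digitalSourceType term."""
    source_type = assertion.get("data", {}).get(
        "Iptc4xmpExt:DigitalSourceType", "")
    return "trainedAlgorithmicMedia" in source_type

print(is_marked_ai_generated(ai_generated_assertion))  # → True
```

A real pipeline would embed and cryptographically sign this assertion inside a C2PA manifest at export time, rather than carry it as a loose dictionary.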

🇰🇷 South Korea — Telecoms Business Act (Live Now)

Key Definition: South Korea’s Telecoms Business Act is a national telecommunications framework that mandates broadcasters and platform operators to attach visible disclosure labels to AI-generated or AI-altered advertising content throughout its entire duration.

Core obligation: South Korea moved faster than any Western regulator. Amendments to the Telecoms Business Act — effective early 2026 — require broadcasters, platform operators, and digital service providers to attach clear, visible disclosure labels to AI-generated or AI-materially-altered advertising content. The disclosure must appear throughout the duration of the ad, not only at the beginning.

Advertising-specific trigger: All AI-generated or AI-altered commercial content distributed via broadcast or digital channels in South Korea. Platform operators (not just advertisers) carry the compliance duty, which means demand-side partners serving Korean inventory are already in scope.

Enforcement timeline: Already in effect. The Korea Communications Commission (KCC) oversees enforcement, and fines have already been issued to broadcasters in test cases.

Penalty exposure: Up to KRW 30M (~$22K USD) per violation, with potential licence suspension for persistent violations. The platform-operator liability model means media agencies buying South Korean inventory face indirect exposure if their supply-side partners are fined and seek indemnification.

Key nuance: South Korea's requirement that the disclosure label remain visible throughout the entire ad — not just in a brief end-card disclaimer — is stricter than any US rule and stricter than current EU guidance. If your global creative template uses a 3-second end-card disclosure, it will fail the Korean standard.
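The duration rule lends itself to an automated creative QA check. A minimal sketch, assuming a simple timing model (the function and field names are ours, not the statute's):

```python
# Hypothetical QA check against South Korea's throughout-the-ad rule:
# the disclosure label must be visible for the ad's entire duration.
def korea_label_ok(ad_seconds: float,
                   label_start: float,
                   label_end: float) -> bool:
    """True if the label covers the full ad, from 0s to the final frame."""
    return label_start <= 0.0 and label_end >= ad_seconds

print(korea_label_ok(15.0, 0.0, 15.0))   # full-duration label → True
print(korea_label_ok(15.0, 12.0, 15.0))  # 3-second end-card → False
```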

Side-by-Side Comparison: All Five Jurisdictions

The table below is designed for structured comparison. Use this alongside our compliance check tools designed for ad-tech teams running cross-border campaigns.

| Variable | 🇪🇺 EU AI Act | 🇺🇸 FTC (Federal) | 🗽 New York | 🌴 California SB 942 | 🇰🇷 South Korea |
|---|---|---|---|---|---|
| Status | Phased | Live | Jun 2026 | Aug 2026 | Live |
| Trigger | Synthetic media mistaken for real | Deceptive endorsement | AI replication of real person in ad | AI content on covered platform | AI-generated ad content |
| Human-readable label | ✅ Yes | ✅ Yes (contextual) | ✅ Yes (+ consent) | ✅ Yes | ✅ Yes — throughout ad |
| Machine-readable / C2PA | ✅ Yes | ❌ Not specified | ❌ Not specified | ✅ Explicit C2PA | ⚠️ Not specified |
| Consent requirement? | ❌ No | ❌ No | ✅ Written consent (living & dead) | ❌ No | ❌ No |
| Who is liable? | Providers + operators | Advertiser / agency | Advertiser / agency | AI providers + distributors | Platforms + broadcasters |
| Max penalty | €15M or 3% revenue | $51,744/day per violation | $1,000 to $5,000 per violation | $5,000 per piece | KRW 30M (~$22K USD) |
| Private right of action? | ❌ Regulator only | ❌ Regulator only | ✅ Yes | ✅ Yes (civil & AG) | ❌ Regulator only |
| IAB Framework alignment | High — maps to Tier 2 | Moderate — materiality test | Partial — consent is external | Very high — C2PA aligns | Moderate — label standard |

Note: This table reflects the best available interpretation of each framework as of March 2026. It is not legal advice. Always consult qualified counsel for jurisdiction-specific compliance decisions.
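As a rough illustration of how a team might operationalise this comparison, the sketch below encodes each framework's label, provenance, and consent requirements as booleans and reports per-market gaps for a campaign. The field names and yes/no simplifications are ours, not statutory language.

```python
# Illustrative encoding of the comparison table above. Field names and
# boolean simplifications are our own, not statutory language.
RULES = {
    "EU":          {"human_label": True,  "machine_readable": True,  "consent": False},
    "US-FTC":      {"human_label": True,  "machine_readable": False, "consent": False},
    "NY":          {"human_label": True,  "machine_readable": False, "consent": True},
    "CA-SB942":    {"human_label": True,  "machine_readable": True,  "consent": False},
    "South Korea": {"human_label": True,  "machine_readable": False, "consent": False},
}

def missing_controls(campaign: dict, markets: list) -> dict:
    """Return, per market, which required controls the campaign lacks."""
    gaps = {}
    for market in markets:
        missing = [control for control, required in RULES[market].items()
                   if required and not campaign.get(control, False)]
        if missing:
            gaps[market] = missing
    return gaps

campaign = {"human_label": True, "machine_readable": True, "consent": False}
print(missing_controls(campaign, ["EU", "NY", "CA-SB942"]))
# → {'NY': ['consent']}
```

A labelled, C2PA-tagged campaign clears the EU and California checks here but still flags New York, matching the article's point that consent is a legal workflow no label can replace.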

🛠 Free Tool

Multi-Jurisdiction Compliance Scorer

Enter your campaign details and get a pass/fail score across all five jurisdictions instantly. No account required.

Check My Campaign → Multi-Jurisdiction Compliance Scorer: check compliance for your AI ad campaign.
Audit C2PA Metadata → C2PA IAB Compliance Auditor: audit your creative files for required C2PA metadata.

Where the IAB Framework Fits

The IAB AI Transparency and Disclosure Framework is not a law — it is the advertising industry's self-regulatory response that sits on top of all five legal frameworks above. Think of it as the compliance floor that is consistent across markets, even when the laws themselves are not.

The Framework introduces three tools that are relevant to multi-jurisdiction compliance:

  1. The Materiality Test. The IAB's central question — "would a reasonable consumer make a different decision if they knew this content was AI-generated?" — provides a single bright-line that approximates the trigger conditions in all five jurisdictions.
  2. C2PA Assertions. The Framework mandates embedding IPTC digitalSourceType vocabulary assertions in any AI-generated creative. This directly satisfies the machine-readable requirement in the EU AI Act and California SB 942.
  3. Tiered Disclosure Labels. The IAB specifies label language, placement, and duration standards. Building your creative templates to the IAB's Tier 2 label standard effectively makes them South Korea-compliant by default.
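The three tools above can be composed into a simple workflow gate. A hedged sketch with our own function and field names, treating the materiality answer as a human judgment supplied as input rather than something code can decide:

```python
# Illustrative workflow gate built on the IAB tools described above.
# The tier number follows the article; the logic is our assumption,
# not the Framework's normative text.
def materiality_test(consumer_would_decide_differently: bool) -> bool:
    """The IAB's central question, expressed as a boolean gate."""
    return consumer_would_decide_differently

def label_spec(materially_ai: bool):
    """Pick a disclosure spec, defaulting to the strictest placement."""
    if not materially_ai:
        return None
    # A Tier 2 label shown for the full ad duration satisfies South
    # Korea's throughout-the-ad rule, and thereby the laxer markets too.
    return {"tier": 2, "placement": "on-screen", "duration": "entire_ad"}

print(label_spec(materiality_test(True)))
print(label_spec(materiality_test(False)))  # → None
```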
⚡ Practical Takeaway

If you implement the IAB Framework correctly — materiality test, C2PA assertions, Tier 2 label placement — you satisfy the technical disclosure requirements of all five jurisdictions. The one area where additional work is always required is New York's performer consent documentation, which no technical framework can substitute for. That is a legal workflow, not a metadata problem.

Conclusion: The Bottom Line for Global Campaigns

The patchwork nature of AI advertising disclosure law in 2026 creates real operational complexity — but it also creates a clear compliance architecture if you know where to look. South Korea demands action immediately. New York's June deadline is urgent for any campaign using AI-generated voices or likenesses. California's August deadline is technically demanding because of C2PA. The EU's full force arrives in 2027, but proactive C2PA implementation now is the right infrastructure investment.

The practical playbook: build your creative workflows to the IAB Framework standard, implement C2PA metadata at the tooling level, adopt the IAB's Tier 2 label placement rules as your global default, and layer the New York consent documentation workflow on top for any campaign using identifiable voice or likeness. That combination covers all five frameworks currently in force or coming into force in 2026.

TL;DR — Key Takeaways
  • South Korea's disclosure mandate is already live. If you're buying Korean inventory, your AI creatives need labels now.
  • New York penalises synthetic performer use without written consent with civil liability up to $5,000 per violation, effective June 2026.
  • California SB 942 adds machine-readable C2PA provenance requirements on top of a human-readable label from August 2026.
  • The EU AI Act's content labelling obligation hits full force in 2027, but GPAI providers are already under transparency duties.
  • The FTC treats undisclosed AI-generated endorsements as deceptive trade practices — no deadline, already enforceable.

Frequently Asked Questions

What countries require AI disclosure in advertising in 2026?
As of 2026, five major jurisdictions mandate some form of AI disclosure in advertising: the EU (AI Act), the US at federal level (FTC guidelines), New York State (Synthetic Performer Law, June 2026), California (SB 942, August 2026), and South Korea (Telecoms Business Act, live now). South Korea is the only jurisdiction with a fully active requirement currently in force for digital advertising. The EU AI Act reaches full force for content labelling in 2027.

Does the EU AI Act require disclosure for all AI-generated ad content?
Not all — the EU AI Act focuses on synthetic media that a reasonable person might mistake for authentic human-created content. AI-assisted creative where a human substantially edits and directs the output sits in a grey area. The IAB Framework's materiality test is a practical proxy: if the AI contribution is material enough that a consumer would make a different decision knowing it, disclosure is required. The EU enforcement timeline is phased — deployer obligations from August 2026, full content labelling from 2027.

What does South Korea's AI ad disclosure law require?
South Korea's amended Telecoms Business Act requires that AI-generated or AI-materially-altered advertising carry a visible disclosure label throughout the entire duration of the ad — not just at the start or end. The obligation falls primarily on platform operators and broadcasters, making media buyers and DSPs responsible for ensuring compliant inventory. The law is already in effect as of early 2026.

Can a single disclosure label satisfy all five jurisdictions?
A single label can get you very close, but not all the way. The IAB's Tier 2 disclosure label — displayed throughout the creative, with machine-readable C2PA metadata embedded — satisfies the technical requirements of all five. The gap is New York's performer consent documentation, which is a legal requirement that no label can substitute for. For campaigns that do not use identifiable real-person likenesses or voices, a well-implemented IAB label covers the disclosure obligation in all five markets.

What does California SB 942 require for AI advertising?
California SB 942 (the AI Transparency Act) requires large AI providers to embed detectable provenance signals (C2PA-compatible watermarks or metadata) in AI-generated content and to offer a free detection tool. For advertisers, any AI-generated creative distributed in California through a covered platform must carry both machine-readable provenance and a human-readable label. The law takes effect August 1, 2026 and is the most technically prescriptive US state AI disclosure requirement.

What makes New York's Synthetic Performer Law different?
It is the only law among the five that requires affirmative written consent, not just a disclosure label. Every other framework says 'tell consumers this is AI-generated.' New York says 'get permission before you generate it in a way that uses a real person's likeness or voice.' It also creates a private right of action — individuals and estates can sue, not just regulators — and extends posthumous protection for 40 years. This makes it the highest compliance-risk law for brands working with AI voice or likeness tools.

What is the IAB AI Transparency and Disclosure Framework?
The IAB AI Transparency and Disclosure Framework is an industry self-regulatory standard that sits on top of all legal frameworks, providing a universal compliance baseline. Tools like its Materiality Test, C2PA Assertions, and Tiered Disclosure Labels help advertisers satisfy common technical requirements globally.

What are the penalties for non-compliance?
Penalties vary by jurisdiction. In South Korea, administrative fines run up to KRW 30 million (~$22K USD) per violation. The FTC can pursue up to $51,744/day per violation. California SB 942 and New York fine up to $5,000 per violation. The EU AI Act imposes the highest potential limit: up to €15M or 3% global revenue for operators.