Does My AI Ad Need a Disclosure Label?
Under the IAB AI Transparency and Disclosure Framework (January 2026), an AI ad requires a disclosure label only when the AI involvement is material — meaning a reasonable consumer could be misled about the authenticity, identity, or reality of what they are seeing or hearing. The test is consumer impact, not AI usage volume. Realistic AI-generated images, video, voice clones of real people, and synthetic personas in focal roles require disclosure. Routine photo editing, generic AI voiceovers, AI-generated copy, and clearly stylized content generally do not. When disclosure is required, the IAB specifies exact label text: "AI-generated image," "AI-generated video," "AI-generated voice," "AI-generated person," or "AI-powered assistant" — each placed visibly near the relevant content.
The Question Every Creative Team Is Asking Right Now
AI is now embedded in nearly every stage of ad production — generating lifestyle imagery in Midjourney, writing headline variants in ChatGPT, cloning a voice in ElevenLabs. Most creative teams using these tools have no consistent policy on when, or whether, to put a label on the finished ad.
That ambiguity just got a lot more expensive. The New York Synthetic Performer Law — carrying civil penalties of $1,000–$5,000 per violation — takes effect in June 2026. The EU AI Act's labelling provisions are phasing in. And platforms including Meta, TikTok, and YouTube already auto-apply AI labels under their own detection logic, independent of advertiser intent.
The IAB's new framework gives the industry a baseline to work from. It won't resolve every edge case — but it does establish a decision logic you can operationalize.
The Core Principle: Materiality, Not Volume
The most important thing to understand about the IAB framework is what it doesn't say. It does not say: "If you used an AI tool in production, you must disclose." That standard would be unworkable — AI is now in Photoshop, in Premiere, in every spell checker and caption generator on the market.
"Materiality" refers to AI use that could mislead a reasonable consumer about what is authentic, factual, or human-created in an advertisement. The threshold is consumer impact, not the volume of AI tools used.
In other words: the question is not "did we use AI?" — it's "could a reasonable person watching this be misled about the nature of what they're seeing or hearing?"
The framework applies this through three evaluative criteria:
1. **Deception Potential.** Could a reasonable consumer be misled about the nature, origin, or authenticity of what they're viewing? If it's clearly fantastical or illustrated, people know it's not real photography. Yes → continue to check 2; no → no disclosure required.
2. **Material Impact.** Does the AI involvement affect product representation, performance claims, or social proof in ways that could influence purchase decisions? Retouching a shadow is not material. Making a product look slimmer than it is, or using a fake person to endorse it, is. Yes → continue to check 3; no → no disclosure required.
3. **Expectation Alignment.** Does the content fall outside what consumers would reasonably expect from standard creative production? Colour grading is routine. AI-generated people who don't exist, or deepfakes of real individuals, are not. Yes → all three checks passed, disclosure required; no → no disclosure required.
All three criteria point at the same target: the consumer's ability to make an informed assessment. Disclosure is required when omitting it would deceive a reasonable consumer about something material.
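The three checks above reduce to a simple conjunction: disclosure is required only when every check flags the creative. A minimal sketch in Python — the function and parameter names are ours, not part of any official IAB tooling:

```python
def needs_disclosure(deception_potential: bool,
                     material_impact: bool,
                     expectation_mismatch: bool) -> bool:
    """Apply the IAB three-check materiality test (illustrative sketch).

    deception_potential  -- could a reasonable consumer be misled about the
                            nature, origin, or authenticity of the content?
    material_impact      -- does the AI involvement affect product
                            representation, claims, or social proof?
    expectation_mismatch -- does the content fall outside standard
                            creative-production expectations?

    Disclosure is required only when all three checks come back "yes";
    failing any single check means no disclosure is required.
    """
    return deception_potential and material_impact and expectation_mismatch
```

A photorealistic AI persona endorsing a product would pass all three checks (`needs_disclosure(True, True, True)`), while routine colour grading fails the expectation check and needs no label.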
When AI Ad Disclosure Is Required: Rules by Content Type
The IAB framework applies the materiality test differently across five content types. Each has specific decision branches, specific exemptions, and specific label texts.
AI-Generated Images
The key variable is photorealism — not the tool used, not whether the image started from a real photograph.
| Scenario | Required? | IAB Label |
|---|---|---|
| Fully AI-generated, realistic | ✅ Yes | "AI-generated image" |
| Realistic photo, routine editing only | ❌ No | — |
| Background replaced, product unchanged | ❌ No | — |
| Clearly stylized or illustrated | ❌ No | — |
A photorealistic AI render of a coffee cup on a kitchen counter requires disclosure. A clearly illustrated version does not. The trickiest edge case is substantive editing of real photography: AI changes that alter the appearance of the product itself or of a real person in the ad require disclosure. Background-only replacements are generally exempt.
AI-Generated Video
Video rules follow image logic but add a persistence requirement: when a label is required, it must remain visible for the entire video — not just the opening frame.
| Scenario | Required? | IAB Label |
|---|---|---|
| Fully AI-generated, realistic | ✅ Yes | "AI-generated video" |
| Standard post-production on real footage | ❌ No | — |
| Clearly animated or stylized | ❌ No | — |
AI upscaling, frame interpolation, and noise reduction are all exempt — these are established post-production techniques a consumer neither expects to see disclosed nor would find material.
AI-Generated Audio and Voice
Audio rules are built around identity and authenticity, and they carry the sharpest legal exposure of any content type for U.S. advertisers. The framework's audio provisions map directly onto the New York Synthetic Performer Law.
| Scenario | Required? | IAB Label |
|---|---|---|
| Generic AI voice, no specific identity | ❌ No | — |
| Living person, authorized scripted commercial | ❌ No | — |
| Living person, fabricated statements about real events | ✅ Yes | "AI-generated voice" |
| Deceased person's voice (any usage) | ✅ Yes | "AI-generated voice" |
The key distinction: an authorized AI voice clone of a living athlete reading a scripted endorsement is treated like a standard voiceover performance — no disclosure required. The same voice clone being made to describe a race they never competed in requires disclosure. The risk isn't the synthetic voice; it's the fabricated statement.
The New York Synthetic Performer Law takes effect in June 2026 and carries civil penalties of $1,000–$5,000 per occurrence for undisclosed use of AI-synthesized voices or likenesses of real individuals — living or deceased. Any agency running audio campaigns in New York markets needs a disclosure review in their production workflow before that date.
Synthetic Influencers and AI Personas
This is the highest-risk category for regulatory fines across both NY and California, and the one where brands are most likely to have compliance gaps they don't know about.
| Scenario | Required? | IAB Label |
|---|---|---|
| Digital twin of deceased person (any role) | ✅ Always | "AI-generated likeness" |
| Living person twin, fabricated events | ✅ Yes | "AI-generated likeness" |
| Photorealistic persona, primary focal role | ✅ Yes | "AI-generated person" |
| AI chatbot simulating human interaction | ✅ Yes | "AI-powered assistant" |
| Authorized digital twin, standard scripted ad | ❌ No | — |
| Cartoon mascots, clearly animated characters | ❌ No | — |
| AI-generated people in incidental backgrounds | ❌ No | — |
The deceased performer rule is a hard line with no exceptions — estate authorization, payment, or respectful intent does not remove the disclosure requirement. For AI chatbots, disclosure must appear at the moment the consumer begins interacting, not in a post-interaction screen.
AI-Generated Text and Copy
This one is unambiguous: the IAB framework does not require consumer-facing disclosure for AI-generated text. Headlines, slogans, product descriptions, email subject lines, social captions — none of these trigger a disclosure requirement. The principle that governs text is truth in claims, not authorship transparency: advertisers remain responsible for the accuracy of every statement, AI-written or not.
Quick Reference Table
| Content Type | AI Usage | Disclosure Required? | IAB Label |
|---|---|---|---|
| Image | Fully AI-generated, realistic | ✅ Yes | "AI-generated image" |
| Image | Routine editing only | ❌ No | — |
| Image | Background replaced, product unchanged | ❌ No | — |
| Image | Clearly stylized / illustrated | ❌ No | — |
| Video | Fully AI-generated, realistic | ✅ Yes | "AI-generated video" |
| Video | Standard post-production | ❌ No | — |
| Video | Clearly animated | ❌ No | — |
| Audio | Generic AI voice | ❌ No | — |
| Audio | Living person, authorized scripted | ❌ No | — |
| Audio | Living person, fabricated statements | ✅ Yes | "AI-generated voice" |
| Audio | Deceased person's voice | ✅ Yes | "AI-generated voice" |
| Synthetic Person | Photorealistic, primary role | ✅ Yes | "AI-generated person" |
| Synthetic Person | Digital twin, deceased | ✅ Always | "AI-generated likeness" |
| Synthetic Person | Living twin, authorized standard ad | ❌ No | — |
| Synthetic Person | Living twin, fabricated events | ✅ Yes | "AI-generated likeness" |
| Synthetic Person | Cartoon / animated | ❌ No | — |
| Synthetic Person | AI chatbot in ad | ✅ Yes | "AI-powered assistant" |
| Text & Copy | Any AI-generated copy | ❌ No | — |
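For teams building a pre-flight check into their production workflow, the quick-reference table above can be encoded as a straightforward lookup. The content-type and scenario keys below are our own shorthand, not official IAB identifiers:

```python
from typing import Optional

# Encodes the quick-reference table above. A value of None means
# no disclosure is required; otherwise the value is the IAB label text.
# Keys are illustrative shorthand, not official IAB identifiers.
DISCLOSURE_RULES: dict[tuple[str, str], Optional[str]] = {
    ("image", "fully_ai_realistic"):         "AI-generated image",
    ("image", "routine_editing"):            None,
    ("image", "background_replaced"):        None,
    ("image", "stylized"):                   None,
    ("video", "fully_ai_realistic"):         "AI-generated video",
    ("video", "standard_post_production"):   None,
    ("video", "clearly_animated"):           None,
    ("audio", "generic_ai_voice"):           None,
    ("audio", "living_authorized_scripted"): None,
    ("audio", "living_fabricated"):          "AI-generated voice",
    ("audio", "deceased_voice"):             "AI-generated voice",
    ("person", "photorealistic_focal"):      "AI-generated person",
    ("person", "digital_twin_deceased"):     "AI-generated likeness",
    ("person", "living_twin_standard_ad"):   None,
    ("person", "living_twin_fabricated"):    "AI-generated likeness",
    ("person", "cartoon_mascot"):            None,
    ("person", "ai_chatbot"):                "AI-powered assistant",
    ("text", "any_ai_copy"):                 None,
}

def required_label(content_type: str, scenario: str) -> Optional[str]:
    """Return the IAB label text, or None when no disclosure is required."""
    return DISCLOSURE_RULES[(content_type, scenario)]
```

For example, `required_label("audio", "deceased_voice")` returns `"AI-generated voice"`, while `required_label("text", "any_ai_copy")` returns `None`.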
What the Label Actually Has to Look Like
Getting the disclosure requirement right is only half the problem. The label itself has to meet the IAB's format standards — or it doesn't count.
The primary method is a text label using the specific IAB strings above (not paraphrases like "powered by technology"). The label must appear near the content, remain visible for the consumer's full exposure, use plain language, and provide sufficient color contrast to be legible.
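Because paraphrases don't count, a label check can be as strict as exact string matching against the texts the framework specifies. A minimal sketch — the function name is ours, and the set reflects the label strings listed in this article:

```python
# The exact IAB label strings used throughout this article.
IAB_LABELS = {
    "AI-generated image",
    "AI-generated video",
    "AI-generated voice",
    "AI-generated person",
    "AI-generated likeness",
    "AI-powered assistant",
}

def is_valid_iab_label(label: str) -> bool:
    """Accept only the exact IAB strings; paraphrases like
    'powered by technology' fail the check."""
    return label in IAB_LABELS
```

`is_valid_iab_label("AI-generated voice")` passes; `is_valid_iab_label("powered by technology")` does not.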
When text labels would materially compromise the creative — constrained banner units, small format ads — the framework allows alternatives: platform visual indicators (the sparkle icon ✨, the C2PA Content Credentials icon, or platform-applied badges), hover/tap information icons in digital environments, or adjacent placement next to rather than on the creative.
Meta, TikTok, and YouTube may automatically apply their own AI labels based on internal detection — independent of whether you applied one. The IAB framework governs advertiser decision-making; platform labels are an additional layer. Advertisers need to comply with both.
Is the IAB Framework Actually Binding?
The framework itself is voluntary industry guidance — no regulatory body enforces it directly. But the framework was designed to align with the laws that are binding:
The New York Synthetic Performer Law (effective June 2026) requires conspicuous disclosure of AI-generated performers and carries civil penalties of $1,000–$5,000 per occurrence. California SB 942 (August 2026) covers platforms with 1M+ users and requires embedded metadata disclosures. The EU AI Act Article 50 requires AI labelling as a consumer protection baseline, with full force in 2027. South Korea's Telecommunications Act already mandates blanket labelling of all AI-generated photos and videos in advertisements — no materiality test, no exemptions.
Compliance with the IAB framework doesn't guarantee compliance with all of these. But it gets agencies to a documented, defensible standard for most US campaigns — and provides the foundation for layering in jurisdiction-specific requirements. If your workflow involves C2PA metadata, the C2PA IAB Compliance Auditor checks your asset files for missing and inconsistent fields.
The Implied-Truth Effect: Why Getting This Wrong Cuts Both Ways
There's a counterintuitive risk worth flagging. Advertisers who apply disclosure labels responsibly can inadvertently create a disadvantage relative to those who don't: consumers tend to perceive unlabeled content as more credible simply because other content carries a disclosure. The IAB framework calls this the implied-truth effect.
This is precisely why the industry needs a consistent standard rather than voluntary opt-in. Inconsistent disclosure doesn't protect consumers — it rewards non-disclosure. The framework's goal is to establish a baseline where labeled content earns trust, and unlabeled content that should have been labeled faces genuine legal and reputational risk.
Related Reading
IAB AI Transparency & Disclosure Framework: Complete Guide 2026
The hub guide covering all five content types, the C2PA metadata standard, and how the framework maps to every major jurisdiction. Start here for full context.
NY Synthetic Performer Law: What Ad Agencies Need to Know Before June 2026
Detailed breakdown of the NY law, how fines are calculated per occurrence, and what a compliant production workflow looks like before the June deadline.
AI Ad Disclosure Laws by Country: EU, US, South Korea 2026
Side-by-side comparison of all major disclosure laws, formatted for compliance teams who need a quick reference across jurisdictions.
What Is C2PA and Why Does It Matter for AI Advertising?
A technical deep-dive on content provenance, the IAB custom assertions, and how ad tech infrastructure is being built around the C2PA standard.
AI-Generated Voice in Ads: When Is Disclosure Required?
Every audio edge case: generic voices, real persons (living and deceased), voice clones, podcast reads, and radio spots — with the 60-second rule explained.