TheAlgoBrief

Does My AI Ad Need a Disclosure Label?

8 min read · TheAlgoBrief Editorial Team

Tags: IAB Framework, AI Advertising, AI Disclosure, Ad Tech Compliance, Materiality Test, AI-Generated Image, Synthetic Voice, NY Synthetic Performer Law
The Short Answer

Under the IAB AI Transparency and Disclosure Framework (January 2026), an AI ad requires a disclosure label only when the AI involvement is material — meaning a reasonable consumer could be misled about the authenticity, identity, or reality of what they are seeing or hearing. The test is consumer impact, not AI usage volume. Realistic AI-generated images, video, voice clones of real people, and synthetic personas in focal roles require disclosure. Routine photo editing, generic AI voiceovers, AI-generated copy, and clearly stylized content generally do not. When disclosure is required, the IAB specifies exact label text: "AI-generated image," "AI-generated video," "AI-generated voice," "AI-generated person," "AI-generated likeness," or "AI-powered assistant" — each placed visibly near the relevant content.


The Question Every Creative Team Is Asking Right Now

AI is now embedded in nearly every stage of ad production — generating lifestyle imagery in Midjourney, writing headline variants in ChatGPT, cloning a voice in ElevenLabs. Most creative teams using these tools have no consistent policy on when, or whether, to put a label on the finished ad.

That ambiguity just got a lot more expensive. The New York Synthetic Performer Law — carrying civil penalties of $1,000–$5,000 per violation — takes effect in June 2026. The EU AI Act's labelling provisions are phasing in. And platforms including Meta, TikTok, and YouTube already auto-apply AI labels under their own detection logic, independent of advertiser intent.

The IAB's new framework gives the industry a baseline to work from. It won't resolve every edge case — but it does establish a decision logic you can operationalize.


The Core Principle: Materiality, Not Volume

The most important thing to understand about the IAB framework is what it doesn't say. It does not say: "If you used an AI tool in production, you must disclose." That standard would be unworkable — AI is now in Photoshop, in Premiere, in every spell checker and caption generator on the market.

The IAB Threshold

"Materiality" refers to AI use that could mislead a reasonable consumer about what is authentic, factual, or human-created in an advertisement. The threshold is consumer impact, not the volume of AI tools used.

In other words: the question is not "did we use AI?" — it's "could a reasonable person watching this be misled about the nature of what they're seeing or hearing?"

The framework applies this through three evaluative criteria:

1. Deception Potential

Could a reasonable consumer be misled about the nature, origin, or authenticity of what they're viewing? If it's clearly fantastical or illustrated, people know it's not real photography.

If yes: continue to Check 2. If no: no disclosure required.

2. Material Impact

Does the AI involvement affect product representation, performance claims, or social proof in ways that could influence purchase decisions? Retouching a shadow is not material. Making a product look slimmer than it is, or using a fake person to endorse it, is.

If yes: continue to Check 3. If no: no disclosure required.

3. Expectation Alignment

Does the content fall outside what consumers would reasonably expect from standard creative production? Colour grading is routine. AI-generated people who don't exist, or deepfakes of real individuals, are not.

If yes: all three checks passed; disclosure required. If no: no disclosure required.

All three criteria point at the same target: the consumer's ability to make an informed assessment. Disclosure is required when omitting it would deceive a reasonable consumer about something material.
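
The three-step flow above can be sketched as a short decision function. This is an illustrative encoding of the branching logic only — the function name, parameter names, and boolean inputs are our own, and each input still represents a human judgment call, not something a script can determine:

```python
def needs_disclosure(deception_potential: bool,
                     material_impact: bool,
                     expectation_mismatch: bool) -> bool:
    """Apply the IAB's three materiality checks in sequence.

    A "no" at any step short-circuits to "no disclosure required";
    disclosure is required only when all three checks come back "yes".
    """
    if not deception_potential:   # Check 1: could a reasonable consumer be misled?
        return False
    if not material_impact:       # Check 2: could it influence purchase decisions?
        return False
    return expectation_mismatch   # Check 3: outside normal production expectations?


# Photorealistic AI persona endorsing a product: yes / yes / yes
print(needs_disclosure(True, True, True))     # True -> disclosure required

# Routine colour grading: fails Check 1 immediately
print(needs_disclosure(False, False, False))  # False -> no disclosure
```

Note that the early exit only affects which check you stop at; the verdict is the same whichever "no" comes first.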

Free tool (60 seconds): the IAB Materiality Checker. Answer five questions about your creative and get an instant, rule-cited verdict — with the exact IAB label text and placement guidance.

When AI Ad Disclosure Is Required: Rules by Content Type

The IAB framework applies the materiality test differently across five content types. Each has specific decision branches, specific exemptions, and specific label texts.

AI-Generated Images

The key variable is photorealism — not the tool used, not whether the image started from a real photograph.

| Scenario | Disclosure Required? | IAB Label |
|---|---|---|
| Fully AI-generated, realistic | ✅ Yes | "AI-generated image" |
| Realistic photo, routine editing only | ❌ No | — |
| Background replaced, product unchanged | ❌ No | — |
| Clearly stylized or illustrated | ❌ No | — |

A photorealistic AI render of a coffee cup on a kitchen counter requires disclosure. A clearly illustrated version does not. The trickiest edge case is substantive editing of real photography: AI changes that alter the appearance of the product itself or of a real person in the ad require disclosure. Background-only replacements are generally exempt.

AI-Generated Video

Video rules follow image logic but add a persistence requirement: when a label is required, it must remain visible for the entire video — not just the opening frame.

| Scenario | Disclosure Required? | IAB Label |
|---|---|---|
| Fully AI-generated, realistic | ✅ Yes | "AI-generated video" |
| Standard post-production on real footage | ❌ No | — |
| Clearly animated or stylized | ❌ No | — |

AI upscaling, frame interpolation, and noise reduction are all exempt — these are established post-production techniques a consumer neither expects to see disclosed nor would find material.

AI-Generated Audio and Voice

Audio rules are built around identity and authenticity, and they carry the sharpest legal exposure of any content type for U.S. advertisers. The framework's audio provisions map directly onto the New York Synthetic Performer Law.

| Scenario | Disclosure Required? | IAB Label |
|---|---|---|
| Generic AI voice, no specific identity | ❌ No | — |
| Living person, authorized scripted commercial | ❌ No | — |
| Living person, fabricated statements about real events | ✅ Yes | "AI-generated voice" |
| Deceased person's voice (any usage) | ✅ Yes | "AI-generated voice" |

The key distinction: an authorized AI voice clone of a living athlete reading a scripted endorsement is treated like a standard voiceover performance — no disclosure required. The same voice clone being made to describe a race they never competed in requires disclosure. The risk isn't the synthetic voice; it's the fabricated statement.

June 2026 Deadline — New York

The New York Synthetic Performer Law takes effect in June 2026 and carries civil penalties of $1,000–$5,000 per occurrence for undisclosed use of AI-synthesized voices or likenesses of real individuals — living or deceased. Any agency running audio campaigns in New York markets needs a disclosure review in their production workflow before that date.

Urgent — June 2026 deadline. Related: "NY Synthetic Performer Law: What Ad Agencies Need to Know Before June 2026" — a full breakdown of what the NY law specifically requires, how fines are calculated per occurrence, and what a compliant audio workflow looks like.

Synthetic Influencers and AI Personas

This is the highest-risk category for regulatory fines across both NY and California, and the one where brands are most likely to have compliance gaps they don't know about.

| Scenario | Disclosure Required? | IAB Label |
|---|---|---|
| Digital twin of deceased person (any role) | ✅ Always | "AI-generated likeness" |
| Living person twin, fabricated events | ✅ Yes | "AI-generated likeness" |
| Photorealistic persona, primary focal role | ✅ Yes | "AI-generated person" |
| AI chatbot simulating human interaction | ✅ Yes | "AI-powered assistant" |
| Authorized digital twin, standard scripted ad | ❌ No | — |
| Cartoon mascots, clearly animated characters | ❌ No | — |
| AI-generated people in incidental backgrounds | ❌ No | — |

The deceased performer rule is a hard line with no exceptions — estate authorization, payment, or respectful intent does not remove the disclosure requirement. For AI chatbots, disclosure must appear at the moment the consumer begins interacting, not in a post-interaction screen.

AI-Generated Text and Copy

Unambiguous: the IAB framework does not require consumer-facing disclosure for AI-generated text. Headlines, slogans, product descriptions, email subject lines, social captions — none of these trigger a disclosure requirement. The principle that governs text is truth in claims, not authorship transparency: advertisers remain responsible for the accuracy of every statement, AI-written or not.


Quick Reference Table

| Content Type | AI Usage | Disclosure Required? | IAB Label |
|---|---|---|---|
| Image | Fully AI-generated, realistic | ✅ Yes | "AI-generated image" |
| Image | Routine editing only | ❌ No | — |
| Image | Background replaced, product unchanged | ❌ No | — |
| Image | Clearly stylized / illustrated | ❌ No | — |
| Video | Fully AI-generated, realistic | ✅ Yes | "AI-generated video" |
| Video | Standard post-production | ❌ No | — |
| Video | Clearly animated | ❌ No | — |
| Audio | Generic AI voice | ❌ No | — |
| Audio | Living person, authorized scripted | ❌ No | — |
| Audio | Living person, fabricated statements | ✅ Yes | "AI-generated voice" |
| Audio | Deceased person's voice | ✅ Yes | "AI-generated voice" |
| Synthetic Person | Photorealistic, primary role | ✅ Yes | "AI-generated person" |
| Synthetic Person | Digital twin, deceased | ✅ Always | "AI-generated likeness" |
| Synthetic Person | Living twin, authorized standard ad | ❌ No | — |
| Synthetic Person | Living twin, fabricated events | ✅ Yes | "AI-generated likeness" |
| Synthetic Person | Cartoon / animated | ❌ No | — |
| Synthetic Person | AI chatbot in ad | ✅ Yes | "AI-powered assistant" |
| Text & Copy | Any AI-generated copy | ❌ No | — |

What the Label Actually Has to Look Like

Getting the disclosure requirement right is only half the problem. The label itself has to meet the IAB's format standards — or it doesn't count.

The primary method is a text label using the specific IAB strings above (not paraphrases like "powered by technology"). The label must appear near the content, remain visible for the consumer's full exposure, use plain language, and provide sufficient color contrast to be legible.

When text labels would materially compromise the creative — constrained banner units, small format ads — the framework allows alternatives: platform visual indicators (the sparkle icon ✨, the C2PA Content Credentials icon, or platform-applied badges), hover/tap information icons in digital environments, or adjacent placement next to rather than on the creative.
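
Because paraphrases do not meet the standard, an asset pipeline can sanity-check label strings before trafficking. A minimal sketch, assuming the six label texts quoted in this article are the complete set:

```python
# Exact IAB label strings quoted in this article -- paraphrases
# like "powered by technology" do not satisfy the framework.
IAB_LABELS = {
    "AI-generated image",
    "AI-generated video",
    "AI-generated voice",
    "AI-generated person",
    "AI-generated likeness",
    "AI-powered assistant",
}

def is_valid_iab_label(label: str) -> bool:
    """Exact-match check: only the specified IAB strings pass."""
    return label in IAB_LABELS

print(is_valid_iab_label("AI-generated voice"))     # True
print(is_valid_iab_label("powered by technology"))  # False
```

This only validates the text; visibility, placement, and contrast still have to be checked against the format rules above.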

Platform Labels Are Separate

Meta, TikTok, and YouTube may automatically apply their own AI labels based on internal detection — independent of whether you applied one. The IAB framework governs advertiser decision-making; platform labels are an additional layer. Advertisers need to comply with both.


Is the IAB Framework Actually Binding?

The framework itself is voluntary industry guidance — no regulatory body enforces it directly. But the framework was designed to align with the laws that are binding:

The New York Synthetic Performer Law (effective June 2026) requires conspicuous disclosure of AI-generated performers and carries civil penalties of $1,000–$5,000 per violation. California SB 942 (August 2026) covers platforms with 1M+ users and requires embedded metadata disclosures. The EU AI Act Article 50 requires AI labelling as a consumer protection baseline, with full force in 2027. South Korea's Telecommunications Act already mandates blanket labelling of all AI-generated photos and videos in advertisements — no materiality test, no exemptions.

Compliance with the IAB framework doesn't guarantee compliance with all of these. But it gets agencies to a documented, defensible standard for most US campaigns — and provides the foundation for layering in jurisdiction-specific requirements. If your workflow involves C2PA metadata, the C2PA IAB Compliance Auditor checks your asset files for missing and inconsistent fields.


The Implied-Truth Effect: Why Getting This Wrong Cuts Both Ways

There's a counterintuitive risk worth flagging. Advertisers who apply disclosure labels responsibly can inadvertently create a disadvantage relative to those who don't: consumers tend to perceive unlabeled content as more credible simply because other content carries a disclosure. The IAB framework calls this the implied-truth effect.

This is precisely why the industry needs a consistent standard rather than voluntary opt-in. Inconsistent disclosure doesn't protect consumers — it rewards non-disclosure. The framework's goal is to establish a baseline where labeled content earns trust, and unlabeled content that should have been labeled faces genuine legal and reputational risk.


Frequently Asked Questions

Does my AI ad need a disclosure label?

It depends on materiality, not on whether AI was used. Under the IAB AI Transparency and Disclosure Framework (January 2026), disclosure is required only when AI materially shapes content in a way that could mislead a reasonable consumer about authenticity, identity, or representation. The threshold is consumer impact, not the volume of AI tools used.

What are the IAB's three materiality criteria?

The IAB applies three criteria: (1) Deception Potential — could a reasonable consumer be misled about authenticity? (2) Material Impact — does AI involvement affect product representation or purchase decisions? (3) Expectation Alignment — does the content fall outside what consumers would expect from standard production? All three point at the same question: would a consumer care if they knew?

What exact label text does the IAB require?

Exact strings: 'AI-generated image' for synthetic images, 'AI-generated video' for video, 'AI-generated voice' for audio, 'AI-generated person' for photorealistic synthetic personas, 'AI-generated likeness' for digital twins of real individuals, and 'AI-powered assistant' for AI chatbots. Paraphrases do not meet the IAB standard.

Does AI-generated ad copy require disclosure?

No. The IAB framework explicitly states that AI-generated text — headlines, slogans, product descriptions, email subject lines, and social posts — generally does not require consumer-facing disclosure. Advertisers remain responsible for the accuracy of all claims regardless of how the copy was written.

When does an AI voice require disclosure?

It depends on whose voice and what they say. Always required: voice of a deceased person (even with estate authorization), and voice of a living person making fabricated statements about real-world events. Not required: generic AI voiceovers for narration, and authorized synthetic voice of a living person in scripted commercial content.

How long must a video disclosure label stay on screen?

The IAB framework requires the label to remain visible for the entire duration of the video — not just a minimum number of seconds. The label must be on screen for as long as the AI-generated content is on screen. A brief opening disclosure that then disappears does not satisfy the requirement.

How does disclosure work for audio-only ads?

For audio-only ads (radio, podcast, smart speaker) longer than 60 seconds, the spoken verbal disclosure must be made at least twice — not just once. The disclosure must be in the same language as the advertisement, at a normal conversational pace, before or immediately after the AI-generated segment.

Is the IAB framework legally binding?

The IAB framework itself is voluntary. However, it aligns with binding laws carrying real penalties: New York Synthetic Performer Law (June 2026, $1,000–$5,000 per occurrence), California SB 942 (August 2026), EU AI Act (full force 2027), and South Korea's Telecommunications Act (already in effect with blanket labelling requirements, no materiality exemption).

Does following the IAB framework satisfy the New York Synthetic Performer Law?

Yes, in the areas where they overlap. The IAB framework was designed to harmonize with the New York law. Following the IAB's label texts and placement rules for audio and synthetic personas should meet the requirements of the NY Synthetic Performer Law effective June 2026.

Can platform-applied labels satisfy the disclosure requirement?

Platform-applied labels can satisfy the IAB requirement when applied automatically based on C2PA metadata and when they meet IAB conspicuousness standards. However, advertisers remain responsible for verifying the platform has actually applied the label. Relying on platform enforcement without verification is not a compliance strategy.