Global brands call for clearer consensus on AI labelling as usage accelerates, WFA research


Eighty-two percent of multinationals believe transparency is essential for protecting brand reputation

WFA launches new best-practice guidance for AI-generated marketing creative

Article details

Guides & templates · Survey
2 April 2026

Global brands want clearer advice on the use of AI in marketing so they know when and how to disclose its use. Over two-thirds (67%) have already developed internal rules, but eight in 10 want global guidance.

Research among WFA members reveals that brands are struggling with three main issues: unclear or fragmented regulatory obligations (61%), uncertainty about consumer expectations (46%) and lack of industry best practice (39%).

The issue has become urgent as more brands adopt AI tools and technology. Already more than three in four (78%) global brands are using AI-generated or AI-enhanced content in consumer-facing marketing and marketers overwhelmingly see transparency as critical. Eighty-two percent of brands believe transparency around AI use is essential for protecting brand reputation, while 79% say it is key to maintaining consumer trust.

To help brands navigate this issue, the WFA is today publishing new voluntary best-practice guidance on transparency in AI-generated marketing creative, developed in collaboration with the International Council for Advertising Self-Regulation (ICAS) and advertising self-regulatory organisations from around the world.

The guidance provides practical principles and real-world examples designed to help marketers determine when disclosure of AI use is appropriate and when it may not be necessary. It also helps brands identify and avoid misleading uses of AI, such as exaggerating product results or falsifying celebrity endorsements.

The guidance has been informed by new WFA research identifying concerns and current policies at 27 multinational brands with a cumulative global ad spend of $71bn.

Among the 78% already using AI in external marketing creative, 87% are using it for product images, 80% for marketing copy and 77% for background visuals. Efficiency and cost savings remain the main drivers but the data also reveals signs of caution with just 33% using AI to replicate or enhance images of real people, and only 18% using AI to generate fully synthetic humans.

There is strong support among respondents for disclosure in certain scenarios. Ninety-six percent believe that an AI-generated voice that audiences might assume is human should be disclosed, while 91% believe a synthetic human playing a central role in an ad should be labelled. By contrast, only 4% believe AI-generated decorative backgrounds need disclosure, suggesting that context and consumer impact are key considerations.

The guidance categorises common uses of generative AI in marketing creative across five areas: people and likeness, product images, audio, background visuals and marketing copy. For each category, it provides practical examples of when disclosure may be appropriate and when it may not be necessary.

“Generative AI is already transforming how marketing content is produced,” said Stephan Loerke, CEO at WFA. “But with this new capability comes a responsibility to ensure advertising remains transparent and trustworthy. Brands want to get this right, yet many are navigating a complex landscape of emerging regulations, evolving platform policies and uncertain consumer expectations. WFA’s new guidance helps brands by offering them a practical framework they can use to develop internal policies.”

In working on this guidance with ICAS, the WFA seeks to help brands who are struggling with the issue in the face of rapidly developing legal requirements, evolving platform rules as well as consumer backlashes against brands that get it wrong.

The EU’s AI Act will require labelling of so-called ‘deep fakes’ from August 2026, while laws in markets such as California and China are also introducing disclosure requirements.

However, most of these legal frameworks do not clearly define how transparency obligations should apply in a marketing context, leaving significant room for interpretation. In the absence of legal clarity, some online platforms, including Meta, Google and TikTok, have already started defining their own approaches, acting as de facto regulators.

This patchwork of rules and regulations highlights a broader strategic question for brands: how to balance transparency with simplicity. While disclosure can build trust, over-labelling risks overwhelming consumers or creating confusion if applied inconsistently across channels and markets.

“AI presents enormous opportunities for brands to improve marketing effectiveness and creativity,” said Sibylle Stanciu-Loeckx, Director at the International Council for Advertising Self-Regulation (ICAS). “But maintaining consumer trust will be fundamental to unlocking that potential. Advertising self-regulation has an important role to play in helping the industry respond quickly and responsibly to emerging technologies. By developing practical guidance grounded in established self-regulatory principles, we aim to support brands in using AI in ways that remain transparent, responsible and worthy of consumer trust.”

WFA and ICAS plan to update the guidance over time as technology evolves and regulatory frameworks mature.

The full guidance and research findings are available here.
