This month in AI: brands say no to agencies, paid AI search, new regulations and the rise of AI influencers

In today’s rapidly changing digital and technological landscape, keeping pace with the latest trends can be challenging. As AI transforms the marketing industry at an unprecedented rate, staying informed has never been more important. That’s why I am launching WFA’s monthly AI update, your one-stop round-up of the most important developments affecting your use of AI.

Article details

  • Associate Director, Digital Policy, WFA
Opinions
29 April 2024
Join our AI Community if you'd like to gain practical insights into how to leverage the potential of AI in an effective, safe and responsible way.

Brands clamp down on agency use of generative AI

Press reports suggest that certain brands are placing restrictions (and in some cases, total bans) on the way their partners use AI on their behalf. This reflects mounting concerns around the reputational and legal risks associated with generative AI use, notably around company confidentiality, copyright and IP, and privacy.

In fact, a WFA benchmark revealed that more than 50% of member respondents are introducing policies restricting the use of generative AI in external-facing marketing creative. Even Google announced it won’t be allowing advertisers to create images depicting people or brands’ logos in its new gen AI product. But fear not: marketers will still be able to generate deepfakes of domestic animals!

Moves to restrict GenAI in marketing creative also reflect brands’ lack of visibility into how the technology is being used on their behalf, and their efforts to take back control. WFA research from September 2023 found that only 3% of brands are fully aware of how their partners use generative AI.

Such actions could create friction with agencies, consultancies and others looking to ramp up their investment in and use of AI. WPP and Google are the latest to announce a collaboration as part of efforts to ‘redefine’ generative AI-driven marketing.

On 30 May, WFA’s In-House Forum will explore GenAI use within in-house creative agencies. You can register here.

Ad-funded misinformation compromises brand integrity

Generative AI provides a more accessible and efficient way to intentionally produce harmful content and disinformation at scale, increasing the risk that brands could appear alongside or fund harmful AI-generated content. Agency holding group IPG and brand suitability company Zefr have found that such placements can have a real reputational impact, with ads appearing next to misinformation seen as less trustworthy. Half of consumers believe a brand’s integrity is compromised when it’s seen alongside misinformation.

WFA’s Global Alliance for Responsible Media (GARM) is currently surveying the online platforms to discover more about their approaches to transparency and accountability when it comes to the monetisation of AI-generated harmful content. Results are due in the next few months.

AI-generated influencers are coming

Social media platforms are starting to explore the world of AI influencers, with TikTok reportedly testing a new feature that would enable brands to develop virtual influencers to promote and sell items on the platform. Meta is taking a different approach, testing a program known as ‘Creator AI’ that would offer some of its most popular influencers the ability to generate AI versions of themselves.

The challenge for brands will be to manage appropriate disclosures, particularly as research continues to find that only 20% of influencers systematically disclose their posts as advertising, despite regulatory obligations. AI influencers may be easier to manage but questions arise as to whether they will help brands build authentic connections with consumers or contribute to an overall decline in trust.

It’s time to pay for AI-generated search

The Financial Times reports that Google is considering charging users for its AI-powered search engine, while keeping its traditional search engine free of charge. The move reflects concerns that its ad business could suffer if its AI-powered search engine provides more complete answers that no longer require users to click through its advertisers’ websites. The FT suggests that ads would continue to appear alongside search results, even on the paid version.

Regulations tighten in the EU with the US set to follow

The EU has passed the world’s first comprehensive AI regulation, which will require brands to disclose any AI ‘generated’ or ‘manipulated’ content that ‘resembles existing persons, objects, places, entities or events’. Consumers appear to agree: 75% want brands to disclose if the content they are engaging with has been created with the help of AI.

The new rules will also require AI developers to provide transparency into the data used to train their models, helping brands understand the potential risks of using their systems.

Similar proposals have been put forward in the US at both state and federal level. Tennessee recently adopted a law requiring businesses to seek consent to replicate artists’ voices for advertising purposes, driven by concerns about AI-generated voice clones. A new federal proposal would require AI developers to seek consent to use personal data to train AI models, reflecting mounting concern about personal data scraping.

If you’d like to join a group of like-minded peers to gain practical insights into how to leverage the potential of AI in an effective, safe and responsible way, then please get involved in our AI Community. The next meeting will take place on 25 June and you can register here.

Please send across any tips, developments and interesting insights to me at g.robitaille@wfanet.org.
