This month in AI: brands highlight AI gains, Meta and Adobe backtrack on policy changes, AI ad fraud and TikTok avatars

In today’s rapidly changing digital and technological landscape, keeping pace with the latest trends can be challenging. WFA’s monthly AI update serves as your one-stop round-up of the most important developments, helping you stay informed about the forces transforming the marketing industry. Gabrielle Robitaille, Associate Director, Digital Policy at WFA, identifies six important announcements.

Article details

  • Gabrielle Robitaille, Associate Director, Digital Policy, WFA
Opinions
4 July 2024
Join our AI Community if you'd like to gain practical insights into how to leverage the potential of AI in an effective, safe and responsible way.

WFA publishes responsible AI principles

WFA research from September 2023 revealed that only 44% of global brands have responsible AI principles guiding their organisations’ use of generative AI for marketing purposes. To support members on their responsible AI journey, we have developed a compilation of the most commonly adopted principles along with standardised definitions.

To help bring them to life, WFA is hosting a series of workshops to co-create actionable, voluntary guidance on how to operationalise these principles across the most pressing generative AI marketing use cases. The first workshop takes place on 3 September and you can register here.

Brands share insights on creativity and productivity gains

Global brands have started sharing insights into how generative AI is driving marketing effectiveness and efficiencies. The CMO of fintech company Klarna shared that generative AI usage has delivered a 25% saving on external agency expenses, a $6 million reduction in image production costs and a cut in image development cycle times from six weeks to just seven days.

Reckitt’s CMO revealed how generative AI pilots had resulted in a 60% decrease in time taken for product concept development, a 30% decrease in time needed to adapt and localise ads, and a 90% reduction in time spent on post-campaign media analysis.

To uncover more about how brands are using generative AI, we would greatly appreciate your input into this WFA generative AI survey.

Scrutiny builds on generative AI model training

In early June, Meta announced a privacy policy change that would allow it to use years of personal posts, private images and online tracking data to build and train its AI models, without user consent.

The change has been put on hold in the EU following complaints filed by the privacy NGO None of Your Business (noyb), which claimed that these practices violate data protection rules. In Brazil, the privacy regulator has ordered Meta to stop processing its users’ personal data for AI model training entirely.

Adobe also came under fire after announcing a service update that would have allowed it to use and own any content uploaded and generated by Adobe Firefly users. Adobe has since published a blog post clarifying that its tools are not trained on customer content, that it does not assume ownership of customers’ work, and that its models are trained on datasets of licensed content and content in the public domain.

These developments highlight growing distrust among regulators, consumers and business users alike of how AI companies use data to train their models.

GPT-4 and Gemini 1.0 ranked among the least transparent generative AI models

Stanford University has ranked GPT-4 and Gemini 1.0 among the most opaque generative AI models. Its Foundation Model Transparency Index found all models to be generally quite opaque, with an average transparency score of 58 out of 100. The likes of StarCoder, Jurassic-2 and Luminous ranked the highest.

Under the EU and Colorado AI Acts, OpenAI, Google, Meta, Adobe and other developers will soon be required to share detailed information about how their AI models were trained, how algorithmic processes work and the steps taken to mitigate risk. It remains to be seen how these new provisions will be applied in practice.

WFA provided an overview of regulatory developments at the last AI Community meeting, and Unilever shared their perspectives on what these mean for brand marketers. As a WFA member, you can download the slides here.

Generative AI held responsible for surge in ad fraud and image disinformation

A new report by digital measurement platform DoubleVerify has found that generative AI is driving a more than 20% increase in new ad fraud schemes. According to the report, generative AI enables malicious actors to generate thousands of seemingly authentic user agents that mimic human behaviour, making bot traffic patterns more difficult to detect.

Research by Google and numerous fact-checking organisations has found that AI-generated misinformation has risen rapidly since spring 2023. WFA’s Global Alliance for Responsible Media is working with social media platforms to bring transparency to how they are addressing AI-generated misinformation.

For more information, reach out to Rob Rakowitz at r.rakowitz@wfanet.org.

TikTok unveils virtual avatars for brands

TikTok officially announced ‘Symphony Digital Avatars’ at Cannes Lions, enabling brands to use virtual influencers to promote and sell products on the platform. Brands will have access to two types of digital avatars: pre-built avatars, created using paid actors and licensed for commercial use, and custom avatars, crafted to represent a creator or a brand spokesperson.

It remains unclear how consumers will respond to the use of such avatars, and whether they will strengthen or undermine brand authenticity and trust. WFA is hosting a session with TikTok on 4 September to dig into TikTok’s new suite of AI products. You can register for this meeting here.

Please send across any tips, developments and interesting insights to g.robitaille@wfanet.org.
