This month in AI: contract best practice for GenAI, consumers still divided on ad use, ad-funded chatbots and more privacy concerns
In today’s rapidly changing digital and technological landscape, keeping pace with the latest trends can be challenging. WFA’s monthly AI update serves as your one-stop round-up of the most important developments, helping you stay informed about the forces transforming the marketing industry. Gabrielle Robitaille, Associate Director, Digital Policy at WFA, picks out the key announcements and events of the last month.
WFA publishes generative AI contract best practice
According to recent WFA research, 80% of brands are concerned about how partners, including media and creative agencies, are using generative AI on their behalf. As a result, more than half plan to review their contracts with agencies to ensure responsible generative AI use and governance across the supply chain.
To support marketers’ contract reviews, WFA, in partnership with marketing consultancy R3, has developed voluntary guidance for members on generative AI contract best practice. The objective is to help brands drive transparency and trust in generative AI adoption while also safeguarding brand reputation.
Consumers divided on use of GenAI in ads
A recent Ad Age-Harris poll of more than 1,000 US adults reveals that consumers are split on whether brands should be using generative AI to create ads. According to the results, less than half of respondents (45%) said brands should use the technology to create ads, with 36% opposed and the remaining 19% unsure.
Recent research from marketing research company IPSOS, presented at a WFA AI Community meeting earlier this month, suggests that over-reliance on generative AI in ad creative could undermine both short- and long-term effectiveness. Members can download the recording from the meeting here.
WFA is keeping track of research digging into consumer perspectives on the use of GenAI in advertising and marketing. You can download the overview of existing research here.
New tools help brands understand how they’re appearing in GenAI-powered search
According to reports by Digiday, CRM and SEO providers are developing tools to give advertisers more insight into how their brands appear on GenAI-powered search engines such as ChatGPT and Perplexity.
This comes as consumers increasingly turn to generative AI chatbots for search. Usage of tools like ChatGPT to answer questions is reportedly up 37%, while use of traditional search engines is down 11%.
These new tools aim to address the so-called ‘black box’ nature of large language models and help marketers understand factors such as brand sentiment and share of voice. For example, HubSpot’s new ‘AI Search Grader’ claims to help marketers ‘master a new craft: language model optimization’ by giving brand owners an at-a-glance view of how their brand is performing across AI search engines.
Ad-funded chatbots become reality
GenAI chatbot Perplexity, which now boasts approximately 10 million monthly active users, plans to run ads in Q4, according to a leaked advertiser pitch deck. Its new ad strategy will initially target 15 key categories, including health, technology, finance, arts and entertainment, and food and beverage. According to reports, nearly ‘two dozen brands and several top agencies’ are discussing Perplexity’s ad offerings with clients.
Coverage suggests marketers will be able to sponsor ‘related questions’ or run video ads in prominent positions. The platform will utilise a cost-per-thousand-impressions (CPM) model, with rates expected to exceed $50 (at that rate, one million impressions would cost upwards of $50,000).
Separately, Perplexity announced in July that it would be launching a publisher revenue share program to give publishers an opportunity to make money through its search platform.
OpenAI launches new GenAI model, signalling new technological breakthroughs
OpenAI has unveiled a series of AI models, referred to as o1, which reportedly have new reasoning and problem-solving capabilities. The models will be available to ChatGPT’s paid subscribers and are said to mark a breakthrough in developing artificial general intelligence (AGI) – machines with human-level cognition.
However, concerns have been raised that these advanced capabilities come with an increased risk of misuse and deception. OpenAI has acknowledged the models represent a ‘medium’ risk for issues related to chemical, biological and nuclear weapons, as they could improve individuals’ ability to create bioweapons.
This comes as other tech companies, such as Google and Meta, race to build systems that can be integrated into our working lives in an attempt to sell GenAI to business customers. So-called ‘AI agents’ go beyond current AI assistants (including the likes of Copilot) in that they can take actions on behalf of users.
Top EU privacy regulator opens investigation into Google
Google’s lead privacy regulator in Europe, the Irish Data Protection Commission, has launched a formal investigation into whether Google has sufficiently safeguarded EU users’ personal data in the development of its AI model, PaLM 2. The probe follows mounting concerns that Google’s practices may be falling short of the rigorous data protection laws in place across the region.
PaLM isn’t alone in facing scrutiny. AI models such as Bard (Google), ChatGPT (OpenAI) and LLaMA (Meta) are also under the microscope in the EU and beyond. Authorities are questioning whether these companies have properly protected fundamental user rights during the development of their AI systems.
It may take several months before any final ruling emerges.
Please send across any tips, developments and interesting insights to g.robitaille@wfanet.org.