This month in AI: new tools, content labelling, voice cloning and licensing deals
In today’s rapidly changing digital and technological landscape, keeping pace with the latest trends can be challenging. WFA’s monthly update is your one-stop round-up of the most important developments affecting your use of AI, helping you stay informed about the forces transforming the marketing industry.
Join our AI Community if you'd like to gain practical insights into how to leverage the potential of AI in an effective, safe and responsible way.
Platforms roll out new AI tools for brands
TikTok has become the latest tech company to incorporate generative AI into its ad business, announcing the launch of its ‘TikTok Symphony’ AI suite. The new features aim to help marketers write scripts, produce videos and enhance existing assets, as well as determine the best creative and audience for their campaigns.
Meta and Google have both announced similar AI-powered features, unveiling new tools that enable advertisers to generate images and text for branded content while reflecting a brand’s ‘unique voice and tone’. Many of these new tools are still in the trial phase and will likely be rolled out over the course of the year.
These developments are part of a drive by ad-funded platforms to boost their own ad business. WFA is currently coordinating with the online platforms to organise sessions exploring these new tools and their implications for brands.
Labelling of AI-generated content gets real
OpenAI has joined the Coalition for Content Provenance and Authenticity (C2PA), an industry initiative launched by Adobe, Arm, Intel, Microsoft and others aimed at developing technical standards for labelling and certifying AI-generated content and addressing misleading information online.
The company also announced that any content created through its video generator Sora will be labelled as AI-generated and unveiled a new model to predict whether an image originates from its image generator, DALL-E 3.
As part of its own partnership with C2PA, TikTok announced it would also automatically label any AI-generated content uploaded from outside sources.
Contributing to the development of content provenance solutions forms part of the recommendations for AI developers put forward in WFA’s ‘Generative AI Primer’, as these will support brands’ efforts to ensure their ads aren’t inadvertently funding AI-generated harmful content. WFA’s Global Alliance for Responsible Media (GARM) is currently surveying online platforms to discover more about their approaches in this area. Results are due in the next couple of months.
AI developers prioritise licensing deals with news publishers
A number of major news corporations, including The Financial Times and News Corp (owner of The Wall Street Journal and The New York Post), have announced licensing deals with OpenAI, giving the AI developer permission to use content from their publications to train its models. These partnerships will also enable ChatGPT users to see attributed summaries, quotes and links to content from these publications in response to relevant queries. Reports suggest that Google and Apple are exploring similar AI deals.
These deals come in the context of several ongoing lawsuits filed by news publishers against AI developers, accusing them of unlawfully using journalists’ work to train their models.
As generative AI tools are prone to producing outputs that resemble or replicate existing works, WFA research found that over 55% of global brands are very concerned about copyright and intellectual property risks. These licensing deals may reduce the risk of brands being held responsible for inadvertent third-party copyright infringements, but there is still much concern about how marketers can ensure they’re not creating or using AI-generated outputs that replicate existing creative works.
AI voice cloning debacle
OpenAI launched GPT-4o, a new version of its chatbot that generates a combination of text, audio and image outputs. However, following a live demonstration of one of its new voice assistants (known as ‘Sky’), commentators were quick to draw comparisons between Sky and actress Scarlett Johansson. The actress subsequently accused the company of deliberately copying her voice. OpenAI has removed the voice option from ChatGPT, although it has denied any connection between the two.
This development highlights the risk that individuals may find their likeness replicated or ‘deep faked’ in AI-generated content. For brands, this means the use of synthetic people or voices in ad creative may unintentionally feature or replicate real people’s images or voices without their knowledge or permission.
A recent WFA benchmark found that 50% of global brands surveyed have restrictions on the use of AI-generated voice in marketing creative. In the coming weeks, WFA will launch workshops to develop practical guidance on the responsible use of generative AI across key marketing use cases, including AI-generated synthetic people and voices.
US states of Utah and Colorado enact first comprehensive AI bills
The US states of Utah and Colorado have become the first to pass comprehensive laws regulating the use of AI, echoing many obligations found in the EU AI Act.
Utah’s new AI transparency law requires companies to disclose when consumers interact with their AI systems, and clarifies that companies are responsible for the actions of the generative AI tools they use. Colorado’s Artificial Intelligence Act seeks to govern high-risk AI systems and introduces obligations requiring developers and users of such systems to protect consumers from harmful AI-driven discrimination.
The next AI Community meeting, taking place on 25 June, will explore what these regulatory developments mean for marketers and how brands are preparing. You can register for the meeting here.
Please send across any tips, developments and interesting insights to me at g.robitaille@wfanet.org.