Tackling the nitty-gritty of Generative AI
No new technology comes without risks and Generative AI is no exception. Brands need to be alert to the challenges if they want to take advantage of the opportunities, says WFA Associate Director, Digital Policy, Gabrielle Robitaille.
WFA members can download the document here. Please note that this document is WFA member content only. If you'd like to know more about WFA membership, please contact membership@wfanet.org.
AI is part of our lives, as citizens and as marketers. Our challenge is to make sure it’s used for good. There are opportunities for brands as well as novel challenges. None of these are insurmountable, but they do require CMOs to take active steps to mitigate the legal, ethical and reputational risks.
Such action is crucial to enabling brands to harness the technology's potential without compromising the trust, safety and integrity that consumers demand.
That’s why the WFA is launching a Generative AI Primer, designed to equip brand leaders with the knowledge they need to better understand the opportunities and challenges of generative AI.
In the coming weeks, WFA will also be launching an AI Task Force, bringing together senior marketing, legal and policy professionals to help brands develop practical solutions to propel safe and suitable use of AI across the industry. There will be more details on that initiative soon.
Before we tackle the details of the generative AI challenge, it's worth reflecting on the technology's rapid growth and why it should be high on CMOs' to-do lists. The release of OpenAI's chatbot ChatGPT in November 2022 made it the fastest-adopted consumer technology in history, amassing more than 100 million users within its first two months.
Since then, the general availability of generative AI tools capable of writing text, composing music, creating art and more has demonstrated its potential to revolutionise any industry where creativity is key.
Our research has found that three in four of the world's largest brands are already using generative AI in their marketing or plan to do so soon.
From content creation to personalised customer experience, search engine optimisation and product innovation, generative AI is poised to play a significant role in driving marketing creativity, effectiveness and efficiencies.
Nevertheless, generative AI’s impact on marketing (let alone society more broadly) is yet to be fully understood and its use has already raised legal, reputational and ethical challenges for brands.
So while major brands are optimistic about AI's potential to drive business growth, more than half are also extremely concerned about the technology's risks around intellectual property and copyright, privacy and brand safety. Many also lack a clear understanding of how these challenges can be addressed.
The WFA Primer: opportunities and challenges in generative AI provides an overview of generative AI and how brands can use it safely. It puts forward a framework for assessing risk across the entire lifecycle of a generative AI tool, grouping risks into five buckets:
- The risk from the use case.
- The risk from the information sources upon which the model is trained.
- The risk from the tool's algorithmic processing and how it makes decisions.
- The risks for end users (such as brands) who are engaging with the tool and want to keep their own information secure.
- The risks from the outputs, which may inadvertently create work that harms both the brand and society.
Critically, it also offers potential solutions for both brands and AI providers across the six priority risk areas identified by our members: IP and copyright; data protection and privacy; company confidentiality; reliability; safety and integrity; and diversity, equity and inclusion, as well as broader societal considerations.
For example, one of the biggest challenges brands face when using generative AI is ensuring that the outputs they generate don't inadvertently replicate existing works, contain individuals' personal data without the necessary permissions, include harmful content such as disinformation, or reinforce harmful stereotypes.
Mitigating these risks might involve the following steps for brands:
- Avoiding the use of generative AI to create synthetic humans or ‘deepfakes’ which replicate real people.
- Seeking clarity on how the AI tool provider plans to further use the outputs brands generate, so that brands maintain control over how these assets are used in the future.
- Ensuring human review of outputs to identify whether any assets resemble existing copyright or IP-protected works and whether they contain personal data.
- Identifying and removing any content or elements depicting or reinforcing gender, racial, ethnic and cultural stereotypes.
- Supporting and participating in industry initiatives such as the Global Alliance for Responsible Media, which is working to prevent advertising revenues from funding harmful AI-generated content.
Generative AI providers and platforms hosting AI-generated content should also take steps to ensure that their services are not misused for purposes that could be harmful.
The Primer builds on the launch of WFA's GARM: Safe and Suitable Innovation Guide in June 2023, which also highlights some of the brand safety risks advertisers face in the context of generative AI. Both documents are part of our commitment to helping brands tackle the challenges of innovation and new technology.
This is a rapidly changing area and new challenges and opportunities are emerging for our members every day. That’s why we’ll keep working with both brands and the wider industry to continue developing practical guidance and solutions.
The goal for everyone in marketing should be to create an AI framework that is both safe for brands and suitable for society.