No new technology comes without risks, and generative AI is no exception. Brands need to be alert to the challenges if they want to take advantage of the opportunities, says WFA Associate Director, Digital Policy, Gabrielle Robitaille.

AI is part of our lives, as citizens and as marketers. Our challenge is to make sure it’s used for good. There are opportunities for brands as well as novel challenges. None of these are insurmountable, but they do require CMOs to take active steps to mitigate the legal, ethical and reputational risks.
Such action is crucial to enabling brands to better harness the potential of the technology without compromising on the trust, safety and integrity that consumers demand.
That’s why the WFA is launching a Generative AI Primer, designed to equip brand leaders with the knowledge they need to better understand the opportunities and challenges of generative AI.
In the coming weeks, WFA will also be launching an AI Task Force, bringing together senior marketing, legal and policy professionals to help brands develop practical solutions to propel safe and suitable use of AI across the industry. There will be more details on that initiative soon.
Before we tackle the details of the generative AI challenge, it’s worth reflecting on the rapid growth of AI and why it should be high on CMOs’ to-do lists. The release of OpenAI’s chatbot ChatGPT in November 2022 marked the fastest adoption of a consumer technology in history, amassing more than 100 million users within its first two months.
Since then, the general availability of generative AI tools capable of writing text, composing music, creating art and more has demonstrated its potential to revolutionise any industry where creativity is key.
Our research has found that three in four of the world’s largest brands are already using generative AI in their marketing or are planning on doing so soon.
From content creation to personalised customer experience, search engine optimisation and product innovation, generative AI is poised to play a significant role in driving marketing creativity, effectiveness and efficiencies.
Nevertheless, generative AI’s impact on marketing (let alone society more broadly) is yet to be fully understood and its use has already raised legal, reputational and ethical challenges for brands.
So while major brands are optimistic about the potential of AI to drive business growth, over 50% of them are also extremely concerned about the risks of the technology when it comes to intellectual property and copyright, privacy and brand safety. This is accompanied by a lack of proper understanding of how these challenges can be addressed.
The WFA Primer: opportunities and challenges in generative AI seeks to provide an overview of generative AI and how brands can use it safely. It puts forward a framework for assessing risk across the entire use cycle of a generative AI tool, categorising risks into five buckets.
Critically, it also offers potential solutions for both brands and AI providers to the six priority risk areas identified by our members: IP and copyright; data protection and privacy; company confidentiality; reliability, safety and integrity; diversity, equity and inclusion; and broader societal considerations.
For example, one of the biggest challenges brands face when using generative AI is ensuring that the outputs they generate don’t inadvertently replicate existing works, contain individuals’ personal data without the necessary permissions, include harmful content such as disinformation, or reinforce harmful stereotypes.
Mitigating these risks might involve a number of practical steps for brands.
Generative AI providers and platforms hosting AI-generated content should also take steps to ensure that their services are not misused for purposes that could be harmful.
This is a rapidly changing area, and new challenges and opportunities are emerging for our members every day. That’s why we’ll keep working with brands and the wider industry to develop practical guidance and solutions.
The goal for everyone in marketing should be to create an AI framework that is both safe for brands and suitable for society.
For more information or questions, please contact Gabrielle Robitaille at G.robitaille@wfanet.org