AI Essentials: avatars spark DEI debates, brand fakery and EU investigates prohibited uses

AI is transforming the way marketers work. Gabrielle Robitaille, Associate Director, Digital Policy at WFA, looks at the latest developments and how they impact insights, customer communication, video generation, copyright and disinformation. 

Article details

  • Author: Gabrielle Robitaille, Policy Director, WFA
  • Opinions
  • 25 November 2024

AI avatars trigger debates around bias and discrimination

Earlier this month, fashion company Mango launched a new campaign generated ‘entirely by Generative AI’. The ad, which featured AI avatars modelling clothes, attracted significant backlash from the public, with consumers claiming that the company had wrongfully replaced human models and creators with AI.

Many were quick to point out that the campaign was perpetuating unrealistic beauty standards, with avatars depicting unrepresentative body types. Some even argued that the campaign constituted ‘false advertising’, calling on ad self-regulatory authorities to provide clarity on how the principles of truthfulness and honesty apply when AI is used in advertising.

The backlash reflects wider industry concerns: WFA research reveals that nine out of 10 companies are ‘concerned’ or ‘very concerned’ about the risk of bias and discrimination when Gen AI is used for marketing purposes, particularly for content ideation and creation. This is why 50% of brands we surveyed now have restrictions in place on the use of AI-generated models in marketing creative.

Brands subject to AI ad ‘deepfakes’

In recent months, a number of brands have been the subject of mock AI-generated ad ‘experiments’. The latest example involves Dior, where generative AI was used to create an ad depicting the brand’s products, logos and even famous people without their permission.

While intended to showcase AI’s creative potential, such examples continue to underscore the risks brands face in protecting their identities in a rapidly evolving technological landscape.

In response, some agencies are starting to develop AI detection tools to help brands monitor ‘deepfakes’. For example, Publicis-owned ad agency Team One recently launched Faikcheck, a tool for ‘quickly detecting whether content is real or a product of AI’.

OpenAI launches real-time search for ChatGPT

OpenAI has released its new ChatGPT search function, which enables users to access up-to-date, real-time information, with links to relevant sources, directly on the company’s website and via its Android and iOS apps.

Boasting 200 million weekly active users worldwide, ChatGPT search represents a new rival to Google’s dominance of search. Google already offers an AI Overviews feature, which incorporates generative AI responses to users’ search queries alongside the normal list of results.

EU opens consultation on which AI use cases should be prohibited

In May, the EU adopted the AI Act, the world’s first comprehensive law regulating AI, including generative AI. While the law does not regulate marketing specifically, it introduces a risk-based approach, prohibiting certain AI practices where they are judged to undermine people’s safety and fundamental rights. This includes practices such as social scoring, manipulating human behaviour or exploiting people’s vulnerabilities.

To provide clarity on which particular use cases fall under the scope of these ‘prohibited’ practices, the EU is inviting stakeholders, including academia, civil society representatives and businesses, to submit their input. Responses will feed into future regulatory guidelines.

While marketing is unlikely to be the focus of regulators, WFA will monitor the process closely and report back on any relevant developments.

Please send across any tips, developments and interesting insights to g.robitaille@wfanet.org.
