The new agreement on harmful content in the digital space reached within the Global Alliance for Responsible Media highlights the need for action from brands as well as platforms. It’s not simple, however, and our five industry experts describe the next steps for brands, from ensuring that the same standards are applied to all the media partners they invest with, to ensuring that brand safety efforts do not lead to discrimination against certain sections of the population.
Clockwise from top left: Tina Beuchler (Nestlé), Gerry D'Angelo (P&G), Jonathan A. Greenblatt (ADL), Christopher Kenna (Brand Advance), Belinda Smith (formerly EA)
Belinda Smith, global media expert, former Electronic Arts media and marketing lead:
“Whatever happened to good old brand safety? We used to flock to premium publishers, who plan their content well in advance, to ensure our brands were showing up in all the right places. We left no room to be caught by surprise.
During the digital era we lost that golden rule. Chasing audiences got very dangerous and now we’ve rightly thrown our collective heft at cleaning up the internet.
But whatever happened to good old brand safety? In our hunger to bash the heads of digital publishers, we’ve given the broadcasters and their peers from that era a free pass. If an internet site or service, where users upload every stream-of-consciousness thought that comes to their minds, can sit at the table, agree to a collective definition of harmful and sensitive content, and protect my brand from appearing next to it, then why can’t all publishers do this?
Why would we let someone get on a talk show or the news or reality TV and spew hate and vitriol and misinformation without pulling our ad dollars? We would not extend this luxury to our digital friends. Whatever happened to good old brand safety? It’s time we brought it back.”
Gerry D’Angelo, Vice President, Global Media, Procter & Gamble:
“Our industry spends more than $600 billion in media and is facing multiple challenges – a global pandemic, economic disruption, social unrest and climate change.
With more time at home, technology and e-commerce trends are accelerating and there may never be a better time for us to lead constructive disruption and transform media into a powerful force for good and force for growth.
P&G is challenging itself and the industry in a number of areas, including equal representation in the media supply chain, elimination of systemic inequalities in the media supply chain, accurate portrayal of all humanity in all media, a transparent and accountable media supply chain, joining forces to create content for good, and elimination of hateful content in media.
There is still too much hateful, denigrating, and discriminatory content in media and P&G continuously reviews all media on which we advertise – television, radio, print, and online – to ensure our brands are not on or near harmful content.
The Global Alliance for Responsible Media has made progress by establishing common definitions and common metrics for reporting harmful content, verified by an independent third party. Now we expect to see progress being made, not just by two platforms but by the industry as a whole.”
Tina Beuchler, Global Head of Media & Agency Operations, Nestlé:
“At this difficult moment, it is more important than ever to ensure safe content for people who are spending more time in front of their screens. Nestlé markets more than 2,000 brands across 187 countries.
If brands can help build a safer internet, one free of disinformation and other harmful content, not only will it be safer for users, it will help build trust in the platform and ultimately create a more effective marketing environment for advertisers.
No one can do this alone. Nestlé became one of the first advertisers to join WFA/GARM, recognizing that the huge task of creating a more responsible environment for consumers can only be accomplished by working together as an industry to create common standards for a safer more responsible internet environment.
Together, the now hundred plus GARM members, including our own media agencies such as WPP who work hand in hand with us on driving GARM, have advanced standards in the definition of harmful content, measurement verification and education. Nestlé acknowledges the progress but will push for continued improvement across all media.”
Christopher Kenna, CEO and Founder, Brand Advance:
“GARM has been taking some great steps towards ensuring that online content is properly monitored, but there is still a way to go. Ensuring that content is not harmful is one part of the battle; so is ensuring that advertisements are inclusive and culturally sensitive. When reaching diverse audiences, it’s crucial that brands know how to communicate in an authentic and sensitive way with their creatives and messaging. At Brand Advance and its sister companies, we are seeing increased demand for counsel to ensure this is done authentically. This can be achieved by creating walled gardens of safe publishers, so we can remove strict brand safety barriers and allow better reach within contextually diverse media. This was adopted fantastically by Diageo with the Trusted Marketplace.
Independent third-party verification processes are key to ensuring that online content is safe. When looking for standardised definitions of what makes content harmful, we must consider how these definitions vary across demographics - the word ‘gay’ might be considered inappropriate by some brands on their blocklists, but it’s pretty run of the mill for LGBTQ+ media! Our Sentiment AI technology allows for contextual and conceptual tagging that avoids blunt keyword blocking by using language learning to assess in detail whether online content is appropriate or not. This, combined with verification of ad creatives’ cultural authenticity, provides a two-pronged approach to both definitions of unsafe online content and inappropriate ad creatives.”
Jonathan A. Greenblatt, CEO and National Director of the ADL (Anti-Defamation League):
“This summer, we reached a tipping point in the long struggle to convince social media companies to take responsibility for the hate and extremism that has festered on their platforms. With tens of thousands of Americans joining in racial justice protests across the country, with election misinformation spreading like wildfire and with violent extremists emboldened, the Stop Hate for Profit movement was born.
When we called for support, 1,100+ advertisers signed on to a month-long Facebook ad pause. Hundreds of celebrities and influencers then followed suit with an Instagram freeze that reached more than 1.8 billion users. We counseled Congress ahead of a landmark hearing on antitrust issues and helped them to interrogate the executives effectively.
All of this signaled that advertisers, consumers and regulators are fed up with the lack of progress. The changes you are seeing now, such as Facebook’s sudden about-face on Holocaust denial and Facebook, YouTube and Twitter agreeing to adopt GARM’s common framework for defining harmful content, aren’t happening in a vacuum. They are a direct result of pressure.
Imagine if every advertiser were to condition ad buys on strict adherence to a broad anti-hate framework. This kind of unified approach could make our communities safe and respectful for all people.”