GARM: Small steps and big leaps in pursuit of online safety
Rob Rakowitz, Initiative Lead for WFA's Global Alliance for Responsible Media, shares the priorities of the cross-industry group formed at Cannes 2019 to address brand safety concerns
We launched GARM in late June of last year, and in the months since then we’ve made significant progress in readying the industry to build a better system of controls and frameworks to dramatically reduce harmful content. Our aim is to eliminate its monetization and meaningfully reduce its availability, in the interest of consumers and society.
GARM now gathers over 40 brand owners, six agency holding companies, seven major media platforms and seven industry groups, bringing together 110 of the brightest media experts committed to uncommon collaboration in pursuit of our goals.
I’ve been fortunate to spend time with members of the community and experts in the space, and those conversations have helped build a fact-based approach to creating this alliance. Three major themes helped to crystallize our current focus:
- The pace of change exacerbates low visibility: Online content moves at the pace of culture. Because of this, important actors like the digital media platforms revise their policies as frequently as every 14 days. Marketers, agencies, and tech enablers already have limited visibility when classifying content as brand suitable. Keeping pace with this change is hard, especially considering that each platform and market takes a different approach.
- High levels of customization require more collaboration: One of the top pain points we identified in our work was the high level of customization required of brands, agencies, and platform partners. This may be the difference in how two brands from the same company view something like comedy, or in how the same brand approaches cultural sensibilities in two different markets. Extrapolated across the number of brands, markets, and platforms, it’s easy to see how the work can overwhelm teams.
- Multidisciplinary implementation: 98% of the marketers we consulted empower their media leaders to define brand safety guidelines. However, upon closer inspection, roughly two-thirds of those marketers rely on other disciplines, like corporate affairs, legal, and procurement, to help put policy into action. Reducing harmful content online and protecting brand investment requires multidisciplinary implementation.
For these three reasons, we’ve decided to prioritize our activities in three core areas:
- Developing a common way of defining and categorizing harmful and sensitive content: Making progress towards reducing the flow of money to bad actors and harmful content requires shared definitions and vocabulary. This is a critical first step in our efforts.
- Creating tools that give participants more visibility and control to eliminate harmful content from media campaigns and advertising-supported channels: The industry cannot move at scale against bad actors, or realize the potential for brands and consumers, unless it creates common tools and frameworks that work across the media supply chain, geographies, and platforms.
- Establishing shared measurement standards and oversight to drive collective accountability: Right now, we have no consistent way to measure the safety of our operations – whether you’re an advertiser, an agency, or a platform. We want to create a framework to assess our individual safety and track progress over time. Adopting key measures and agreeing to independent verification will be key to driving continuous improvement.
We’re excited and inspired by the steps we’ve taken since our launch in Cannes, and we’re humbled to be associated with the World Economic Forum. In our time together, we’ve united the right people and focused on leverage points – scaling existing work and being bold and creative in developing new work – to improve safety and sustainability for digital media.