DAVOS — The Global Alliance for Responsible Media — a coalition of marketing and advertising leaders, businesses and media formed last June in Cannes — has unveiled the first steps in a plan to address harmful content.

“(The digital world) provides lots of great possibilities for consumers, society and the marketing industry. But at the same time there are some issues and unintended consequences that we as an industry need to tackle,” said Luis Di Como, Unilever’s executive VP of global media.

Unilever is one of the founding members of the organization, which also features Bayer, Microsoft, Johnson & Johnson, Mastercard and Procter & Gamble; agency groups IPG, Omnicom, and Publicis; media companies Facebook, Google and Twitter; and trade associations ANA and Mobile Marketing Association.

While in Davos, Di Como talked to us about Unilever's efforts to turn harmful internet content into an industry issue. In early 2018, the company famously threatened to pull its ads from Google and Facebook unless they cleaned things up.

What spurred the creation of the Global Alliance at this particular point in time? 

In 2018, Unilever pioneered a digital responsibility framework around responsible content, responsible platforms and responsible infrastructure, covering issues such as monetization, content guidelines, protections for kids and data privacy. We’ve been doing this for more than two years, but we realize our individual efforts will not take us to the next level. That’s why we partnered with and became a founding member of the Global Alliance for Responsible Media. Our ambition is to remove harmful content and create a safe environment for brands and society.

That’s a very big challenge. How do you envision doing that? 

We are working relentlessly in collaboration with all the members of the industry (platforms, companies, agencies and trade associations) and now the World Economic Forum, bringing the stakeholders together to create a digital ecosystem that is good for brands and society. What we are announcing are the next steps in a concrete and substantial plan:

  1. Shared definitions. There are multiple ways that every advertiser, every agency, every marketer defines harmful content. We categorized it into 11 key areas, including explicit content, drugs, spam and terrorism. This will enable platforms, agencies and advertisers to have a shared understanding of what harmful content is and how to protect vulnerable audiences, such as children. Nobody wants to have it: not the platforms, not the advertisers. Having clear definitions of what it is gives you (a common way to identify it).

  2. Creating common tools and systems. Developing shared tools and systems will allow us to create better linkage across advertisers, media agencies and platforms to make sure none of this content is being monetized. We need tools that allow us to ensure that our brands appear in the contexts (we approve of).

  3. Independent oversight. As we mentioned, between last July and September more than 600 million pieces of harmful content appeared on the platforms (YouTube, Google and Facebook), and the platforms are removing it. But some still gets through, which means that every second someone is watching harmful content. There are self-declarations from the platforms, but we need to have independent oversight at the strictest levels.

Is your goal achievable? 

By introducing common definitions, shared tools and systems, and independent oversight, we are confident that we will remove most of the harmful content online. We know the challenges ahead. But this is an efficient way to tackle this problem for the industry and, more importantly, for the people we serve. Kids and others are exposed to this content, and we need to ensure that we provide them with the right environment.

What is government’s role in this? 

We engage regularly with government. We are having conversations to inform them about how we classify harmful content. The distinction between regulation and how we are working is that, for us, the work is about using the power of advertising. We have advertisers, agencies, media companies and trade associations, and also the World Economic Forum, to drive the system change needed to create a safer environment.

But doing all that requires having the platforms on board, too. How do you make that happen?

Our approach is not to threaten them, because we truly believe the way to solve this problem is to propose solutions and bring constructive approaches. Millions of pieces of content have been removed, and accounts have been deleted. There is much more collaboration among the platforms when they see harmful content. I think we have a mutual responsibility not just to call out problems but also to be part of the solution. The digital revolution provides plenty of benefits. We just need to be conscious of and responsive to the negatives and the pitfalls of bad actors taking advantage.