In this episode of The NewsWhip Pulse, Marshall Manson joins us from the Brunswick Group, where he leads digital teams across several markets in the UK, Europe, Africa, India, and more. A PR veteran, Marshall’s past roles include UK CEO of Ogilvy and EMEA Managing Director of Digital at Edelman. Prior to pursuing a career in communications, Marshall spent ten years working the US political campaign circuit.

In this conversation with NewsWhip CEO Paul Quigley, Marshall talks about the complexities of misinformation in the digital age, and how detractors often wrap fake content around a kernel of truth, making it even more challenging to combat. Watch the full video or read the transcript below.

 

Note: This transcript has been modified for clarity and brevity.

Paul Quigley: [begins at 01:09] So we’ve had the pleasure of being connected with Marshall as a partner for many years and we value his perspective because he brings unusual insight as a PR leader, with roots in political comms combined with as full a grasp of the new information ecosystem as I’ve seen anywhere. And he works with clients who must combat viral misinformation targeting their products, their goals, and their very permission to operate.

So for our first section here, we wanted to define the problem a little bit, and let's start with that, Marshall. Can you tell us how misinformation first came onto your radar, and how dealing with it became part of your job as a communicator?

Marshall Manson: It’s a great question and first of all, let me say thanks so much for having me and thanks to everybody out there for joining us. We’re thrilled you’re here.

I think it's probably obvious to everybody that misinformation is an emerging challenge. For those of us who've worked in digital for a long time, there's always been a small element of dealing with misinformation. I mean, it's not a grand insight to say that there are crazy things on the internet. But what we've seen in the last two or three years, do we want to attribute it to the rise of Trump? Is that really it? I'm not sure. But the bottom line is that over the last two or three years, we've seen misinformation affecting more and more corporate and organizational actors. So that's been on the increase.

I think particularly in the period since last March, when we've been dealing with COVID as a global pandemic, many of our clients, and two or three in particular, have been acutely affected by misinformation in a way that was disrupting either their ability to do business effectively or, in the case of one client that's a non-commercial organization, their ability to deliver on their objectives and programming. A couple of those cases showed starkly how impactful the misinformation landscape has become, and can be, for those businesses and organizations.

Paul: I know you may not be able to reveal the client names, but how can misinformation impact the ability to operate in the world?

Marshall: Well, if you imagine on the commercial side a huge amount of misinformation circulating about a product or a service you're offering, or even research your company's involved in, that can be hugely disruptive. The stuff that's been going on the last couple of days with Reddit in the discussion about various stocks, that hasn't been so much misinformation, although there has been some. But you can see how even just a kernel of misinformation there could lead to wild swings in stock price, and therefore in the capital structure of companies, even really big ones.

If you're a non-profit, or an NGO, or an organization interested in furthering its mission around the world, you're looking at this going, "All of this is potentially damaging our reputation and making it harder and harder for us to achieve and accomplish our mission." And in all of those cases, it comes back to the thing that we as communications people want to talk about: the nexus between reputation and business success. All of a sudden, misinformation becomes the thing that is preventing you from succeeding in whatever it is that you're trying to do. That's where misinformation becomes a strategic threat, and we've had a number of examples of that in the last 12 months, and even longer. My sense is that the frequency of it happening is increasing.

Paul: And this seems to be an emergent phenomenon, Marshall. Do you think it has technological or social drivers behind it? There's some visible manifestation there, but I don't think we can say the whole world revolves around that, right? There are other drivers here.

Marshall: No, I think there are other drivers, I agree with you. It is hard to be specific. There's been a huge amount of academic work done in this area in the last couple of years. A lot of it is absolutely fascinating. And more, clearly, is being done now.

I'll leave it to the researchers to help us understand the details of the causes better. For our purposes, it's enough to say that there's a combination of deep polarization, technology that is allowing people to have stronger connections, or seemingly stronger connections, than ever before, and a stronger network effect being exerted on one another. Sometimes that happens in very small groups, but the smaller the group, the stronger the effect.

Combine that with technology acting as a lubricant for information to move even faster than it ever has, and you've kind of got the perfect storm. We can talk more about this later. A good friend of mine, Nicco Mele, wrote a book called “The End Of Big” where he talks about this: the idea that between technology and changes in society, things just get smaller and smaller and smaller. And the smaller things get, the more opportunity there is for misinformation, often based on kernels of truth, which I know we're going to talk about, to pop up and exert strong and damaging influence.

Paul: Yeah. So we’re definitely going to come back to kernels of truth and also maybe picking out some of the academic research you think is particularly noteworthy that people who are attending today might want to have a look at.

But what you've described is polarization, new connections, and network effects, all being lubricated by technology. They're happening faster. These are mainly digital phenomena that then emerge into the real world and have impact there.

Marshall: Yeah, clearly those aren't the only forces. I think there are people who would argue that media dynamics vary from country to country, and clearly that's an issue. There are any number of other factors, but I think those are a couple of the major ones.

Paul: Before we move on to frameworks and solutions and thinking about it, I just wanted [to go over a] definitional thing. Isn't misinformation just a perspective that you disagree with, Marshall? Trump used to call it fake news, so how do you define it?

Marshall: No. I suppose one could have a debate about what I’m about to say although I’d kind of hope not. I’d like to think that in our society, we can still agree that there are some things that are simply facts. And that there are some things that are worthy of debate and merit debate and there are indeed many of those things. But at the heart of those debates, there should be facts we can agree on.

The trouble with misinformation is that, at least the way I would define it, and I think this is the common definition, misinformation consists of items that are presented as fact but fundamentally aren't. Now sometimes those things are mistakes that sort of take on a life of their own. And I think sometimes they are pushed out on purpose, but pushed out in good faith, even if that good faith is misguided. And for me, there's an important distinction to make here between misinformation and disinformation.

Disinformation, by the common definition as I understand it, is generally malicious and most often driven by a state or another organized actor. And I think from our experiences with this over the last year, yes, there are clearly instances of disinformation happening where you've got a state or organized actor pushing something for malicious purposes.

But actually, the bulk of what we're seeing, and the bulk of the most problematic things, don't come in that form. They are not driven by the Russians or whatever. They are things that have come up in the context of big issues or big topics. And in the context of COVID, through a lot of data work we've done in one or two situations, one of the things we've learned is that COVID misinformation really does tend to start in the US and spread around the world. There are isolated exceptions to that, and some quite interesting ones. But the COVID misinformation that many of us deal with has really been fueled by communities in the US, which then spread their misinformation into related communities in other countries and other languages, which then take it around the world.

Paul: And that's a very good explanation of the problem. So let's talk about some of the frameworks, some ways of thinking about how to understand it, how to explain it to clients, and how to decide on an appropriate action. For example, say a rumor was spreading in some Facebook groups that mouthwash could be an effective protection against contracting COVID, and you're dealing with an organization that has an interest in protecting public health. How do you get people's attention and deal with objections that maybe this is just some fringe crazies? You could use that example or just talk more generally. How do you quantify, understand, explain, and make decisions about what actions to take?

Marshall: Absolutely. For me, there are three important steps to take. There's been a huge amount of work, both among firms that help us understand data and among academics and others, to understand the problem of misinformation and indeed to provide insight for people like me into what's happening, how it's happening, why it's happening, and so forth.

But the thing that we really wanted to set out to do, in 2019, was to understand what we could actually do – what action we could take once we developed some understanding of a situational landscape or narrative like the one you were just describing around mouthwash. So for me, that takes us to the three steps.

The first step is, you've got to understand the narratives that are out there. Now, hopefully, for your business or your client base or whatever it might be, you're only dealing with one at a time. But sadly … And just to be clear about what I mean by narrative: your mouthwash example would be one narrative. So "mouthwash cures COVID-19", which, just to be clear, it doesn't. That's a misinformation narrative. Some of our clients have been dealing with multiple misinformation narratives: 10, 15, 20, 30, 50, 100. So the first step is to understand, as best you can, what narratives are out there, and then make an assessment of them. We've helped clients develop a framework for assessing the risk level of misinformation, and then we categorize it: high, medium, low. I can talk about the criteria for that in a bit if it's helpful. So the first step is assess. Understand the risk level.

The second step is take a strategic view. So the strategic analogy that we’ve worked on and I have to credit a colleague of mine, Preston Golson, for coming up with this, is that of handling a forest fire or a brush fire. Now hopefully, nobody on this call’s ever been in a forest fire or brush fire. I can tell you it’s not any fun. But the way you manage a forest fire is you make hard choices about which bits of the forest you’re going to try to save and which bits of the forest you have to let burn. And in some cases, we even talk about forest management where you choose to let a little bit of forest burn. And there might be some cases like that here as well.

So the idea is you’ve assessed. You understand what the high risks are and you say, “Okay, some of the forest we’re just going to choose to cordon off. To build a fire break around. Dig a ditch and say that bit of forest, we just have to let that go.” But then there’s some of the forest that we have to fight over. And understanding the narratives, the risk level of those narratives, tells you which bits to focus on.

And so then the third step is, okay, we’ve assessed to understand risk level. We’ve made decisions about which bits of the forest we want to try to save, which bits of the forest we’re going to let burn. And so the third step is to intervene and start trying to put the fire out in those bits of the forest where you need to intervene and try to do something. While at the same time – it’s a bit of a two-phase or two-track process – building those fire breaks to protect the forest where the fire isn’t raging yet.

I could be more specific about that with an example. To pick up on your mouthwash point, if you're a mouthwash company, then actually you might look at that and go, that's not a fire that we want to get involved with or can get involved with. There's nothing useful we can say. It's not reaching an audience that we are concerned about. No action. But the opposite is true if you are, for example, the WHO or a national health authority and millions of people are running out buying mouthwash. You've got to put that to rest. So there you've got to intervene, right?

Now we should talk about what intervention looks like, but you’ve got to intervene. Okay, you’ve made that strategic choice. Now you can work out how to shape the intervention.

Paul: And to complete the thought, if the WHO had been paying attention to this narrative in this situation, they would say, okay, there are certain online communities where we don't want to weigh in, but what we need to do is firewall that off, so we're not going to try to convince people in the mouthwash QAnon group. We are, however, going to proactively get some messages out in the media about this misconception that exists.

Marshall: It echoes a question we had in the chat about firebreaks: when and how do you build them? So let me talk first about what you do if you're the WHO in the mouthwash situation. Now, it's important to say there are no perfect solutions to this, but we've got to recognize a couple of things. The first is that misinformation tends to travel better, or more seamlessly, than facts do, right? We've all heard the saying that a lie goes around the world before the truth gets its pants on. So that's the first thing to recognize.

And then the other element is that misinformation often comes to us in more ways, more forms, than facts do. And that means that misinformation can appeal to us on an emotional or psychological level in a way that pure facts, pure rational messaging, often don't. Which means that in order to combat misinformation, one of the things you've got to think about is: how do I take my case to the world, or to the audience, to the group we're aiming at, in a way that appeals to them in more ways than just the rational? Because rational will lose to psychological and emotional every time. All right? So we've got to have more forms of intervention, more ways that speak to emotional and psychological cues.

So what does that look like in practice? Well, I'll give you a couple of examples. One is, you could get a credible third party to go out and talk about the contrary view: a spokesperson who is credible in their own right with the particular audience where you're trying to intervene. The second is, we need to learn from the academic literature about how to do rebuttal well. One of the things we know, for example, is that direct rebuttal doesn't work, and that we shouldn't be repeating back the misinformation or the false fact. So those are some of the dynamics we need to bring into play. There's lots more.

Paul: I think that's very useful insight. Can you elaborate just briefly on "direct rebuttal doesn't work"? Because what I would say in response is, "yes it does."

Marshall: Well, exactly, yeah. The perfect example of this is how the media organizations covered Trump, especially in the last year. They spent a lot of time investing in understanding how to deal with the constant flow of mistruth, right? And what you saw was that when they started to talk about something the President said, they said, "Trump said this false thing," and then they said what the false thing was.

So they've sort of pre-set the idea that what they're about to say is false, and that's the way in. If you just say, "Trump said whatever, and that's false," that construction actually serves, psychologically, to reinforce Trump's point rather than knock it down. Why that is, I don't know. Maybe there's a psychologist on the line who can tell us, but we know from lots of research that that's the case.

One of the most interesting confluences of disciplines at the moment is the confluence of behavioral science with communications, because there's so much that we as communicators, and I'll put myself at the top of the list in this instance, have been doing for years as received wisdom that is actually just dead wrong when you look at it through the lens of behavioral science. And so we can keep working on that list of things as we go.

And then, if it's okay, I want to spend one second on firebreaks. So far I've talked about putting out the fires. Now let's talk about firebreaks, or fire prevention.

That's about inoculating, a word chosen carefully, the audiences that might be susceptible against future misinformation. So that's about getting there first, ideally. Now, there's a bunch of stuff that's important if you're thinking about inoculation against misinformation. One is reputation. We know that there is a relationship between familiarity with an organization or a spokesperson and susceptibility to misinformation, which is to say, the more familiar people are with you, the less likely they are to believe the nonsense. We know that there's a relationship between how someone communicates and being part of a community: if you are seen as part of a community, you're more likely to be trusted and believed. So that's the value of third-party spokespeople. There are lots of other dynamics like that. But getting there first turns out to be the most important thing. So with a number of our clients, we're talking about the necessity of actually doing reputation or brand-building work with audiences who in the past might have been a secondary or tertiary focus for those businesses, but who now turn out to be important because they could be the next line of defense in the fight against misinformation.

Paul: So one of the things that we touched on is the idea that misinformation built around a kernel of truth is the most dangerous and difficult to deal with. This is where the spark of your forest fire is something with a kernel of truth that gets twisted or augmented to strengthen a false narrative. Why is that more dangerous, and how can you respond in those situations?

Marshall: It’s more dangerous because it makes the misinformation more credible. And it makes it harder to knock down because if you’re an intellectually honest actor or person, you have to acknowledge that there is that kernel of truth and the minute you’ve acknowledged the kernel of truth, the person who’s believed the misinformation goes, well if that’s true then it’s all true. Right?

And we see so many instances where the kernel of truth is the thing that makes a particular misinformation narrative believable and helps fuel its spread. There have been instances we've seen around the vaccine in Africa where there were deplorable historical precedents [involving medical experiments]. In colonial times, there were medical tests done on people. And I hate even talking about this, but it's true. It happened. It's in the historical record; there's no question about it. So now, when somebody says, "Oh yes, they're back to do more medical tests on us," the difficulty of confronting that is magnified hugely by that historical past. That's one kernel-of-truth example.

Another kernel-of-truth example: you could imagine a CEO or a business leader or organization leader saying something, particularly on video, that doesn't even have to be taken particularly out of context. It's an observation, or it's speaking to a different point or whatever, and suddenly that sound bite becomes the kernel of truth that fuels a whole other narrative of misinformation. We've seen quite a lot of that sort of thing happening, particularly during the COVID period. Those things are incredibly dangerous and hard to knock back, because they enhance the credibility of the misinformation.

Paul: So we've had two interesting questions come through, but before I come to those, one last one from me. It's about implementing that kind of forest fire strategy and finding those breaks. What are those breaks? Are they breaks between communities, or are they breaks between narratives? Like, there's a bit of truth over here, but we don't want people to believe this bit of falsity?

Marshall: They're typically not breaks between narratives. They're typically breaks between communities, but that's not always the case. And this is where, not wishing to make the webinar a commercial, getting some help can be really helpful, because it means sorting through how the conversations are organizing themselves and where the communities are. And by the way, one of the things to be on the lookout for: over the last many years in the world of digital and social media, we have kind of come to default to a view of, well, let me look at the conversation about my brand or my company or this issue. That doesn't help us here. We need a much bigger map to understand how our brand, or this issue, fits into a wider construct of communities. How does the mouthwash example, the story about mouthwash, fit into the community of people talking about Trump and vaccines? And politics in Africa, and what's happening on the right in Germany? If you don't understand those dynamics, if you're not looking at the big picture, then you can't understand where to put firebreaks.

And so actually, we need much bigger, more complex views of the conversation in data, which, actually, I might just use as a way to get into Toby's question. Toby asked how you can account for closed and dark social in the assessment phase that I talked about earlier. It's a vitally important question, and the answer is that there's no good answer. Some of the folks we've worked with over the past year on gathering data and assessing narratives and, indeed, communities make an effort to collect data from the dark web and dark social as best they can. You get whatever you can, you put it in the analysis, and you do the best you can with it. But the reality, of course, is that by definition dark social and walled gardens, like conversations in WhatsApp and so forth, are by and large closed off to us. So we have to gather whatever information we can and then do our best.

One of the ways we have been doing that, by the way, in instances where we're dealing with multinational companies or organizations, is by speaking to their people on the ground. I appreciate that this is anecdotal and slightly defies our increasing reliance on data and AI and complex mapping, but equally, "what have you seen on your WhatsApp today, and how many times?" is a perfectly valid question, and at the moment it might be the only way to get a sense of what's happening, particularly in an obscure market someplace where it's hard to gather data.

Paul: Yeah. I was also very interested in that question. I think that more anecdotal and pointed information is the most you can hope for with the dark web. If you looked inside our Spike Dashboard yesterday on GameStop, you could see articles per hour as they were being published and picking up engagement, the volume of Reddit commentary, and the most engaged posts within Reddit. But these are all very structured sources. The social network APIs that are public are quite structured; it's easy to quantify and see things. Once you get into message boards and the dark web, you're finding little slices and quotes and hints and bits and pieces. Probably the best thing you can do is that more high-context, hard-work kind of research.

And we have a question as well from William, and I'm hoping you might understand the Dominion issues a little better than I do. I've got a surface understanding of this, and I'd say you've got a little bit more interest in the topic as well. Where would the firebreaks have been when a big chunk of the country thought the election was being stolen? It's unclear how widespread that view was, but certainly polls indicate it was quite widespread.

Marshall: Yeah. And it's interesting. This is a classic case where you've got two complementary misinformation narratives at work. The overarching, broad misinformation narrative is that the election was stolen. And then within that, for those of you who don't know what William's alluding to, you've got this relatively over-the-top, crazy misinformation narrative that the Russians, it depends on which conspiracy theory you like, but the Russians or somebody else have corrupted Dominion's voting machines and essentially implanted election results in them that have then been added up.

The idea, and the challenge of it, is that Dominion makes voting machines for a number of states and, in particular, a number of the states that Trump was disputing. Now, I've just committed an interesting error there. I've repeated the misinformation back, although I think I said it was false off the top, so I don't know. We've got to do a better job of that.

Anyway, the interesting thing on Dominion. William's question was, what counter-strategy would I have recommended? I have to say, I've only been following this from afar, so there may be things about it that I'm not aware of or have missed. But I actually think Dominion's counter-strategy has been pretty effective. They have been running around using a legal strategy against the sources of the stories, basically saying: you have committed defamation, and the case that these various actors have committed defamation is pretty compelling; we're going to bankrupt you unless you admit that you've done that and apologize. And even some of the most loony right-wing publishers in the US have published things on their websites saying that the stuff aimed at Dominion was false.

So I think that's actually been pretty effective. If the other element of the question is what firebreaks you would have built if you were Dominion: look, we need to hold our hands up and say the firebreak strategy doesn't always work, because sometimes misinformation just happens to you. I don't think anybody could have reasonably predicted, a month out from the election, that anybody was going to come back and say, yes, Dominion voting machines have been corrupted by the Russians. That's not a reasonable thing to anticipate, so you can't inoculate against it. Other than doing what I think Dominion has done: building, with their key stakeholders, an incredibly robust reputation for building and delivering great quality products and being a good partner. That's about all you can do in terms of inoculation in that instance. Then, after being victimized by the narrative, you just have to go out and knock it down one bit at a time.

Paul: They did get out in front of it – the defamation suits certainly seem to be having an impact.

Marshall: They sued somebody on Friday for $1.3 billion who definitely doesn't have $1.3 billion. I couldn't remember who it was, but I'm really enjoying that. Yes, it was Rudy Giuliani. They sued Rudy Giuliani for $1.3 billion. I can't argue with that strategy, and part of me, as someone who loves the US, where I grew up, hopes that they are successful.

Paul: That's a high note to end on. We're over time, Marshall, and we didn't get to group susceptibility to misinformation and some other topics. Maybe we'll be talking about this again; the topic is not going away. Thanks for joining, and thanks to all of our guests for coming along for today's discussion. We'll be getting this up on video next week, so please share it with any colleagues you think might find some of this interesting or useful.

Marshall: Absolutely. My pleasure, Paul. Thanks for having me.