Our volatile, ever-evolving digital world is seeing a fascinating (yet highly dangerous) phenomenon: Deepfakes. Many of those responsible for disseminating insidious fake news have reached a new level in their efforts to distort reality. Thanks to them, you can now wake up to a video of yourself saying or doing something that never actually happened. 

At their core, deepfakes are a form of forgery: an audiovisual manipulation that can make any face appear to say or do anything. When this kind of deceptive content becomes indistinguishable from reality, it undermines the evidentiary foundations of the law, as well as our own instincts. What makes deepfakes more dangerous than garden-variety fake news is their biological leverage. Sight is human beings’ most important sense for interpreting reality, so when a fabrication is something we can see, we are far more likely to be fooled. In short, humans are naturally programmed to trust what we see.


The technological progress that enables these forgeries originally came from Hollywood. The same technological wonders that turned Brad Pitt into an old man in “The Curious Case of Benjamin Button” and transformed Sam Worthington into a Na’vi alien in the blockbuster “Avatar” also laid the groundwork that a Reddit user built on years later. This user shared code that allowed anyone to seamlessly superimpose one person’s face onto someone else’s body in a video.

The first high-profile victims of deepfakes were, somewhat ironically, Hollywood stars. Malicious parties disseminated deepfake content using famous actresses’ faces to porn sites, forgeries sophisticated enough that challenging their veracity proved very difficult. More recently, U.S. Speaker of the House of Representatives Nancy Pelosi was targeted by a misinformation attack featuring a doctored video that allegedly showed her inebriated during an interview. The video was subtle, technically advanced and designed specifically to damage her image, warning us all of deepfakes’ high threat potential.


Deepfakes have immeasurable potential to do damage. While it may be possible to convince many not to believe these deceptive videos in the long term, deepfakes’ damage can be extremely front-loaded, especially when used with strategic timing. In the political arena, a fake video can shift the tide of a tight election if it is released the day before a vote. In the corporate arena, a deepfake can cause a company’s stock to suddenly and strategically plummet. The reality is, viral online content moves faster than any of our response capabilities. 

The most vulnerable companies are those with prominent leaders connected to their organizations’ reputations. If a deepfake video is created to paint a well-known CEO as racist, misogynistic or intolerant, it could reach farther than any damage control efforts, leaving a lasting stain on both the individual and the company as a whole. Recently, Elon Musk tweeted that he had secured funding to take Tesla private, a claim that wrought havoc in the stock market and affected thousands of investors worldwide. The debacle ultimately cost Tesla and Musk $20 million each in fines. One might wonder, how would such a crisis play out if it were sparked by a fake video online?

Deepfakes’ unpredictable nature has led Facebook and Microsoft to launch a joint program in connection with the Partnership on Artificial Intelligence to determine the best way to identify and combat deepfakes. During the United States pre-election period, these technology powerhouses began developing countermeasures in response to public pressure. Additionally, there is a further, ethical layer to this issue: We must ask ourselves to what extent we share responsibility for asserting control over what content is published or shared.

Until sufficient countermeasures are found, deepfakes remain a constant threat, and organizations must be prepared to mitigate any damage caused by sudden misinformation campaigns. We must all be ready to respond with no warning. Recently, Hao Li, a computer science professor at the University of Southern California and a pioneer in deepfake research, delivered a disturbing prognosis during an interview with CNBC: in less than a year, any person will be able to create “perfectly real” videos and images, with no clear way to detect deepfakes. An early symptom of this trend can be seen in the Chinese face-swapping app Zao.

As is often the case, most companies’ first instinct is to fight fire with fire. Cybersecurity companies, such as Symantec, have explored the use of artificial intelligence to detect deepfake videos by reverse engineering their creation processes. We are also seeing the emergence of new startups, such as ZeroFOX, Truepic and Proofmade, all of which specialize in monitoring, verification and social media protection, with the goal of confirming that content is authentic. This burgeoning industry has expanded to the point where high-quality services can cost millions of dollars per year.

The recurring problem with deepfakes is that they continue to evolve, always staying one step ahead of the solution. So, what can we do?


Until recently, communication and crisis management teams had a simple mantra they used to fight against lies: Control the narrative before the narrative controls you. 

However, we can no longer rely on the ancient Aristotelian model of sender-message-receiver. This idea served as a guideline for communicating the sender’s message faithfully to the receiver, but in today’s world, it is no longer applicable. In our age of deepfakes, Aristotle’s model has finally crumbled. We must now ask ourselves how we will seize control of the narrative when the receiver will not necessarily be able to tell the sender from a deceiver.

Today’s corporate world is ill-prepared at best (and completely unprepared at worst) for these kinds of scenarios. Committees and political advisors, even those at the forefront of innovation, have barely scratched the surface of this topic. This complex and volatile field will serve as a harsh learning experience for anyone relying solely on pre-deepfake crisis management systems.

Under this new paradigm, our already VUCA context is taken to the extreme. As strategists, this forces us to develop new tools and methods to offset potential damage to companies, public figures and organizations. Below are some essential elements for an appropriate modern action plan:

  • Know Your Enemy. The first step is comprehension. How does fake news go viral? What policies do Facebook, Twitter and YouTube have in place to combat this? Can one directly request a video be removed from a platform? What are the legal implications of such a request? What audiences are most susceptible to this information?
  • Understand Your Weak Points. What personal or organizational areas could a deepfake news creator leverage to attack your reputation? Where would misinformation campaigns targeting these points originate? Is your reputation credible enough that you could believably deny any misrepresentations? A thorough self-diagnosis is essential for threat anticipation and prompt response.
  • Select a Team to Handle Crises. Forming a crisis management team is a basic, but indispensable, step. The company’s director of Communications must be in regular contact with its legal advisors, director of Operations, spokespeople and Social Media team to best expedite the decision-making process. Strategic guidelines and clear definitions for deepfake incidents will allow decision-makers to develop solutions tailored to the specific problem.
  • Establish Incident Channels. When doing damage control, the difference may come down to whether information was disseminated via your own channels or through an influential journalist. You must identify the most wide-reaching and efficient channels to use in case of negative virality.
  • Identify Contacts. In times of crisis, contacts may include an official social media account or the editors of a well-known media source. These parties can all play important roles in mitigating a deepfake’s negative impact. You must form connections with reliable contacts so that major media outlets can debunk fake stories quickly and minimize their spread.
  • Hire Digital Experts. A team knowledgeable on the subjects of technology and the digital world will be the strongest force against deepfakes, from noticing fake videos quickly to conclusively exposing the fabrication.
  • Provide Clear Communication. Providing templates for brand communication may sound routine, but in the midst of a crisis, every minute counts. Prepare communication materials that can be adapted and published quickly, serving as an essential component of the crisis management process. 
  • Cultivate a Digital Identity. Just like a good soccer strategy, the best tactics start with defense. Any company’s first line of defense is its investment in a solid digital identity, both for the company as a whole and its leadership. This can expedite recovery and minimize damage before the problem gets out of hand. Why? Because in an uncertain situation, a respected CEO with a reputation for transparency will be more able to debunk falsehoods. Furthermore, citizens, journalists and opinion leaders will be more likely to believe that CEO when they clarify the truth.

In today’s environment, companies must develop strategies to cope with this particularly insidious type of misinformation. The first step is to conduct an internal audit, which is necessary to fully understand the company’s capabilities and reach, then provide all spokespeople with specific crisis management training. We maintain, as always, that anticipation is key.

We must make a moral and ethical effort to fight against misinformation and the sea of lies, but in the near future, we must also prepare for the worst. Corporate communications will mean the difference between a temporary inconvenience and a reputation-destroying scandal.

Alejandro Romero 

Partner and CEO Americas

Fernando Arreaza

Senior Account Executive at LLYC USA