WASHINGTON, DC—With 40 national elections taking place in 2024, the threat of false narratives created by artificial intelligence has never been more serious, The Atlantic chief executive Nick Thompson told a PRovoke Global audience in Washington, DC, today.

“One of the most important priorities for all of us over the next year will be not to buy into AI-generated disinformation narratives,” said Thompson, who appeared in conversation with BCW chief executive Corey duBrowa at a session focused on The Future of AI in Communications.

“We are not a breaking news organization, so we don’t have to take an immediate position on issues like who bombed the hospital in Gaza, which is the greatest threat,” Thompson said. “But we are very aware of the risk of AI-generated fake news and deepfake video. And we also have to be aware—as we have seen in Gaza—that you will have true narratives that people won’t believe because they say they are generated by AI.”

While AI is enabling false narratives, it will also give communicators the tools to address them, duBrowa said, citing a new tool developed by the agency that uses cognitive AI to evaluate disinformation threats and generative AI to message against them.

So while AI will create new kinds of disinformation—“the internet is becoming a low-trust environment, and that makes every single transaction harder,” Thompson said—there is also the potential for AI to be part of the solution, identifying and addressing disinformation.

The conversation captured both Thompson’s enthusiasm for the way AI will impact and transform communications and journalism, and the concerns that derive from uncertainty about the development of any new technology.

Thompson began by talking about the way in which he was using AI in his personal writing—creating a book for his children about the “animal World Cup” and having AI write theme tunes for the animals—and using “prompt engineering” to improve his writing.

“You should be using it every day,” Thompson told the audience. “I am very confident it is going to be important to all of us, and I tell my people, you don’t have to use it in your work, but please just learn.”

The Atlantic, he said, had built a number of tools that use AI on the commercial side of the business: “We are building a tool to make sure all of our sponsored content is FTC compliant.” The company has been more conservative with its journalism, however: “On the editorial side, if The Atlantic were to write a story that pulled a quote from OpenAI there would be controversy.”

The discussion also turned to the importance of transparency in the use of AI: Thompson is insistent that companies, including public relations agencies, need to be transparent whenever AI has been used to generate content, but he is also skeptical of claims that the use of The Atlantic’s content by AI is a threat to traditional publishing.

“The Atlantic is the 21st most commonly cited source in Google,” he said. “My book is in the database. My desired outcome would be to have a couple of trusted partners we work with, but I don’t want that antagonistic dynamic where we are accusing people of stealing our content. I understand that there’s a lot of money at stake. We are going to lose money because of this. That doesn’t give us a right to an equivalent amount of money. Change happens.

“There were parts of the publishing industry that got mad at Google for stealing our traffic and wanted compensation for that. There are people who say Google stole our ad money. My reaction to that is, you’re saying they out-competed us.”

Speaking just days after the White House issued an executive order on AI, Thompson said he thought it was important that government study the way AI impacts employment and disinformation. “We need to identify the most important issues in AI and understand the impacts,” he said. “That will have a relatively limited impact, but it’s unambiguously good.”

On the other hand, he expressed concern about using the Defense Production Act to require companies building especially large, widely applicable AI models to disclose details to the government about their training and safety testing process—and to use “red teaming,” a process that involves companies attempting to “hack” their own tools to understand security risks.

“That’s probably bad, because it will mean anyone who wants to do this will need to have a million lawyers, which is in the interest of the very large tech companies that were at those meetings in the White House, and will lock out challengers. On the one hand you should ‘red team’ these models, but at the same time you should have a truly competitive market.”