NEW YORK — Omnicom Public Relations Group has launched a cross-agency consultancy to help clients assess and mitigate the risks to brand reputation associated with the adoption of artificial intelligence-based technologies.

The AI Impact Group is led by Andrew Koneschusky, partner at Omnicom public affairs agency CLS Strategies. Its 10 core members – a roster that will flex depending on client assignments – also come from FleishmanHillard, Ketchum, research and language strategy specialists Maslansky + Partners, Porter Novelli and public affairs firm Vox Global.

The aim of the consultancy, believed to be an industry first, is to use research and advanced analytics to assess the communications challenges, brand and reputation vulnerabilities and other internal and external risks associated with a company’s adoption of AI, and then produce an “AI Roadmap” covering areas such as messaging and scenario planning.

Koneschusky told the Holmes Report: “We’re embarking on a brave new world. AI in and of itself is neutral: there are benefits, but there are also significant risks, and it’s a smart insurance policy to go through a risk assessment about how AI in the context of your company is perceived. You can’t erase risks, but with a proper plan and roadmap, and understanding the concerns of different audiences and stakeholders, we hope to help companies mitigate them.”

The idea for the group, he said, came from observing a pattern in how new machine learning and AI-based technologies, from voice and facial recognition to automated vehicles, were being introduced by companies and brands:

“I’ve been working on the reputational impact of emerging technology for several years, including issues around perception and regulation in the commercial drones sector. In the work that I and others across Omnicom were doing, we started seeing that the technology usually comes first, then regulations catch up eventually, but lagging far behind are perceptions of the public and key interest groups such as policy makers, activists, consumers and employees.”

Omnicom believes there is a clear business case for the group’s inception: according to the International Data Corporation (IDC), worldwide spending on AI will reach $57.6 billion by 2021, up from $12 billion in 2017; and in a November 2017 report on AI-enabled automation, Forrester Research recommended that companies invest in change management and PR to deal with the adoption of AI.

Koneschusky said: “Investment in AI will quadruple in the next four to five years, so there is a tremendous need and opportunity to think about communicating effectively on the front end to pave the way for the introduction of technology. Companies that do that will have a greater likelihood of realising the benefits of AI, while those that don’t could be subject to backlash, for instance if jobs are lost, customers are frustrated, or AI is implemented poorly or not communicated appropriately.”

The AI Impact Group has already released its first piece of research, the AI Risk Index, which scores the reputational risk of industries and companies adopting AI. The inaugural index is focused on the retail, manufacturing and transportation industries, with other industries, most likely healthcare, to be added in future.

The study quantifies a brand’s risk based on its positioning around AI and the perceptions of consumers, industry employees, policymakers, activists and industry analysts. The research shows that AI poses serious risks for all three industries studied, with even the most forward-thinking companies still having work to do in terms of preparedness.

Koneschusky said the group had anticipated scores would not be high, but was surprised by the extent of the variations in preparedness within industries: “Even the technology companies are unprepared. Some companies are doing better than others, but no-one is knocking it out of the park.”

He added that four clear reputational risks emerged, based on the most common concerns of the stakeholders the team talked to: “A lot has been written about AI and robots taking jobs. Our research shows that’s definitely the top concern, but number two is hacking and data security. Fears around the safety of AI-powered machines such as driverless vehicles, and wider ethical issues such as algorithm bias, also rose to the top.

“Silicon Valley may be hiring ethicists to look at big questions around AI, but no-one has all the answers. We’ll track the results as we repeat the research, and we’re excited to roll up our sleeves and start digging into some of the unique challenges that companies are facing.”