As countless research studies and considerable coverage have made clear, the rise of generative AI carries significant implications for corporate communicators. And yet, it is sometimes hard to avoid the feeling that any embrace of these new technologies remains uneasy in the public relations world, where disruption is not always welcomed with open arms.

Last year's Asia-Pacific Comms Index, for example, raised significant concerns about AI usage and familiarity among the region's comms leaders. Our Davos Roundtable earlier this year, furthermore, did not exactly dispel the notion that adoption is rather more halting than you could be forgiven for thinking.

For corporate communicators, it would appear that AI's ramifications will be widespread — in two distinct ways. First, in terms of the craft, and how these platforms can automate many of the tasks that make up day-to-day comms work. Second, and no less important, are the profound questions raised about public trust and societal norms, which have crucial implications for organizational responsibility across such areas as ethics, bias, privacy and the very culture of work.

To examine these issues in further detail, PRovoke Media partnered with Ruder Finn Asia and APACD to convene a Roundtable of marketing and communications leaders who are already relatively immersed in gen AI. The conversation, which provides numerous insights into the opportunities and threats posed by AI, has been edited for length and clarity.

Participants

  • David Ko, managing director, RFI Asia

  • Lauren Myers-Cavanagh, senior communications director, Asia, Microsoft

  • Pia Tyagi, senior regional director, communications and sustainability, APAC, Shiseido

  • Ramya Chandrasekaran, chief communications officer, QI Group

  • Sanjay Nair, global AI lead, BCG

  • Shruti Gupta, head of marketing, HyperGAI

  • Arun Sudhaman, editor-in-chief, PRovoke Media (moderator)

Progress and potential

"The question that I grapple with is how do I truly stay ahead of the curve" — Shruti Gupta, HyperGAI

Each of our panelists began by describing how much progress they have made in integrating AI into their work, and what they have learned about its capabilities and limitations as they relate to communications teams. The answers were eye-opening, not least in terms of the roles that are already being replaced by AI, but also the questions that our participants are asking themselves about their value and ability to stay ahead of the curve.

Lauren Myers-Cavanagh (LMC): [Microsoft] were kind of customer zero — we made a big investment in OpenAI. And so, whether folks liked it or not, there was a kind of experimental mindset. I think what you saw very early on were the immediate adopters who just threw themselves in at the deep end. Then I would say the majority sort of dipped their toe in — "what is this, look at its capabilities". And I think people thought that maybe it wasn't going to be good for this industry. I think the company did some interesting nudges to help people along on the change curve. What was quite exciting, well before Copilot was available, was that the Microsoft comms team was building a dedicated communications Copilot at the back end that we were customer zero for. For example, "let's ask for the 10 most likely questions that Kara Swisher's going to ask Satya Nadella." Sure enough, it spits out very Kara Swisher-sounding questions on highly relevant topics that she would 100% ask Satya in a real interview situation. I am not that easily convinced, but I was like, "okay, if this is what it can do."

Fast forward to the back end of last year — Copilot general availability. We already have a kind of SWAT team internally who have been deployed to look at how we reinvent comms in the age of AI. They've been busy looking at which third-party providers we can partner with to build our own tech stack. I think the working hypothesis at the outset was 'how do we go from hindsight to foresight in how we work, how we measure, how we do analytics, and what are the feedback loops?' We've announced an exciting partnership with Meltwater. What's been super insightful is getting in front of groups like this and in-house comms teams, primarily talking about how they're thinking through some of these challenges, what they think the use cases are, and how we might be able to help them based on our own learnings, but also based on the capabilities of Copilot.

Pia Tyagi (PT): We are at that stage where we are still figuring out what this is. There's definitely a lot of curiosity about how this works and what it can do for us. But then in many organizations, there is definitely that issue of trust, as well as data protection and privacy. Obviously, we have that too, and you experiment in your own personal time while the company tries to figure out what needs to be done. We are talking to startups in this area, to see what are the capabilities, what sort of differentiation can they offer us?

One interesting example that I want to mention is that two years back we launched a virtual influencer called Shi, with the idea of representing Shiseido as a group. I think the challenge that we faced was what does she look like? It was sort of like a 2D persona. We started talking to some of these companies about evolving her into someone who looks more real-life. But that challenge is still there. There are cases like that where AI has been used in beauty for a while, whether it's virtual try-ons or chatbots and online beauty consultants. I think the next evolution is what we are seeing on how to be really hyper-personalised. It's marketing, communications, even data processes or finance teams — how do we kind of bring it together to pilot, start small and scale it up?

Ramya Chandrasekaran (RC): The timing of this is quite interesting — I was just asked to make a presentation to the board about how my team is using AI, talking about the many different use cases. And I realised that our IT department is barely scratching the surface compared to where we are at right now. I'll try and give you some examples of what we do.
The most obvious is content generation. We're all using it. ChatGPT kind of changed the game for content in terms of text. Then there's a whole bunch of visual stuff that we are using: text to video, text to images, image enhancements, video editing. We used to spend a lot of money on voiceovers for videos, and now we're using AI-generated voices. I feel kind of bad for all of the freelancers we were using, but we save so much money now on AI-generated voiceovers. In addition to that, we now have AI-generated avatars for internal communications that are doing monthly roundups of news. A lot of translation for our videos is now done using HeyGen's multilingual model. We have a text-to-speech component incorporated into our blog. Customers are spending more time listening to our content as opposed to reading.

The most interesting thing that we launched recently is an AI chatbot on our blog, which has actually reduced the workload of our customer support team because the chatbot is learning. We had to train the chatbot on a lot of data. Initially it was spewing out some really strange answers, but we spent some time training it before we made it live. It's great for customer engagement. We also use it for analytics of large data sets, to try and identify trends. So, there are tons of use cases. I actually went to the board with a presentation about a year ago and I said, here are some ways we can save money and save resources. That's all they wanted to hear: save money, save resources. And I said, this is the budget I need. And they happily handed it to me and I was lucky. Thanks to that, we have been able to save money.

"A lot of the jobs or roles within the agency will change" - David Ko, RFI Asia

David Ko (DK): I would say that our attitude towards AI is a mix of enthusiasm and trepidation. We basically stopped using real people to do voiceovers, probably last year. We recently submitted a proposal to a client, a major bank, where we are going to be using talent who are not real people, but they are not positioned as virtual humans in the campaign. You can actually generate faces now of people who do not exist. What we do is take a made-up human, and there's a casting process. We have a hundred faces for the clients to choose from, and we take the talent they choose and create all of the visuals for that campaign. Obviously, you can pose them anywhere you want. You have complete creative freedom. And so you can imagine the amount of money that saves for the client. I feel bad for the actual people who work as models, because this may actually spell disaster for that industry. But it is what clients want. I think the interesting thing about that whole concept is we are not standing up and shouting to the world that this is an AI-infused campaign. If you don't watch the disclaimers carefully, you don't even know that this model doesn't exist.

For the agency world, what makes us nervous — we have the understanding that a lot of our jobs on the agency side will be disrupted or have been disrupted by this. We need to basically evolve by going back to the drawing board and imagining what an agency of the future looks like and build ourselves towards that while we're maintaining our current revenue model. So we're kind of building the plane as we're flying it and continuing to make money in the meantime. But with the understanding that a lot of the jobs or roles within the agency will change. For example, we no longer employ copywriters. We pivoted all of our copywriters to content specialists. To be super honest, I think some people that could not make the transition had to get off the bus. I think that's just a fact of life now. From the agency's perspective, I think we're disrupting ourselves much faster because the clients demand it.

Sanjay Nair (SN): Personally, the question that always keeps me awake is 'what is the value I add to the job that I do?' The boundaries of that are getting pushed as the boundaries of what the technology can do expand exponentially. I personally use AI for everything I do. I just want to know what are the limits of ChatGPT or Claude or Gemini or DALL-E or Midjourney — where do I add value? If you don't ask that question individually, we can't answer that question as an organization, as a society, and so on and so forth. At the end of the day, if you lose value as a human, this whole notion that this is an augmentation technology is not going to last for 30 years. In five years, that augmentation will shrink to the point where the human value to the augmentation will be this much, unless you pivot and move up the value chain. I just want to know how long I can stay relevant as a professional, because I grew up as a consultant, and for me it's important to know where I add value.

So how can you push the envelope of innovation? One of the things I'm personally very excited about is this new conversational chatbot we've come up with, which is called Gene. We've trained Gene on our own data — we're not infringing on any copyright — and we've trained Gene on our consultants and their knowledge. Now we are using Gene as an anchor or co-anchor on podcasts. Gene is both interviewing our experts and being the interviewee for conversations. That just shows you how much potential this technology has in terms of replacing or complementing humans. Personally, I am super excited because I am so much more efficient, so much more productive, so much smarter than I ever was, because I have these tools available to me. At some point I'll be scared, but right now it's the enthusiasm space.

As an organization, what we are thinking about is not use cases. We are looking at workflows and reshaping the entire workflow. So if it's marketing, you break it down into measurement: you look at data collection, you look at what you are doing with analytics and what you are doing with visualization. You automate the whole thing. If you're looking at content, doing more with less is just table stakes — how do you personalize, how do you localize? When you think of strategy and ideas, there is no way you can compete with someone who's using generative AI tools for brainstorming, for coming up with mock ideas and testing those ideas. So if somebody is saying, "We don't use it yet, we don't feel the need," there is no way they can compete with those who are, so people will be forced to use it if they're not already.

Shruti Gupta (SG): I really resonate with what you said about "how am I adding value". I think working in a gen AI company takes that conversation several steps further. The question that I grapple with is how do I truly stay ahead of the curve. As a company, how do we stay ahead of the curve? And as a marketing professional in a gen AI company, how do I stay ahead of that curve? Can HyperGAI, as a company, go ahead and improve the foundational model of this ecosystem? If you look at the ecosystem, it's an inverted pyramid. There is a foundational model. Then there are platforms, and then there are applications which are built on the platform. Most of the time you see technology companies working at the application level — a vertical chatbot, or a vertical text-to-image or text-to-audio or text-to-video product. But there are very few companies who work at the platform level. And there are even fewer companies working on the foundational model, because it requires computing power, more money, more people, more resources. Microsoft is one of those; Cohere, Stability, Google. And these trillion-dollar companies have great commercial interest at the center of it.

As a foundational LLM company ourselves, our question then becomes: are we doing anything different from the trillion-dollar companies that exist in the market? It's a very existential question for a marketer like me — how to really stay ahead of the curve. We have a product called HyperBooth, which we launched three weeks ago. It's a text-to-image generator, and there are many text-to-image generators. Most of these models, when we began, required 10 input images to train the model. What HyperBooth currently does is require only one input image, because we train the model that way, in 30 seconds, which requires fundamental process improvement. So this is one of the ways that we are trying to do our bit in staying ahead of the curve, or doing slightly better things than the others. But I think the question still remains, as a marketer: how superficial can I afford to be about using gen AI? Especially with the pace of change happening in the industry, it's crazy trying to keep up.

Fear and loathing

"How do you justify your existence anymore?" — Sanjay Nair, BCG

Despite the enthusiasm and immersion demonstrated by our Roundtable participants, there was considerable sympathy for the sense of caution, and even intimidation, that characterises many people's interaction with these new technologies. According to this group at least, it is probably no exaggeration to describe these concerns as "existential", given how they strike at the fundamental value that communicators and marketers have provided to organizations for decades. Meanwhile, their comments also revealed issues around copyright, disclosure, ethics, organizational inertia and cost — none of which are necessarily easy to solve.

SG: Legacy marketers become legacy marketers because they spent years perfecting the craft, and suddenly there is this gen AI model that's spewing out perfectly crafted words and demonstrating sociocultural understanding of the market. That's pretty intimidating to a lot of legacy marketers. I think that's the biggest fear that people have — 'what real value do I bring to the table now?' If somebody else can write, if somebody else can figure out the tone of a certain journalist, then what do you really bring to the table to stay relevant, to stay differentiated, to stay unique? How do you justify your existence anymore?

DK: I think that the content generation part is where people fear for their jobs. That is something that's very real and we do hear a lot about it. On top of that is the 'ick factor'. For example, we deploy Copilot across Ruder Finn. In the training, there was one example where I was demonstrating Copilot and asked it to summarize all of my conversations with a certain colleague at Ruder Finn, and we were in this Teams call with 40 people. And my colleague was like, "I feel very violated." And I said, "I sympathize, but imagine how much time this is saving you." That's one resistance factor — people are coming to terms with how this reshapes the way that information is shared across the enterprise.

The other 'ick factor' is — one of my clients is a major whiskey brand, and they have these brand ambassadors (BAs) across the world. The brand ambassadors' time is very precious, so we cannot always ask them to do a photo shoot for a social media campaign. We proposed to the client that we just use AI-generated images of them, so they don't have to do the shoot. So we have this BA in Hong Kong, and we showed him pictures of himself in Tuscany, on a terrace, drinking whiskey. And he was pretty offended. He said, "I don't like this." So I think it's an ongoing education process, where eventually they have to realize that we're not replacing them — we're reinforcing their position as BAs while giving them a lot more ways to express themselves in association with the brand.

We also, as practitioners, have to be careful with a lot of these issues around copyright and disclosure. When do we disclose that we're using AI-generated visuals? If a photo goes out with a press release, do we tell people that it was AI-generated or manipulated or photoshopped or whatever? My answer is that it depends on two things: the intent of the sender and the expectation of the receiver. I think in an editorial situation, you have to disclose. Either you disclose or you just flat-out have a guideline of not manipulating images. I feel like the pendulum is going to swing in the other direction now, where flaws in the visuals are going to be celebrated as signs of authenticity. Don't remove that lamp post in the background — keep it there and tell the world that this image was not manipulated.

SN: If you are clear about how you are growing with gen AI — enhancing the value you bring to your customer and improving the quality of work that your teams do — rather than talking about efficiency and the number of cuts you're going to make, I think you will have more people inclined to embrace it. Plus, there's also this notion of not using traditional training formulas, because those don't work. It's a culture of experimentation that you have to encourage among teams, because how marketing uses it is going to be different from how HR uses it.

No other technology has moved at the speed at which this is moving. So, to have some sort of resistance is absolutely natural. Within 12 months of ChatGPT, we are already talking about company-wide deployment. That's where leadership needs to come in and be clear about why they're doing what they're doing and how it benefits the employees and the teams, as much as the business and organization. My gut feel is that it's still the organization taking longer to embrace this than the people, because the organization hasn't figured out the most responsible way of deploying it. They haven't figured out which LLM platform to invest in, how to build their proprietary database and how to train it. There's a huge cost component to this. Productivity gains are good, but they're not quantified yet — so how am I going to justify it? So organizations are thinking it through.

"If something goes wrong, then we have to bear the brunt" - Pia Tyagi, Shiseido 

PT: Especially when you have large, complex organizations, you have multiple approval processes. Suddenly you have your legal teams, your ethics and compliance teams, weighing in on this. There's definitely appetite. If I look at my function, generally what I see is that in in-house roles, people tend to operate in leaner teams. So the whole question of whether it's going to impact my role, my team — maybe not. Whether it's going to drive efficiencies — absolutely. Whether it's going to save costs — absolutely. We can't wait to get our hands on it, but we can only move as fast as the decision-making. I don't think communications will decide what that looks like, but if something goes wrong, then we have to bear the brunt.

LMC: I wouldn't in any way overestimate the degree of sophistication at the board level or at the C-suite level for most multinationals. There's often a mismatch between the board and C-suite and the rest of the org. The budgets don't always talk to each other. Organizational readiness is almost a late-stage concern when there's mismatch and misalignment at all other levels of organizations. I think what it comes back to is really being clear and explicit with your people. What is our purpose, what are the values and what is the culture that underpins who we are? And something I give a lot of credit to — my organization under Frank Shaw's leadership, at the very outset of all of this, said 'pause: what are we here for? Who are we? What mustn't we lose as we go along this journey?'

That culture of experimentation was already really hardwired in an organization like ours, and in a function like communications. The team had been early to a lot of the nascent capabilities of the internet, really early to blogging and owned content, really early to embed a crisis communications function in our organization. In this moment, what is it that we're recruiting for? It's so foundationally about culture and values and mindsets — are you hardwired to be a diplomat, to find alignment and misalignment in organizations and bring people together to build a consensus? Because if we're all going to be augmented and amplified and finding productivity hacks in the way we work, what is it that is fundamentally human about the work that we still need to do to get jobs done?

RC: I have an example to share. My team manages the majority of website content. At one point we decided we wanted a separate FAQ section on the website. We went to the IT team — they told us it was probably going to take three months. So one of our guys went into ChatGPT, asked it to generate the code for a simple FAQ page, used that code, found a way to get into the website and built it. It took all of 45 minutes for us to build an FAQ page without the help of IT. And when they found out we'd done this, there was shock on everybody's faces, and then instantly the insecurity came in. So there's also a skills gap within tech teams, I'm realizing. They're not ready for the AI revolution, which is also one of the reasons why the comms and marketing teams are way ahead. I think we're just naturally curious people and we want to experiment all the time. My message to my team, especially because I got lucky and got this budget approved, is: experiment, experiment, experiment. Some things have worked, some haven't, but it's really about the culture that you try to propagate.

DK: What we have found is that clients themselves are not officially deploying it in their work, but they are very actively studying it. The expectation is that within a very short time — maybe a year, maybe less — once they get over that initial hump of resistance, they're going to look to their agencies to ramp up very quickly. We're kind of paddling very busily under the water to get ourselves fluent, so that we're ready when the demand comes. Although a lot of people are not officially using it in their day-to-day workflow, they're using it.

Surviving and thriving

"Job satisfaction has got to be at the core of it. How are we fundamentally resetting expectations around work?" - Lauren Myers-Cavanagh, Microsoft

According to CommsIndex respondents, two benefits stand out in terms of AI tools — productivity and cost savings. Both ranked far ahead of such areas as knowledge sharing, messaging consistency, teamwork, collaboration, and content accuracy. Which, based on our panellists' comments so far, raises the question — are comms leaders misunderstanding the benefits that AI can bring? And even if they are, what does the rise of AI actually mean in a workplace that is often defined by 'shallow tasks' and endless meetings?

RC: I was talking earlier about how I managed to convince my board to give me money, which was really by pointing out the long-term cost savings and the manpower optimization. But not everybody understands it. How do you quantify productivity? Until there's a way to measure that, it's going to be a bit tricky. What I'm finding now is that, because the routine tasks are all being automated, the humans involved need to make a more strategic shift in their mindset.

SN: The definition of smart has changed. When you look at talent and you think about who is smart, there's a traditional definition — IQ, EQ, the ability to produce this and that. What is changing is the ability of a person to use these tools as superpowers to create better impact than others. That's what companies will begin to look for. And the other thing is the element of imagination. We can't be limited by what is here and now. It's the ability of an individual, a leader, an organization to imagine what is possible and then drive towards it. You asked the question about the agency model — if your clients are not asking, they will. It's just inevitable. It's just a matter of time, because they will deploy it in-house. And when they do it in-house, they'll expect the agency to perform in the same way. And if the agency is not ready, somebody else who is ready will outcompete you, and so on and so forth.

But then the agency needs to imagine: what is the value I bring to the table? Because there is no substitute for expertise. There is still no substitute for experience in that sense. But if you wait for the call to come from the client to start thinking, it's already too late. You have to imagine what that is and start working towards it. As an individual, I just feel like you reassure people that this is not about displacing you or finding efficiency. It's more about empowering you with these tools so you can be much better than you ever were. You'll find people embracing this. And then you'll find some are smarter than others in using them — which has always been the case — and they'll advance faster. And that's just natural.

PT: You make a business case to your stakeholders, and it's the language of the business that's tangible. Cost optimization, productivity, efficiency — that's what your business stakeholders understand. Whereas you, as a comms professional, recognize that it enhances your creativity and the quality of the content — it's more qualitative than quantitative. So I wonder if that has an impact on the responses people give as well: framing things in terms the business stakeholders understand, to really drive that adoption of gen AI.

SN: I think companies should use joy as a differentiator. So instead of saying 'you'll be more efficient', say 'you'll enjoy your work more'. People will be more receptive to using these tools in that way. I think there is a lot of nuance when you try to quantify things versus, to Lauren's point, speaking like a human. Explain that this is good for you. You will be able to do stuff that you've never been able to do, and you'll enjoy that process. You'll create better work for your clients and for the organization. You'll create better impact. But let's stop talking about 'I'm going to cut my costs by 20%'. That is good for the boardroom, but it's not very motivational.

"I think companies should use joy as a differentiator. Let's stop talking about, I'm going to cut my costs" - Sanjay Nair, BCG

LMC: There's no question — job satisfaction has got to be at the core of it. But to me, the elephant in the room, the productivity puzzle, is really fundamental. There hasn't been a way to quantify knowledge work in any meaningful way that compares apples to apples. This is foundational. I think what is deeply uncomfortable about a lot of the discussion around productivity is exactly what you're getting at. What do you do with the freed-up hours at various levels of the organization? Is it just do more and keep churning with your content engines? And let's not forget, we live in an algorithmically distributed and amplified world. More content... where does it all go in the end? Organizations are going to have to confront this sooner or later. How are we fundamentally resetting expectations around work? Because when I think about the various jobs I've had over the last decade, the days I've spent living in Slack or Yammer or Teams or whatever internal communications platform... it's incredibly shallow work. And the amount of time we spend talking about work at work I don't think has gone away. I think these two trends are colliding at the same time, and it's pretty existential.

SG: I feel like there is a very strange shift that has happened across organizations where suddenly, because of gen AI, you're supposed to be more strategic in everything that you do. There are strategic expectations of an intern. There are strategic expectations of a temp worker as well, because the times have changed. Now the expectation is, "Hey, you could do research using ChatGPT. You could use Scholar GPT to read 10 journals that cover this topic, and you should have done that before you even suggest an idea like that." It puts tremendous pressure on people to perform, find their value, and continuously deliver on everybody's heightened expectations. It all boils down to how you define joy, how you define satisfaction, how you remain human in a more AI-connected world. It is really existential in nature from that point of view. How do you survive? How do you thrive?

DK: The elephant in the room that people don't always talk about is that not everyone gets to go on the bus. I think that's the scary part. There is more demand now for a shift towards more strategic thinking, because a lot of the work that used to be done by people who didn't need to be that strategic is now being automated. So people are expected to shift up, but not everyone is equipped, or has the experience or the aptitude, to actually do that. What worries me constantly is: what are we doing? I think we're going to be okay. This whole generation is going to be fine. But what's going to happen to that next wave of people coming after us? I don't have an answer. If we're shifting into the dark side of AI right now, that's one question that bugs me.

The other thing is the dead internet theory, where people are saying that, with the advent of bots and AI, a lot of the human interaction and engagement on the internet — whether it's on Reddit or social media or wherever — isn't actually people. They're just bots. Some estimates suggest that 70% of the seemingly human-to-human interaction you see on the internet is actually bots. And that is another very scary thing, because what does that mean for human discourse? What does that mean for politics? What does that mean for global stability? That's the dark side, that's the scary part. And I don't know that we have an answer.

Trust and backlash

"There is definitely a backlash. Companies are starting to say it's too hyped up" - David Ko, RFI Asia

The shift in tone in the conversation towards some of the more existential threats posed by AI might surprise those of us who are more accustomed to the undercurrent of hype that often accompanies the rise of a new technology. But these questions, according to the Roundtable participants, are incredibly important to ask — particularly in terms of how gen AI will impact public trust and societal norms.

The social media era does not suggest that we can necessarily trust people to use these tools responsibly, but there is little doubt that communicators will be front and centre when it comes to these concerns. Meanwhile, it seems increasingly clear that an 'AI backlash', whether in response to hype, or to economic displacement, will only serve to complicate matters further. 

SN: I'm an optimist, so I'll always look at things from a positive point of view. I think the train left the station as soon as ChatGPT came out. There's no going back. At the same time, you cannot improve something you do not use and are not a part of. So you need to be part of the change. But each individual also needs to start asking the right questions and challenging some of the things they are being told. I think a lot of tech companies are doing the right thing in talking about how you can make sure that these LLMs are not going rogue. But, as a society, what are the checks and balances? The EU AI Act puts some guardrails and boundaries in place in terms of what companies can and cannot do. More is required for sure, because right now we are only talking about AI, but we are five years away from quantum. You combine quantum and AI together, and you're talking about a different level of complexity that is going to be unleashed on us. There are a lot of questions to be asked. You can't blindly listen to all the positivity. Everybody talks about, "Oh, AI can address climate change." In the past two years, the single biggest contributor to climate catastrophe has been AI.

Governments need to start acting and putting protections in place. How do you hold companies responsible? How do you hold your own teams and your own leaders responsible? How you hold yourself responsible will determine how this pans out.

LMC: In the spirit of talking about elephants in the room, I think you have to add nation state actors into the mix. It's easy for us to sit around talking about companies and individuals and organisations. But you have nation states who are prepared and deeply funded to operate at scale, in ways that are very, very difficult to disrupt.

RC: We need to come up with ethical AI policies and frameworks. That becomes really important. And it can't be dictated by one person. It has to be multifunctional. You've got to have legal, ethics and marketing people, because AI is impacting everybody. I don't think it's the perfect solution, but I think it helps to some extent.

AS: Are any of you starting to see a backlash? We have agencies that have banned the use of content produced by AI already. And, indeed, that can offer a point of differentiation. 

DK: There is definitely a backlash. I am seeing it in conversations with brands where they're starting to say, "I don't want an AI-led campaign. What we want is a clever way of doing what we're doing better, but we don't necessarily have to tell the world that we're using AI for it." I feel like people are now saying, we love AI, we think it's great, we think it's going to make a difference. Show me how it works as a tool instead of a hype engine for marketing campaigns. I think that's where it's shifted.

PT: Wearing a skeptical hat, and it's something that has come up in our conversation, is it the next metaverse? Also, as an organization, we are made up of different brands. Each brand has its own way of working and its own brand voice. So there is that skepticism as well. What can AI really do to make us differentiate and stand out?

SN: At the end of the day, it's not about using AI for the sake of using AI, or just for cost-benefit analysis. It's about the value you bring to the consumer that you're trying to solve for. The overall notion of backlash is slightly different. Everybody very casually cites the IMF estimate that 40% of jobs will be displaced. But what happens when there is so much displacement in white-collar jobs? Does it not affect people's purchasing power? Does it not affect demand for the products that you're selling? Does it not eventually affect the revenues you make? From an economist's perspective, any massive displacement of any segment of society in that way is not going to be good for anybody, whether it's business or government. So we need to do this in a responsible, measured way where we take everyone along, rather than just thinking, "I can make an additional $10 and impress my shareholders." You can improve efficiency or you can improve effectiveness. The more right choices leaders make, the better outcomes we can move towards as a society. I don't know what will happen, but I do foresee a lot of backlash coming.

"We need to come up with ethical AI policies and frameworks" — Ramya Chandrasekaran, QI Group

LMC: So I've now had the distinct privilege of being in two companies where journalists have a love-hate relationship with us. I had it at Twitter and I have it now at Microsoft. I think the filter through which the hype cycle has been driven and most of the public conversation has been shaped is on the back of what journalists and other influential voices have been saying about it. We can't discount the crisis that the media has found itself in, now exacerbated even further by the existential crisis posed by AI. "What is this going to mean for my newsroom? We're already in absolute trouble financially, newsrooms have shrunk and the distribution channels have narrowed." And so I think the fact that this hype cycle has been driven with that backdrop is super important. And I don't see that narrative shifting just yet. Because in most cases, newsrooms have not adopted this wholesale.

RC: Within my own team, I'm realizing that sometimes over-reliance on AI is also a bad thing. Because I pushed everybody to experiment, I suddenly realized that all of the content being churned out was starting to lose a lot of its authenticity. When it comes to creativity, you still need that deep contextual understanding that comes from human emotional intelligence. AI can't always effectively translate that, so there's that risk as well.

The other part is AI-generated misinformation, which is what everybody's afraid of, especially with the rise of deepfakes and voice cloning and everything else that's out there. I personally know people who've lost money to voice cloning because they thought they were talking to somebody they knew. So with deepfakes in an election year, there is obviously reason to be cautious. That's probably where a lot of the backlash and resistance is coming from.

SG: AI is definitely not hype anymore. I was working with OkCupid and I launched OkCupid in India and South East Asia. We used to have these deep brainstorming discussions with our technology team on how we could positively impact the culture of a country to become more progressive, to adopt a slightly better view of feminism, to be more open to online dating. For example, in India, so many marriages are still arranged marriages. There is a prevalence of young marriages, and people don't date enough or test their partnerships enough, which eventually leads to life dissatisfaction. As a dating company, how do you use AI positively to improve the way people interact with each other? We discussed helping people create better dating bios using AI, because people have a natural resistance to expressing themselves well, whether because of language barriers, personality barriers, or various socio-cultural taboos and myths around expressing oneself. Can AI be technically helpful in improving their self-expression? This is a great example of seeing how AI can positively impact the culture of a country as well.

Concluding advice

"Stay curious. Drive, not be driven" — Ramya Chandrasekaran, QI Group

The discussion concluded with each participant offering their closing piece of advice when it comes to helping comms execs gear up for the AI era. 

RC: I think it's important to continue experimenting, but it's also important to maintain that human oversight, because I don't think the capabilities of AI are quite there yet. It's not yet sophisticated enough to translate the human intelligence and emotions that are required to bring in that level of authenticity. So while you continue to experiment, you also need to make sure that you don't over-rely on AI.

PT: I would just say, stay curious. As comms professionals, we need to be on top of what's around us, what we can use to our advantage. Drive, not be driven.

LMC: I think revisiting your purpose and the culture and values that underpin your organization, and having that be at the center of how you pivot on this, is key. You may have a culture of experimentation, a culture of curiosity, a culture of urgency, whatever it is. I think re-grounding on it every so often, and probably at a faster cycle than you ever have, is really important, because you might lose people. You might be losing people intentionally, but you might be losing people emotionally as well. And I think that's a risk.

SG: You can positively use AI to do almost everything (well), but stay creatively flawed to make it even better. That could be one differentiating factor that, as a marketer, you should retain.

DK: I would say, be fearless and fearful at the same time. Fearless in terms of experimentation, looking at the potential for transforming the way that we work, live, learn, and play. But fearful, in being very deliberate in thinking about the jobs side, and how it affects human interaction, geopolitics, stability of the world, all of that stuff.

I pivoted from PR into digital marketing around 2012. In those days, digital marketing was all about social media. I found myself on this social media train, just riding the popularity of Facebook, and that was my business. And then we are where we are today with social media. At that time we were all on this enthusiasm train. As we all now know, [social media] ruined the world a little bit. I feel like we should take those lessons and not let that repeat. Just being very aware of the dangers and regulating on a global basis, I think that's really important.

SN: I would have said exactly what Pia said. First and foremost, for a comms or marketing person, use these tools shamelessly. Unless you use them, you will not know the limits of where their value ends and your value begins. You should learn by design because that limit will keep getting pushed. You will have to keep evolving as a professional.
And what I would hope for all of us is to bring others along. If you have the privilege or the advantage of knowing, bring more people along so that collectively we can benefit.