Some 40 percent of large U.S. companies—those with more than 500 employees—will have incorporated chatbots or virtual assistants by the end of 2019, according to a Spiceworks survey. In 2018, around 59 percent of conversations with companies via online chat in the United States were held partly or completely with chatbots, according to a Forbes study. These studies, as well as other data, lead us to conclude that using intelligent agents to handle the growing number of conversations between brands and audiences has gone from a mere trend to an irrefutable reality.  

Immediate responses, 24 hours a day 

The mass use of smartphones has made instant messaging the default communication mechanism in our personal lives. It did not take long for this pattern of use to transfer to corporate and brand communications as well. Now more than ever, consumers want immediate, ongoing responses in their interactions. Multiple recent studies, including those by Salesforce, ClickZ and HubSpot, confirm this. And with good reason, as the main benefits consumers associate with chatbots are: 

  • 24-hour service (64 percent) 

  • Immediate response (55 percent) 

  • Answers to simple questions (55 percent)   

The pressure these expectations have put on companies has led to an environment that favors the use of company chatbots.  

Conversation experience: Room for improvement  

However, the experiences humans have with today’s chatbots are still far from ideal. According to a UJET survey, 58 percent of people stated their experiences interacting with chatbots were not as effective as they had hoped. Along the same lines, 47 percent of respondents in a Statista study said they had received unsatisfactory answers during their interactions with chatbots.  

Too much focus on technology, too little on communication experience  

To work properly, a chatbot system should go through a “training process” in which a human team provides it with information about the typical questions, answers and conversation flows it will be involved in. This process, though especially intensive during the initial learning stage, is present throughout the chatbot’s operational life. Chatbots progressively enrich their capacity to respond to more and more scenarios as they continue to interact with people.  
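
To make this training process tangible, the sketch below shows, in deliberately simplified Python, the kind of material such a human team curates: example utterances grouped by intent, each tied to a response. The intents, phrasings and matching logic are all invented for illustration; real platforms rely on statistical language models rather than simple word overlap.

```python
# Hypothetical sketch of the material a human team curates when "training"
# a chatbot: example user utterances grouped by intent, each tied to a
# response. All intents and phrasings are invented for illustration.
TRAINING_DATA = {
    "opening_hours": {
        "examples": [
            "What time do you open?",
            "Are you open on Sundays?",
            "When does the store close today?",
        ],
        "response": "We are open Monday to Saturday, 9:00 to 20:00.",
    },
    "order_status": {
        "examples": [
            "Where is my order?",
            "Has my package shipped yet?",
        ],
        "response": "Could you give me your order number so I can check?",
    },
}

def classify(utterance: str) -> str:
    """Toy intent classifier: picks the intent whose examples share the
    most words with the utterance. Real systems use statistical models."""
    words = set(utterance.lower().split())
    def overlap(intent: str) -> int:
        return max(len(words & set(ex.lower().split()))
                   for ex in TRAINING_DATA[intent]["examples"])
    return max(TRAINING_DATA, key=overlap)

print(TRAINING_DATA[classify("are you open on sundays?")]["response"])
```

Enriching a bot after launch largely means adding new intents and new example phrasings to collections like this one, based on the real conversations the bot fails to handle.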

As often happens in the early stages of adopting a new technology, chatbot and virtual assistant rollouts are currently most often (if not exclusively) treated as purely technical projects. This loses sight of the fact that they are actually communication tools with a direct impact on user experience. On many occasions, the task of training a new bot for conversation is assigned to engineers who specialize in computation rather than to linguists or communication professionals with greater expertise in conversational flow.  

Chatbots and their potential impact on reputation: The case of Cleo 

Not paying due attention to a chatbot’s training process can have consequences beyond what one might initially expect. The case of Cleo, a financial services chatbot operating on Facebook Messenger, is a good example. Designed to make it easier for people to manage and track their spending, Cleo was trained to use colloquial, informal language and to encourage users to communicate with her in the same way.  

During the week of Valentine’s Day 2019, Cleo’s creators introduced a special conversation mode designed to give a touch of “romance” to the state of users’ finances. However, some of the expressions chosen for this new mode were rather unfortunate and horrified many women. Holly Brockwell, an independent technology journalist, drew attention to them on Twitter, noting how some of the messages suggested sexual violence.    

The human team responsible for Cleo was quick to respond, assuring Brockwell that the more indelicate messages had been deleted and that the chatbot’s all-women writing and training team had sought to subvert the stereotype of a passive, gendered artificial intelligence rather than to insinuate violence. Although the damage to the brand’s reputation had already been done, Cleo’s creators took measures to rectify the situation, and Brockwell appeared satisfied with their response.  

Obviously, certain types of messages can be controversial: what some read as black humor, others will consider unforgivably offensive. The responses to Brockwell’s original tweet show precisely that, with different people interpreting the same comment very differently. When such messages come from an autonomously functioning artificial system, the impact on a brand’s image can be very serious indeed, and harder to detect. This reality demands an additional layer of precaution and sensitivity when developing a new chatbot.  

The importance of a chatbot’s personality 

In her now popular presentation “Four Cs of conversational interface CX,” Microsoft’s Senior Manager of Global Engagement Purna Virji argued that artificial conversation agents need clearly defined personalities. Personality is a fundamental feature of every voice, and intentionally designing this persona is one of the main ways brands can shape what users perceive and experience. A friendly chatbot, aligned with the brand’s style and tone, will be more coherent, memorable and interesting than a robotic, neutral one with no personality.  

Designing conversations for artificial intelligence 

It is important not to lose sight of the fact that, in conversations between humans and intelligent agents, a human brain comes into contact with an artificial one that has a radically different architecture and way of functioning. An artificial “brain” works in terms of entities, variables and rules, while a human one works in terms of purpose, empathy and motivation. The growing ability of machines to interpret the meaning and context behind messages expressed in natural human language is what has enabled this new communication interface. But treating a conversation as a mere exchange of natural-language messages, without considering its psychological dimension, is an oversimplification that inevitably produces conversations that fall short.  
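
As a concrete contrast, the hedged sketch below shows how a rule-based artificial “brain” might reduce a user’s message to an intent plus a handful of extracted entities. Everything the patterns do not capture, including the user’s mood or underlying motivation, simply does not exist for the system. The message, intent name and patterns are hypothetical.

```python
import re

# Hypothetical, rule-based sketch of how an artificial "brain" reduces a
# message to an intent plus entities. Anything outside these patterns
# (mood, motivation, implied context) is invisible to the system.
def parse(message: str) -> dict:
    entities = {}
    quantity = re.search(r"\b(\d+|one|two|three|four)\b", message, re.IGNORECASE)
    day = re.search(
        r"\b(monday|tuesday|wednesday|thursday|friday|saturday|sunday)\b",
        message, re.IGNORECASE)
    if quantity:
        entities["quantity"] = quantity.group(1)
    if day:
        entities["day"] = day.group(1).capitalize()
    intent = "buy_tickets" if "ticket" in message.lower() else "unknown"
    return {"intent": intent, "entities": entities}

print(parse("I'd like two tickets for Friday's show"))
# -> {'intent': 'buy_tickets', 'entities': {'quantity': 'two', 'day': 'Friday'}}
```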

Carefully designing the conversations a chatbot can hold, so it becomes a more natural, persuasive and useful agent, is one of the keys to a good conversational experience. Putting the focus on human nature instead of basic computational behavior is one of the main ways to achieve this. Most of today’s chatbot conversations lack that focus: they are often developed exclusively by engineers and programmers who, by the nature of their jobs, are predisposed to conversational models better suited to artificial “brains,” with psychological and social questions taking second place.  

The limits of technology  

While psychological factors are the ones most often ignored, it is also true that the language and intent recognition technology artificial conversation agents are built on is far from optimal. In fact, a large part of conversation model design is devoted to deciding what should happen when a chatbot fails to understand a question or the goals of the human in the conversation. Some of the main limitations of the current technology include:     

  • Difficulties in understanding context. This is one of the challenges that causes the most frustration among chatbot users. Chatbots can fairly comfortably manage the general contexts they were trained on, but they have major difficulties inferring or remembering the specific context of a conversation and identifying the relationship between successive questions. For example, if I ask a chatbot about the cost of tickets for a play and then ask about nearby restaurants, it is unlikely to infer that “nearby” refers to the theater’s location because I am putting together plans for an evening out (see the sketch after this list).  

  • Limitations in decision-making. A chatbot’s intelligence is not based on models of hypothetico-deductive reasoning. Unless a decision-making process has been explicitly codified, a chatbot will be unable to apply common sense to even simple decisions in unexpected scenarios.  

  • Problems dealing with unscripted conversations. Though this limitation can be mitigated by more exhaustive training, one of the weaknesses behind many complaints is chatbots’ inability to infer the purpose of a conversation when the goal is expressed in an unconventional way.  

  • Difficulties identifying emotions. Although this is one of the areas in which the greatest advances are being made, chatbots continue to show limitations in inferring the emotional tone of a conversation—something that might lead a human agent to modify the way they handle the situation.  
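
The context limitation in the first bullet above can be illustrated with a small sketch: unless a dialogue state is explicitly engineered to carry entities from one turn to the next, a follow-up like “any restaurants nearby?” cannot be resolved. The class, intents and entity names below are hypothetical, assumed only for illustration.

```python
# Hypothetical sketch of the context problem: unless dialogue state is
# explicitly engineered to carry entities across turns, "nearby" in a
# follow-up question cannot be resolved. All names are invented.
class DialogueState:
    def __init__(self):
        self.last_venue = None  # entity remembered across turns

    def handle(self, intent: str, entities: dict) -> str:
        if intent == "ticket_price":
            self.last_venue = entities.get("venue")
            return f"Tickets at {self.last_venue} start at 30 euros."
        if intent == "find_restaurants":
            # Resolve "nearby" from the carried-over venue, if any.
            place = entities.get("near") or self.last_venue
            if place is None:
                return "Near where?"  # what most bots end up asking
            return f"Here are some restaurants near {place}."
        return "Sorry, I didn't understand that."

state = DialogueState()
print(state.handle("ticket_price", {"venue": "the Royal Theatre"}))
print(state.handle("find_restaurants", {}))  # "nearby" resolved from context
```

In most deployed bots, this kind of carry-over has to be hand-built for each anticipated flow, which is precisely why unanticipated follow-up questions so often fail.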

Resistance to artificial agents 

Even in a hypothetical scenario where technology overcomes all these limitations and excellent conversational design gives the bot personality and empathy, we cannot lose sight of the fact that certain user profiles remain very reluctant to talk to machines. In a recent CGS survey, 70 percent of those interviewed said they were reluctant to communicate with a brand that did not have a human customer service agent available. In another survey by Drift, SurveyMonkey Audience, Salesforce and myclever, 43 percent of participants indicated that one of the main barriers to using chatbots was their preference for human assistance.  

Customer experience as the goal 

Few people appear to question the fact that chatbots, far from being a passing fad, are here to stay. Their capacity to resolve simple, common issues instantly, at any time of day, makes them ideal for a range of tasks. Their limitations in handling more complex situations, particularly those where understanding context and managing emotions are critical, make it essential to keep people in charge of certain conversations between brands and their audiences.  

Although chatbots cost much less to operate than human agents, viewing their use as a mere cost-optimization exercise is a mistake. If improving the user experience stops being the main goal, the core value of artificial conversation agents is lost.  

These new conversation technologies have a major impact on company and brand communication and reputation, so communication professionals should play a leading role in their development processes. Not using a chatbot because of the possible risks it entails may mean being left behind. This is a technology with clear potential for disrupting the all-important relationship between brands and their audiences.  

Daniel Fernández Trejo 

Chief Technology Officer at LLYC 

Miguel Lucas 

Data Business Leader at LLYC 

José Luis Rodríguez 

Business Transformation Leader at LLYC