When and why do chatbots make a mess of customer service?

Neil Davey, Managing editor, MyCustomer.com

Organisations installing chatbots to shore up their service proposition might anticipate a few teething problems – as is the case with many such projects. But they wouldn’t expect the chatbots to become foul-mouthed Hitler-loving sex robots – within the first 24 hours. 

Yet this is what happened when Microsoft launched its artificial intelligence chatbot Tay in 2016. Programmed to learn and imitate the speech patterns of Gen Y-ers, it was quickly manipulated into hurling abuse at other Twitter users, spewing Nazi-sympathising statements and making racist remarks before being taken offline less than a day after its launch.

And it isn’t the only high-profile example of chatbots gone wrong. Google’s Allo, for instance, was derided in the national media for responding to a user request for showtimes for the movie Snowden in the UK with a long list of movies playing at a cinema in the US called UA Snowden Square Stadium. Meanwhile, performance issues are forcing some of the major players behind the chatbot revolution to reappraise their expectations.

Most notably, Facebook is said to be scaling back its ambitions and refocusing its application of AI amid high failure rates of its chatbots, with The Information reporting that they could only fulfil around a third of requests without human assistance. In light of these disappointing results, the report suggests that Facebook is now training Messenger bots to handle “a narrower set of cases, so users aren’t disappointed by the limitations of automation.”

But while Facebook is resetting its expectations to be more in line with the current limitations of the technology, Forrester has warned that many deployments will continue to be over-ambitious. The analyst has predicted that as adoption rises to unprecedented levels in 2017, we can expect a surge in the number of failed chatbot projects, with many misguided deployments actually making service worse.

The reputational damage that these failures could cause chatbots is a big concern, with the public already somewhat uncertain about the platforms. A recent survey by Nuance revealed that fewer than one in three (30%) consumers today are confident in a chatbot’s ability to successfully help them when it comes to customer service. 

Shortcomings and limitations

Having already explored the potential that chatbots have as part of the service mix (see previous article), it would be a shame for the platforms to be subjected to a public backlash due to short-sighted implementations and a misunderstanding of both the strengths and weaknesses of the technology. Indeed, to fully appreciate chatbots’ place in the service ecosystem, it is important to understand their limitations. And there are a few.

For instance, one of the problems often reported is how specific the language needs to be in order to elicit a response.

Yaniv Reznik, CPO & SVP of customer success at Nanorep, explains: “Many of today’s chatbots operate on a keyword-driven system, meaning customers must phrase their questions in a specific way, or else the bot will come back with no answer, or even worse, an irrelevant response.


“Forrester research shows that 73% of consumers say that the most important thing a company can do to provide a good experience is to value their time. These keyword-based systems do the exact opposite, forcing customers to repeat questions and manoeuvre their wording to try to get the most relevant response. After a couple of tries, these customers have already given up and moved on to your competitor.”
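To make that failure mode concrete, here is a minimal sketch (in Python, with made-up FAQ entries and wording) of the kind of verbatim keyword matching Reznik describes. The same intent, phrased differently, falls straight through to a dead end:

```python
# A minimal sketch of keyword-driven matching. The FAQ entries and
# keyword phrases below are illustrative, not from any real product.

FAQ_KEYWORDS = {
    "reset password": "To reset your password, go to Settings > Security.",
    "delivery status": "You can track your delivery under My Orders.",
}

def keyword_bot(message: str) -> str:
    """Return the first FAQ answer whose keyword phrase appears verbatim."""
    text = message.lower()
    for phrase, answer in FAQ_KEYWORDS.items():
        if phrase in text:
            return answer
    return "Sorry, I didn't understand that."  # dead end for the customer

print(keyword_bot("How do I reset password?"))     # keyword present -> helpful answer
print(keyword_bot("I can't get into my account"))  # same intent, no keyword -> dead end
```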

Jerry Daykin, global digital partner at Carat, adds: “For now, chatbots are essentially decision trees able to pull out common information or frequently asked questions. If a user deviates much from that standard script, the chatbot is unlikely to be able to help and, in many cases, will need to divert them back to a traditional human contact.

“They certainly lack empathy or lateral thinking abilities at present, so may be frustrating to a user making a delicate or complicated service request, especially if they don't realise it's an automated response. Like an automated call centre, they can be hugely frustrating if the option you want to talk about doesn't seem to be present, or they can't understand your inputs.”
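Daykin’s decision-tree point can be illustrated with a small, hypothetical example. The menu, branches and handoff message below are illustrative only, but they show why anything off-script tends to end in a diversion to a human advisor:

```python
# An illustrative decision-tree bot: a fixed menu of scripted branches,
# with anything off-script diverted to a person. Entirely hypothetical.

DECISION_TREE = {
    "start": {
        "prompt": "Do you want to (1) track an order or (2) ask about returns?",
        "options": {"1": "track", "2": "returns"},
    },
    "track": {"prompt": "Please enter your order number.", "options": {}},
    "returns": {"prompt": "Items can be returned within 30 days.", "options": {}},
}

def handle(node: str, user_input: str) -> str:
    """Follow the scripted branch, or escalate when the user goes off-script."""
    options = DECISION_TREE[node]["options"]
    if user_input in options:
        return DECISION_TREE[options[user_input]]["prompt"]
    # Off-script input: the bot has no lateral thinking to fall back on,
    # so the safest move is to hand over to a human advisor.
    return "Let me put you through to a human advisor."

print(handle("start", "2"))                                         # scripted path works
print(handle("start", "my parcel arrived damaged and I'm upset"))   # off-script -> handoff
```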

Bad chat

As Daykin notes, because the technology is not sophisticated enough to convey empathy, chatbots can miss important emotional nuances that would be picked up by human agents.

“Contact centres are full of highly-trained, knowledgeable people with problem-solving and negotiation skills that are very difficult to replace with chatbots,” says David Rowlands, contact centre director UK and EMEA at 8x8. “The technology is not sophisticated enough to have ‘truly human’ interactions conveying subtle nuances such as humour, wit and warmth. These ‘softer’ skills, like showing empathy and charm, are vital for an effective contact centre, but are virtually impossible for robots.

“Building a relationship and creating a dialogue is key to getting to the bottom of a problem and making sure it’s dealt with properly. However, you’ll have to look to science fiction to find an instance when a computer has built a relationship with a human.” 


Dr Nicola Millard, head of customer insight and futures in BT's Global Innovation team, believes that one of the main weaknesses of chatbots is… well, their chat.

“In the contact centre space we have a strange relationship with conversation,” she notes. “We often regard it as something that needs to be ‘optimised’. Needless pleasantries at the beginning of a call into a contact centre cost money: just having a ‘how are you’ conversation can add an extra 10 seconds to a call. Scale that up to a million calls and you get 10 million seconds, which equates to a LOT of cost.

“Automating conversations, and taking out all of that expensive redundancy, is an attractive prospect. Virtual assistants and chatbots are at the forefront of this discussion at the moment. But how good are these chatbots at chatting?

“Although an eight-year-old can happily engage in conversation, a machine learning algorithm can struggle. Conversation is a surprisingly complex activity to automate. A recent study from McKinsey estimates that it has a low potential for automation, at just 20 per cent. This is because conversation typically needs not just the ability to process language and context, but also social and emotional cues. We know that customer experience isn’t solely about functional exchanges of data – especially if the customer is in crisis mode.”
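To put Millard’s earlier “10 million seconds” in context, here is a rough back-of-the-envelope sum. The hourly agent cost used is purely an illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope sum behind the "10 million seconds" figure.
# The hourly agent cost is an illustrative assumption only.

extra_seconds_per_call = 10
calls = 1_000_000

extra_seconds = extra_seconds_per_call * calls   # 10,000,000 seconds
extra_hours = extra_seconds / 3600               # roughly 2,778 agent-hours
assumed_cost_per_hour = 25                       # hypothetical fully loaded £/hour

print(f"{extra_hours:,.0f} hours, roughly £{extra_hours * assumed_cost_per_hour:,.0f}")
```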

Narrow purpose

What becomes clear from these shortcomings is that chatbots are not yet ready to take on all service enquiries.

“Organisations with a higher than average mix of emotional or complex enquiries (for example, local housing authorities, chronic illness or emotional health charities) might consider retaining more advisors to accommodate their customers’ specific needs and conduct sensitive conversations,” suggests Colin Hay, VP sales at Intelecom UK.

“The same applies to servicing the communications preferences of certain customer demographics, particularly the less technology-confident older generation. From a business perspective, it doesn’t make sense to dismiss the spending power of the silver pound, let alone alienate a huge section of many companies’ loyal customer base. For these sections of the market, there will always be room for the human touch.”


Forrester, meanwhile, believes that the chatbots that succeed in the coming year will be those that stick to a narrow purpose, such as asking questions instead of presenting a form. Those that attempt to tackle too broad a domain (and it believes most deployments will fall into this category) will struggle and, unlike Google’s Allo, will simply not have enough users to learn rapidly from their mistakes.

Customers themselves feel more comfortable with a more limited remit for chatbots, and Nuance research suggests that consumer confidence is low when it comes to using chatbots for anything beyond simple, general-purpose activities.

It indicates that the majority of consumers (71%) who are using chatbots today are primarily searching for news or information, playing music, or playing games, with few consumers confident in a chatbot’s ability to assist with more complex and domain-specific activities such as handling their utility (35%), banking (29%) or insurance (16%) needs.

Two-pronged

What emerges here is the need for a two-pronged approach. First, deployments need to have a specific, narrow purpose. Second, organisations need to set appropriate expectations of the chatbot experience for customers, as well as offering the opportunity to speak with a human agent as an alternative.

Matt Weil, head of product at VoiceSage, says: “When it comes to defining the most effective contribution of artificial intelligence (AI) in the contact centre, a lot of us are not getting it right. The market seems to have decided that chatbots – AI-powered, programmatic ways of interacting with customers that fully automate the process – are a mature proposition. Soon, we won’t need customer service representatives; we’ll all just talk to robots.


“The problem is the supposition that chatbots are all you need. The public likes problems being solved quickly if it saves them time, and a slick interface that gets simple issues like updating your address completed without having to speak to a human after multiple levels of ID checks. But note the key phrase: slick interface.

“Early usage of chatbots has fallen flat because customers are very good at spotting fakery. If a bot starts to try and chat about that great football game last night when you just want to check on a delivery, the experience comes off as clunky.

“We need to think about how to introduce chatbots in the best way to ensure their success. That’s as an addition and a support for the contact centre, we would suggest – not as a replacement (or at least not for a significant period – and maybe not ever). Use chatbots to automate simple, repetitive tasks or as a back stop. Don’t ever force the user down a path they don’t feel comfortable with, and always look to maximise the customer experience!”
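Weil’s “addition, not replacement” advice translates into a fairly simple routing rule: let the bot handle only simple, high-confidence requests, and send everything else (including anyone who asks for a person) to the contact centre. The intents and confidence threshold below are illustrative assumptions, not a prescribed implementation:

```python
# A sketch of the bot-as-backstop pattern: the bot only answers when the
# task is simple and it is confident, and the customer can always opt out
# to a person. Intents and threshold are illustrative assumptions.

SIMPLE_INTENTS = {"update_address", "check_delivery", "opening_hours"}
CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off; tune against real transcripts

def route(intent: str, confidence: float, wants_human: bool) -> str:
    if wants_human:
        return "agent"   # never force the user down the automated path
    if intent in SIMPLE_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
        return "bot"     # simple, repetitive task the bot can own
    return "agent"       # backstop: anything else goes to the contact centre

print(route("update_address", 0.93, wants_human=False))  # -> bot
print(route("complaint", 0.95, wants_human=False))        # -> agent
print(route("update_address", 0.93, wants_human=True))    # -> agent
```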

Nonetheless, Tom Ollerton, innovation director at We are Social, believes that some brands will feel that the best option, for the time being, is to steer clear of chatbots, and give the technology (and the public’s attitude towards them) time to mature and develop.  

“The main weakness of chatbots is that there isn’t much user data to build best practice on,” he explains. “Most people building them are doing so for the first time and are learning as they go. The first mover advantage has gone now; sensible brands will watch the space mature and then move in and build on the foundations of other brands’ learnings.”
