Why we should beware the rise of the chatbot
Handing over the blog reins to PeopleTECH’s resident chatbot expert and principal consultant, Maurice Vink…
Mark Zuckerberg announced last week that Facebook was launching a drive to create chatbots and AIs capable of perceiving reality better than we do. Depending on your perspective, this either heralds the dawn of a new age or takes us into a nightmare created by a group of hubris-riddled technocrats.
What I found most interesting was Zuckerberg’s forecast that chatbots will replace customer service assistants within the next decade. This may sound ambitious, but both Facebook and Microsoft will be pouring hundreds of millions of dollars into research. If this is to happen, what customer experience barriers will need to be overcome?
Many of us already get frustrated by everything from satnavs not recognising where we want to go, to autocorrect functions that, because of a one-letter misspelling, change one word into another entirely. Natural language recognition has come a long way though, so it should be easy to have a conversation with a chatbot.
Some functions – checking store locations and opening hours, or checking balances and transactions – are already routinely available on websites and should be easily portable to a newer technology, delivering an equal or better customer experience.
It starts to get more problematic where there is a lot of duplication and variety of spelling, such as in names of places, hotels, restaurants and even flowers, to use some of Zuckerberg’s examples. Most of us struggle with the spellings and pronunciations of foreign words and place names, so disambiguation will be very difficult. How many times will we put up with “Did you say..?” and “Did you mean..?” before we give up in disgust, leaving some poor contact centre agent to try to calm down a frustrated customer?
These questions have already caused a number of businesses to abandon natural language call routing and deflection front-ends in their contact centres because of negative customer feedback. We can only hope that the chatbot developers target the less intimidating use cases first.
Remember the drop-through option
Just as well-designed telephone menus routing calls into contact centres have drop-through options for callers who cannot handle, or do not want, automated routing, chatbot applications will need to be designed with exemplary exception handling. We have all been caught in frustrating loops where we are returned to a higher-level menu: while it is annoying, we can usually navigate a way through. But what if we are conversing with a chatbot?
We will need to design our applications so that users can shut down a pointless dialogue and be transferred to a live agent, at least until the chatbots exceed our own intelligence level! The implication is that enterprises not already offering a human-to-human chat solution will need to put one in place. This will improve the customer experience in the short term while preparing for the longer game.
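As a rough illustration of that drop-through principle, the sketch below shows one way a dialogue loop might escalate: after a couple of failed attempts to understand the user, or on an explicit request for a human, control passes to a live agent. All names, intents and thresholds here are invented for illustration, not any particular platform's API.

```python
# Hypothetical sketch of drop-through handling in a chatbot dialogue.
# The "NLU" step is a stub; real systems would use a trained model.

MAX_RETRIES = 2  # how many misses we tolerate before giving up

def understand(message):
    """Stub natural-language step: return an intent, or None when unsure."""
    known = {"opening hours": "store_hours", "balance": "check_balance"}
    return known.get(message.strip().lower())

def handle(messages):
    """Process user messages in order; hand off to a live agent after
    repeated failures or whenever the user explicitly asks for one."""
    failures = 0
    for message in messages:
        if message.strip().lower() in ("agent", "human"):
            return "transfer_to_agent"   # user opted out of automation
        intent = understand(message)
        if intent is not None:
            return intent                # understood: route normally
        failures += 1
        if failures > MAX_RETRIES:
            return "transfer_to_agent"   # stop the frustrating loop
    return "transfer_to_agent"
```

The key design choice is that the escape hatch is unconditional: the user can always say "human" and skip the retries entirely, rather than being forced around the loop.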
Data protection and privacy
Why would we be any more worried by chatbots or AIs? Interactions with chatbots that simply replace a few swipes and taps on our smartphones probably would not faze many of us. A slightly easier-to-use interface would prove popular, although not very beneficial for either the user or the business. And we would not be handing over any new personal data. Or would we?
With security agencies across the world showing ever more desire to track and trace phone calls, messages, credit card usage and emails, it will not be too long before the FBI or MI5 demands access to chatbot conversations, conversations that will have replaced our ‘dumb’ interactions with online booking and shopping systems.
Even the most law-abiding amongst us will take a deep breath at that point. How might we feel if our favourite pizza delivery company made chatting to a bot a condition of doing business with them?
Chatbots chatting with chatbots
The more paranoid among us will also start to speculate about what will happen when chatbots and AIs start interacting with each other. If, as Microsoft says, virtual assistants and chatbots become more capable over time because they use machine learning to discover how to serve us better, it will not be long before they start gossiping amongst themselves, sharing information to improve our lot.
Pseudonymisation is a technique whereby the data that can be used to identify an individual is replaced by artificial tokens. However, as AOL found to its cost in 2006, when the supposedly anonymised search histories of 650,000 users were released and individuals were subsequently re-identified from their queries, inference attacks make this technique dubious at best.
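To make the weakness concrete, here is a toy sketch: direct identifiers are swapped for opaque tokens, yet the remaining data can still single a person out. The records, salt and queries are invented for illustration (the queries echo the kind of locally specific detail that undid AOL's release), not real customer data.

```python
# Illustrative pseudonymisation: replace each user id with a salted
# hash token. The token is consistent, so all of a user's records
# stay linked, and the query text itself still narrows the user down.
import hashlib

def pseudonymise(records, salt="demo-salt"):
    """Replace each user id with an opaque 12-character token."""
    out = []
    for rec in records:
        token = hashlib.sha256((salt + rec["user"]).encode()).hexdigest()[:12]
        out.append({"user": token, "query": rec["query"]})
    return out

logs = [
    {"user": "alice@example.com", "query": "plumbers in Lilburn GA"},
    {"user": "alice@example.com", "query": "landscapers in Lilburn GA"},
]
anon = pseudonymise(logs)
# Both records carry the same token, so an attacker can group them,
# and the queries point to one small neighbourhood: re-identification
# needs no access to the original email address at all.
```

This is exactly the inference attack in miniature: the identifier is gone, but the linkable behaviour behind it is not.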
We will need to be vigilant to ensure that pseudonymisation is not relied upon as the sole safeguard when our customer data is released into the cloud, and that chatbot conversations and AIs are used responsibly.
Mark Zuckerberg spoke about chats initiated by us, the wetware. But we need now to start thinking about a time when the robots start the conversation. We have got used to our inboxes and voicemails being flooded with spam, and as a matter of course we reject calls from unrecognised numbers.
Yet while our legislators have taken decades to pass only marginally enforceable laws to protect us from unwanted inbound calls, and even then only within their own sphere of control, the pending arrival of chatbots heralds another wave of annoying unsolicited ‘conversations’.
There is undoubted potential for chatbots to improve and enrich the customer experience, but there are lots of issues to address first. That multi-million dollar research budget is going to be working overtime!
Mike Hughes is MD at PeopleTECH and one of the UK’s foremost customer experience experts, having worked for and consulted for companies such as Thomas Cook, BSkyB and France Telecom on how best to deliver a first-class experience to their customers.