Am I bot or not: Are brands wrong to design chatbots as virtual humans?

Science fiction has always been one of my favourite ways of exploring the future. It inspired me to get into technology in the first place. One of the most influential films in my life was ‘Blade Runner’ (scarily, released over 30 years ago).

It took good science and extrapolated it forward in a fascinating, thoughtful, and challenging way – as does its long-awaited sequel, ‘Blade Runner 2049’. It explores the old sci-fi trope of our creations superseding us, and what it really means to be human.

Creating artificial humans that are indistinguishable from real humans is a problem in the ‘Blade Runner’ universe.

Many of these replicants have been deployed in the so-called “dull, dirty, and dangerous” professions that real humans don’t want to, or can’t, do. But they can only simulate some of the qualities of real humans.

Emotion is a problem for them – something that the infamous ‘Voight-Kampff’ machine can detect through changes in respiration, heart rate, blush response and pupil dilation. It is the equivalent of a ‘Turing Test’ for emotion.

Flash back to today – and, despite the first ‘Blade Runner’ being set in 2019, we are still very far away from creating an artificial human. Even the Turing Test, proposed by Alan Turing in the 1950s, has yet to be convincingly passed. This is the challenge that he laid down to technologists: to create a machine that would be indistinguishable from a person in conversation.

There have been a few close calls, but nothing has convincingly succeeded. This is because human language is often anarchic, ambiguous, redundant, and chaotic. It was never meant to be read by a machine; it evolved to create relationships between humans.

Significant progress has been made in voice recognition and natural language processing technologies in the past few years. Word error rates are reportedly less than 5% now – which isn’t far off human performance. However, understanding the meaning and context of those words can be far more problematic.
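
For context, the figure usually quoted for speech recognition is the word error rate (WER): substitutions, deletions, and insertions divided by the number of words in the reference transcript. A minimal sketch of the standard calculation, with an invented example utterance:

```python
def word_error_rate(reference: list[str], hypothesis: list[str]) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via standard edit distance (Levenshtein) over words."""
    rows, cols = len(reference) + 1, len(hypothesis) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i          # deleting every reference word
    for j in range(cols):
        d[0][j] = j          # inserting every hypothesis word
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[-1][-1] / len(reference)

# Invented transcript: one misheard word in eleven is roughly 9% WER.
ref = "the train to london is delayed by twenty minutes this morning".split()
hyp = "the train to london is delayed by plenty minutes this morning".split()
print(word_error_rate(ref, hyp))  # 1 / 11 ≈ 0.09
```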

Chatbot challenge

The expectation that science fiction has created is that we can ask these machines anything. Most current bots only work well in a narrow and predictable domain, where they have stable and well-defined data to feed off. The gap between “you can talk to me about a few things” and “ask me anything” has been the issue with many of the more generic virtual assistants on the market – and it often ends in customer frustration. To design these things effectively, you first need to know what the customer might ask.

However, the customer also needs to know what they can ask.
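
To make that concrete, here is a minimal sketch of a narrow-domain bot (the intents and phrases are invented for illustration). It matches against a small, well-defined set of topics, and when it cannot match, it tells the customer exactly what it can talk about rather than pretending to understand:

```python
# Hypothetical narrow-domain bot: all intent names and phrases are illustrative.
INTENTS = {
    "delivery": ["where is my order", "delivery", "tracking"],
    "returns":  ["return", "refund", "send back"],
    "billing":  ["invoice", "charged", "payment"],
}

def reply(message: str) -> str:
    text = message.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return f"Let me help with your {intent} question."
    # Out of domain: say what the bot *can* do instead of guessing.
    topics = ", ".join(INTENTS)
    return f"I can help with {topics}. Which of those is closest?"

print(reply("I was charged twice"))  # matched: billing
print(reply("tell me a joke"))       # out of domain: falls back to the menu
```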

This is the challenge for chatbots, which are increasingly being deployed for customer service. Bots can act as “IVR for digital” and are a particularly useful front end to chat – a channel that customers increasingly prefer. They are the online equivalent of the “press 1” steering system in voice contact: gathering information about the customer, triaging them, directing them to solutions, and, if all that fails, precision-routing them (based on that information) to the human with the right skills to solve the issue. Many bots constrain the conversation to their narrow field of expertise, often by offering multiple-choice options.
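
As a sketch of that “press 1 for digital” flow – all queue names, canned answers, and helpers here are assumptions, not a real routing API – the bot offers a constrained menu, tries self-service first, and only escalates with the gathered context attached, so the human advisor does not start from scratch:

```python
from dataclasses import dataclass, field

# Hypothetical queues and answers: names are illustrative only.
SELF_SERVICE = {"delivery": "Your tracking link is in the confirmation email."}
SKILL_QUEUES = {"billing": "payments-team", "returns": "returns-team"}
MENU = ["delivery", "billing", "returns"]  # multiple choice keeps the bot on-domain

@dataclass
class Session:
    """Everything the bot learns before a human ever sees the chat."""
    customer_id: str
    topic: str = ""
    transcript: list = field(default_factory=list)

def handle(session: Session, choice: str) -> str:
    session.topic = choice
    session.transcript.append(f"customer chose: {choice}")
    if choice in SELF_SERVICE:                   # solve it outright if we can
        return SELF_SERVICE[choice]
    queue = SKILL_QUEUES.get(choice, "general")  # precision routing, with context
    return f"Transferring you to {queue}; your details go with you."

session = Session(customer_id="C-123")
print("Please choose one of:", ", ".join(MENU))
print(handle(session, "billing"))  # routed to payments-team with the transcript
```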

Where humans have the advantage over the algorithms that power these bots is that they are far better at understanding context, emotion, and sarcasm. For example, this tweet was put through a sentiment analysis engine and came out as positive: “thanks to [a UK train operating company who will remain anonymous] for my free sauna this morning”.

The problem is that the machine has never travelled on a train and would need to be taught that a sauna is not a pleasant thing to find on one. Through our knowledge of both trains and saunas, we know that it is a bad thing.
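
A toy example shows why. Real sentiment engines are more sophisticated than this word-counting sketch, but lexicon-based scoring shares the same blind spot: every word in the tweet is neutral or positive, so the sarcasm is invisible.

```python
# Toy lexicon-based sentiment: scores words, not meaning. Lexicons are invented.
POSITIVE = {"thanks", "free", "great", "love"}
NEGATIVE = {"delayed", "broken", "awful", "hate"}

def naive_sentiment(text: str) -> int:
    words = text.lower().split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

tweet = "thanks to the train company for my free sauna this morning"
print(naive_sentiment(tweet))  # +2: 'thanks' and 'free' both score positive,
                               # because nothing encodes that a 'sauna' on a
                               # train means an unpleasantly hot carriage.
```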

There are possibilities to help the machine recognise emotional context. Just like Blade Runner’s Voight-Kampff machine, it is perfectly feasible to take biometric data, such as tone of voice or facial expression, to give the machine more of a clue as to how we are feeling. Knowing that, and then doing something appropriate with it, can also be problematic for the machine. Should a particularly angry customer be directed to your most expert advisor, or be cut off for using abusive language?
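
And even with a reliable anger signal, someone still has to encode the policy for acting on it. A minimal sketch of that decision point, with invented thresholds and queue names:

```python
def route_angry_customer(anger_score: float, abusive: bool) -> str:
    """Policy, not detection, is the hard part. Thresholds are invented."""
    if abusive:
        return "warn-then-end"         # one business might cut the contact off...
    if anger_score > 0.8:
        return "senior-advisor-queue"  # ...another routes anger to its best people
    if anger_score > 0.5:
        return "priority-queue"
    return "standard-queue"

print(route_angry_customer(anger_score=0.9, abusive=False))  # senior-advisor-queue
print(route_angry_customer(anger_score=0.9, abusive=True))   # warn-then-end
```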

Bots are also more likely to be the ‘victims’ of abuse – as customers take out their frustrations on something that they know is not human. It probably needs to be made clear to customers, when they are eventually transferred to a real human being, that they are talking to one.

Given the inherent issues with creating human-like machines, one big design question is this: would it be better to design machines that act like machines, rather than attempt to make them “more human than human” (to quote ‘Blade Runner’ again)?

Why attempt to create a virtual human, with a line in witty repartee and a name (is it not strange that most of the current digital assistants are female?), when it is blatantly not human? Why not harness the power of the machine to help customers solve their problems, but ensure that a well-informed and empowered human can take over when it fails?

That is the power of “augmented”, rather than “artificial”, intelligence. Then the bigger issue may be human customer service advisors that behave more like robots than the robots do!
