How do you prepare your contact centre for the voice recognition revolution?
The likes of Siri, Alexa and OK Google are profoundly changing our expectations of voice and speech recognition technology in the contact centre. What is your business doing to ensure it keeps up with the pace?
Automating voice has been an ambition of parts of the human race since sound recording was first cracked in the 1800s.
But it wasn’t until the 1970s that automated telephony became a reality, when Bell Laboratories unveiled a computer so sophisticated it could “not only verify your identity, but talk to you in plain English”.
The results of Bell’s work seem somewhat primitive in hindsight, but at the time they ushered in an era of Interactive Voice Response (IVR) technology that increasingly incorporated speech recognition into its offering.
Yet IVR has been much-maligned for its subsequent failings, its nadir being just a few years ago when one man made it his mission to rid an entire country of recorded phone menus.
Nigel Clarke’s distaste was channelled towards HMRC, the UK’s tax office, which he claims at one point had an IVR consisting of 74 menu options across 6 different levels that could take up to six minutes to navigate. His angst wasn’t uncommon. Up until very recently, ask any member of the general public what they thought about automated voice and speech technology and you might have been forgiven for running a swear jar to coincide with their responses.
So what’s changed?
It’s hard to reconcile public opinion when confronted with automated systems like HMRC’s, but change is in the air. Driven by voice assistants from tech behemoths like Amazon and Google, speech and voice recognition technology is experiencing a renaissance.
The impact on the contact centre and its relevant connection to IVR is not immediately obvious, but don’t be fooled; it is profound. Not only is our approach to customer service changing through the use of voice recognition devices in our homes, phones, speakers and cars, but our perception and expectation of automated speech technology is changing as a result.
This is making us more accepting of automated telephony in contact centres, but anything similar to Bell Lab’s ‘she sells seashells’ attempt just doesn’t cut it. Siri and Alexa are setting the agenda for how all voice and speech recognition technology should work for us.
“Speech has exploded over the last 5 years – looking back just a few years, speech was a difficult subject matter; speech recognition engines weren’t that good and it detracted from a good CX,” says Jon Meredith, sales director, IFS | mplsystems.
“Now those engines have improved and can deliver information 24/7. This is an opportunity businesses shouldn’t ignore.”
Working in the background
IVR is currently just one component of an increasingly sophisticated suite of voice and speech recognition technologies available to contact centres.
There’s the customer-facing technology – the voice assistants, IVR – and then there are elements working in the background: speech analytics, voice processing, pattern recognition, voice biometrics and emotion detection are all part of an ecosystem aimed at making the experience of interacting with a brand much more efficient and effective for consumers.
Then there’s the rise of augmented support – speech recognition ‘robots’ that work alongside contact centre agents, monitoring interactions with customers and surfacing useful information in the heat of discussion.
All of these technologies require the processing of vast quantities of data, but as Meredith explains, volume isn’t the issue for contact centres.
“Most organisations have a lot of information they can use – 100,000s of emails sent to you as a business, live chat sessions every day, social media interactions, and of course the voice through calls. There’s too much in many cases.
“It’s all data for analysis that you need to work through to establish which transactions are high volume but simple to execute as part of a possible automated speech recognition programme.”
One obvious example is the customer whose house an engineer – from, say, a utility or telecoms provider – is due to visit: they have confirmation that the appointment will happen at a set time, and want to double-check it will go ahead.
This occurrence, says Meredith, is a common one that continues to lead to phone calls into contact centres, but will increasingly become one that starts with a voice assistant query via Siri et al. That should in turn lead to a voice application connecting the web, the contact centre and the business in question’s historic data, so that the query can be simply automated, and resolved, through a speech engine.
“Speech is actually already pretty good at these single question/answer exchanges. The step beyond that is also a simple process – actually changing appointments, that sort of thing – and speech tech is getting better and better at resolving that type of process too.
“Any business keen to start investing in speech recognition technology should concentrate on these simple transactions that can be automated quickly, and leave the complex interactions to humans. As your contact centre develops automation in these areas you’ll start to get a map of all your processes, the outcome of interactions placed in categories, the reason for a call – that can be matched against the complexity of those contact types.”
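The mapping exercise Meredith describes can be sketched in a few lines: tally contact reasons, attach a complexity rating, and surface the high-volume, low-complexity transactions as the first automation candidates. The interaction log and ratings below are purely illustrative.

```python
from collections import Counter

# Example interaction log: (contact reason, complexity rating 1-5).
# Both the reasons and the ratings are hypothetical.
interactions = [
    ("confirm_appointment", 1), ("confirm_appointment", 1),
    ("confirm_appointment", 1), ("billing_dispute", 4),
    ("change_appointment", 2), ("technical_fault", 5),
]

# Count how often each reason occurs, and record its complexity.
volume = Counter(reason for reason, _ in interactions)
complexity = {reason: rating for reason, rating in interactions}

# Rank contact types: highest volume first, then lowest complexity -
# the top of this list is what to automate first.
candidates = sorted(volume, key=lambda r: (-volume[r], complexity[r]))
print(candidates[0])
```

With real data the ratings would come from outcome analysis rather than a hand-written table, but the ranking principle is the same.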
Understanding natural language
At the heart of all of the recent developments is natural language understanding (NLU), a branch of artificial intelligence that underpins the pioneering work going on with voice recognition technology.
NLU uses algorithms to “reduce human speech into a structured ontology”, pulling out the relevant intent, timing, sentiment and geographical reference from speech to create more measured responses either by text or, more often, voice. It’s this technology that allowed Google Duplex, an AI, to have a conversation with a real-life hair salon and actually book an appointment, in May 2018.
Google Duplex has tapped into an almost unquantifiable scale of data, but it is something that can be replicated at a smaller scale in the contact centre.
“The key is to work with technologies that can put AI and NLU into ‘listen’ mode in your omnichannel queue – that way you can be training it and monitoring how good it is at solving problems, rerouting problems without it being in a live scenario,” says Meredith.
“The way systems often work is they’ll come up with a ‘confidence level’; an email will come into a business and an AI will look at it, provide a confidence ranking for whether it can deal with the query, and allow you to decide manually whether it deals with the problem or a human takes over.
“That confidence level is built up on pattern matching engines looking at previous information and interaction history to establish what it is able to cope with. It increasingly works for voice too. Therefore, as a business, you need to be jumping on this movement now, in order to be able to deal with customer expectations in 2 to 3 years’ time – maybe less!”
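The confidence-level triage Meredith describes can be sketched as follows: an engine scores each inbound query, and anything below a threshold is handed to a human. The scorer here is a deliberately crude stand-in for a real pattern-matching model, and the phrase list and threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Query:
    channel: str  # e.g. "email", "voice", "chat"
    text: str

def score_confidence(query: Query) -> float:
    """Stand-in scorer: known simple intents score high, the rest low.
    A real engine would derive this from interaction history."""
    simple_phrases = ("opening hours", "appointment time", "balance")
    return 0.9 if any(p in query.text.lower() for p in simple_phrases) else 0.3

def route(query: Query, threshold: float = 0.75) -> str:
    """Automate high-confidence queries; hand the rest to an agent."""
    return "automated" if score_confidence(query) >= threshold else "human"
```

Running the router in ‘listen’ mode, as suggested above, would mean logging these decisions against what agents actually did, rather than acting on them live.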
The trouble with data
Of course, connecting up the relevant data in the contact centre is a perennial problem, as many leaders in the field attest.
“In every business I’ve worked in, data is spread across multiple systems, or it’s on a piece of notepaper or being stored in someone’s head,” adds Meredith. “Knowledge capital is spread all over the place – even if you have a powerful CRM and a strategic CRM programme, you can guarantee it won’t contain everything.
“Contact centres are fast-paced, they move on from things quickly. You can’t wait a year for a new CRM programme, you have to resolve things in the moment, and that sometimes means good data gets lost.
“What is needed is an approach that quickly allows remote data sources and local data to be stored and combined in real-time – we call it ‘CRM fusion’. This can be presented to the customer advisor or consumed by the NLU/AI system in a seamless and consistent way. It is important that a human advisor can be brought into the automated processes at any time, if needed, without disrupting the customer journey.”
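A minimal sketch of the ‘CRM fusion’ idea: merge a remote CRM record with local, in-the-moment data into one view that either an advisor or an NLU system can consume. The field names and the rule that fresher local data wins are illustrative assumptions, not a description of any specific product.

```python
def fuse(remote_crm: dict, local_notes: dict) -> dict:
    """Combine a stored CRM record with live, in-call data.
    On conflicting keys, the fresher local data overrides the CRM copy."""
    fused = dict(remote_crm)
    fused.update(local_notes)
    return fused

# Illustrative records: the CRM holds the booked slot, while the live
# interaction has just rescheduled it and captured customer sentiment.
crm_record = {"customer_id": "C123", "appointment": "Tue 2pm"}
live_notes = {"appointment": "Wed 9am", "sentiment": "frustrated"}
view = fuse(crm_record, live_notes)
```

The fused `view` keeps the stable CRM fields but reflects the latest local state – the “seamless and consistent” single picture the quote describes.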
It’s an environment that’s rapidly changing, but one that won’t let up. Customer expectations continue to build – there are already over 1 billion voice-enabled devices switched on across the globe and our use levels rise by the day.
Contact centres form a key part of the voice recognition revolution, but failing to jump on the bandwagon now may mean succumbing, further down the line, to the scorn of miffed consumers such as Nigel Clarke once their expectations have reached their peak.
Chris is Editor of MyCustomer. He is a practiced editor, having worked as a copywriter for creative agency, Stranger Collective from 2009 to 2011 and subsequently as a journalist covering technology, marketing and customer service from 2011-2014 as editor of Business Cloud News. He joined MyCustomer in 2014.