Service and security lessons from HSBC's voice biometrics breach

Neil Davey
Managing editor
MyCustomer.com

Are voice biometrics fundamentally flawed? That’s the question being asked after it was revealed last week that HSBC’s voice ID authentication service had been hacked by a BBC reporter and his twin.

Based on the premise that every person’s voice is unique, voice biometrics systems are designed to prevent bank fraud and improve the customer experience, removing the need for a lengthy process of identification, such as through a set of stock questions.

However, the BBC investigation claims there are vulnerabilities with the technology, after an account holder’s non-identical twin was able to access his brother’s account via the telephone after mimicking his voice.

So are voice biometrics systems less secure than suggested?

The consensus amongst many commentators is that in this case, it was the deployment of the technology that was at fault, rather than the technology itself. In particular, it was noted that the fraudulent twin had already attempted to access the account seven times before the successful attempt, and each time he had been declined.

So what can we learn from this investigation in terms of the deployment of voice biometrics?

  • No security system is perfect and should therefore always be used as part of a multi-layer strategy. Commenting in a blog post, Karl Roberts, head of propositions at GCI, notes: “The key lesson is that a single, or dual means of authenticating a user is never enough. Firms should always use a multi-layered fraud approach. This includes meta & network voice fingerprinting, behavioural characteristics, multi-factor authentication as well as biometrics. This ensures that no single factor is relied upon.”
  • Additional layers should be invisible to the user. If security questions are added and biometrics authentication applied to them, then there is no time saving or effort reduction for the bank or customers. Roberts gives some examples of ‘invisible’ layers: “So, for example, the bank should know the number you usually call from. It should also have an idea as to what time of day you usually call. Looking beyond just Voice ID, it is possible to track devices that the user commonly uses. So, if users log-in from a different machine, or a different country, there will be a further level of authentication that needs to take place.”
  • There should always be a flag when access to an account has failed several times, and a process in place for intervention after a set number of attempts. Roberts adds: “In this case the suspicious activity should have been flagged to a live agent (a person) before the eighth successful attempt.”
  • Twins, triplets and so on, sometimes referred to as ‘multiples’, pose a particular threat for biometrics, and customers should be encouraged to disclose that they are part of a set of multiples so that they can be separately flagged for additional processing, notes Ravin Sanjith, program director for intelligent authentication at Opus Research.
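The lessons above can be illustrated in a minimal sketch. The class, thresholds and scores below are all hypothetical, purely for illustration: a voice-match score is combined with ‘invisible’ contextual layers (usual phone number, usual calling time), and repeated failures escalate the account to a live agent rather than allowing unlimited retries.

```python
MAX_FAILED_ATTEMPTS = 3  # assumed policy threshold, not HSBC's actual setting

class AuthSession:
    """Hypothetical layered-authentication check for one account."""

    def __init__(self, known_number, usual_hours):
        self.known_number = known_number      # number the customer usually calls from
        self.usual_hours = usual_hours        # e.g. range(8, 18)
        self.failed_attempts = 0
        self.flagged_for_agent = False

    def attempt(self, voice_match, caller_number, hour):
        """Decide on one call: no single factor is relied upon."""
        if self.flagged_for_agent:
            return "escalated"  # account already routed to a live agent

        # Layer 1: voice biometrics score (0.0-1.0, from a hypothetical engine)
        voice_ok = voice_match >= 0.9

        # Layer 2: invisible context - usual number and usual time of day
        context_ok = (caller_number == self.known_number
                      and hour in self.usual_hours)

        if voice_ok and context_ok:
            self.failed_attempts = 0
            return "granted"

        # Any failed attempt counts toward the escalation threshold
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_FAILED_ATTEMPTS:
            self.flagged_for_agent = True
            return "escalated"
        return "denied"
```

In this sketch, the mimicking twin's seven failed attempts would have tripped the escalation threshold long before a successful eighth try, routing the call to a person instead.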

With these steps in place, security experts insist that voice biometrics remains superior to the alternatives, both in terms of security and customer experience.

In his blog, Sanjith concludes: “This incident certainly opens all our eyes to the risks of voice biometrics, but it is vital that we take a breath and view this in the context of the specifics of the actual implementation, the alternatives and the overall scalability of the “experiment.” There is no ‘silver-bullet’ solution on the fight against fraud; and sometimes it takes a couple of very ingenious pranksters to remind us never to let our guard down.”
