
Five risks you run by using behavioural science in CX


Behavioural science is fast becoming the Next Big Thing in customer experience, particularly in customer journey mapping. But there are reasons to be wary of it...

12th Oct 2020

Behavioural science is fast becoming the Next Big Thing in customer experience, particularly in customer journey mapping. But while the benefits are being trumpeted, the risks – four of which are outlined below – are being overlooked.

There is also a fifth reason for thinking twice. When the business benefits of behavioural science were first identified, it wasn’t because the companies involved had set out to make behavioural interventions; the interventions were a secondary benefit of doing other things well. If you take this approach – treat behavioural impacts as a side benefit of attentive, fair and authentic customer experience delivery – then you are giving yourself the opportunity to enjoy a double benefit.

But if you make them a specific objective, you need at the very minimum to consider the risks below, because there is a good chance that you end up with the worst of all worlds – ineffective interventions and reputational damage.  

My experience with behavioural economics

My interest in behavioural economics goes back twenty years. One of the first papers I read on the subject was entitled Want to Perfect Your Company’s Service? Use Behavioral Science, published in the June 2001 edition of the Harvard Business Review (HBR). This was the first time I came across peak-end theory – the CX profession’s favourite behavioural insight.
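For readers unfamiliar with it, the peak-end rule holds that a remembered experience is shaped mainly by its most intense moment and its final moment, rather than by an average of every moment. A toy sketch in Python (the journey ratings are hypothetical, invented purely for illustration):

```python
def peak_end_score(moment_ratings):
    """Remembered rating under the peak-end rule:
    the average of the most intense moment and the final moment."""
    return (max(moment_ratings) + moment_ratings[-1]) / 2

def average_score(moment_ratings):
    """Naive assumption: memory weights every moment equally."""
    return sum(moment_ratings) / len(moment_ratings)

# Hypothetical per-touchpoint ratings for one customer journey (1-10 scale)
journey = [6, 5, 9, 4, 8]

print(peak_end_score(journey))  # 8.5 - driven by the peak (9) and the end (8)
print(average_score(journey))   # 6.4 - what a moment-by-moment average suggests
```

The gap between the two numbers is what tempts CX designers to engineer peaks and endings – and, as the risks below suggest, why doing so naively can backfire.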

Over the next couple of years, behavioural science became central to a book I was writing. My research included studying the original articles written by Daniel Kahneman and his long-time collaborator Amos Tversky, published in the 1970s and 1980s in journals such as Science and Psychological Review. I also studied the subsequent academic papers they published with other collaborators, along with the publications of other behavioural economists, both well-known – such as Richard Thaler and Dan Ariely – and less well-known.

In addition to including their insights in my 2011 book, Elusive Growth, I co-authored an e-book called How to counter the biases in business decision-making (link at end) describing the 70 or so biases that had been identified by that time and the impacts that they could have.

Despite this research, I would not describe myself as an expert in behavioural economics – in good part because I know from studying it what a complex field it is. And evidence of my limitations can be found in my tripping up in a recent LinkedIn exchange on the subject of memory. (I saw the heuristics as not being a part of memory – ignoring that memory has both conscious and unconscious components with habits and heuristics being shaped by the latter.)

But I can claim to have a strong understanding of its implications in a business context – enough even to persuade Kahneman in 2011 that he and his co-authors had omitted mention of probably the most significant bias as far as business decision-making is concerned in their HBR article published in June of that year. And having followed it for some time, I can perhaps bring a little perspective to the discussion, particularly as with any new management fad there are risks which are overlooked amid the general enthusiasm.  

Risk 1: Creating over-complexity, reducing likelihood of implementation

Inability to operationalise journey maps is a trap that organisations can fall into. A beautiful journey is drawn but with no link to the organisation, people, culture, process, technology and data initiatives required to deliver it. As a result, no improvement is delivered.

This problem arises when the map itself becomes the focus – it becomes the end rather than a means to delivering a valuable customer experience. The more focus there is on the map, the greater the incentive to create one that is layered and complex. But complexity is the enemy of effective implementation.

Incorporating behavioural science is a prime example of focusing on making a great-looking and attention-grabbing map while making operationalisation harder – not least because those responsible lack sufficient understanding to implement behavioural interventions without causing harm.

Risk 2: Implementation leading to harm rather than benefit

A couple of years ago a leading CX expert – and one I hold in very high regard – wrote a piece enthusing about incorporating peak-end thinking into experience design, using it to recommend removing elements from a proposition to make sure the peaks stood out. As I pointed out to this person, such an intervention would likely trigger loss aversion among customers, causing considerable dissatisfaction. Intervening in an attempt to benefit from one behavioural heuristic created a problem via another.

Behavioural science is a messy area. As mentioned above there are at least seventy known biases and they are not distinct and separable – more than one can come into play at any one time. So unless you have studied them extensively – i.e. to Master’s or PhD level – and understand how they interact and their potential trade-offs, the chances of causing more harm than benefit are high.

Take the example of trying to create memories – a hot CX topic currently. One of the reasons I fell into the trap outlined earlier was availability bias – a recent experience with Direct Line (amending my car insurance) had been so good that I forgot about it immediately. It was a case of job done, onto the next task on my list. Of course, in my subconscious, a favourable memory had been created. But the separation of active from passive memories is a nuance that many are likely to miss – as I did. And the potential for harm is considerable.


For example, there is a widespread belief in the CX community that a great experience is one that wows a customer. Of course, there are times where we want to be wowed. Also there are moments of truth where we have a lot of emotion invested in the outcome and want a provider to step up and really look after us. In both cases there is scope to create interactions that we actively remember and talk about as a consequence.

The question is what proportion of the total do these types of interaction constitute? Probably more than 90% of the product and service interactions that we have every day are ones where we don’t want active memories. Ease, convenience and minimal stress – along with a reasonable price – are our priorities. I certainly don’t want someone trying to create experiences that I will consciously remember when I would prefer them to be instantly forgettable. But misunderstanding the different types of memory could encourage overenthusiastic CX practitioners to try to do just that.

As an analogy, imagine you are at a speed dating event (i.e. you have qualified yourself as a new partner prospect for other attendees). Now imagine that the person across from you spends your limited time together trying to ensure you remember them. They will probably succeed on one level, but the chances of you wanting to see them again are very limited!

All approaches are subject to a process of adoption, adaptation and corruption – for the most part because people do not understand why certain steps are in place and seek shortcuts to reduce the effort involved. As a result, approaches cannot be evaluated solely by the value created if implemented as intended. The damage caused by mis-implementation must also be considered, and that risk grows as understanding shrinks. And when the area is as complex as behavioural science, the potential for misunderstanding and mis-implementation is very significant.

Risk 3: Inefficacy of behavioural interventions

Even if you have a PhD in behavioural science, there is still a question as to how valid its general principles are for improving the experience of an individual customer.

Graham Hill (one of the most widely-read CX experts I know) highlighted recently in a LinkedIn discussion that believing behavioural interventions are effective requires us to make a number of assumptions, one being “that we understand the emotional make-up of customers well enough to create positive emotions through our often, clumsy actions.” He added the caveat that “I very much doubt that we are half as smart as we think we are.”

He also pointed out a second critical assumption: “that customers all share the same emotional triggers, so that a common approach to artificially 'emotionalising' experiences will always create the same result.” Again adding a rider that “common sense suggests this is far from the truth.”

The enthusiasm for behavioural science runs counter to personalisation. On a functional level, it is increasingly possible to understand each customer’s context, infer what they are trying to achieve, identify the component jobs they need to complete and orchestrate the individualised help that will enable them to achieve their desired outcome. But on an emotional level, we seek simply to apply some general rules and expect them to work.

Of course, you can brute-force personalised emotionalisation using vast quantities of data – assuming you have access to it – but that leads us to the next risk.

Risk 4: Erosion of trust and reputational issues

Amazon, Google, Facebook, Twitter and others collect vast quantities of data about us. In conjunction with extensive A/B testing, they are able to develop detailed profiles of our interests, relationships and – to a certain degree – our emotional triggers.

The dystopian side to this is highlighted in Netflix’s The Social Dilemma which follows on from The Great Hack. For anyone who follows data ethics, the story in the documentary is not new. But because of its huge impacts – we are less able to agree on what is the truth, resulting in extreme polarisation on certain issues – the coverage of how our data is used to manipulate us is not going away.

Given it is a story that is likely to run and run, anything that resembles psychological manipulation risks becoming subject to ethical scrutiny and criticism. (In my view this is one reason why the UK government has distanced itself from the Behavioural Insights team set up to great fanfare in 2010.) In particular, using behavioural science to try to increase trust – as some have suggested – will be seen as especially duplicitous.


Of course, there is an argument that emotional triggers have been used in advertising for years with minimal brand damage. But with advertising we understand its intent and protect ourselves by donning a cloak of scepticism. When we are being served, our defences are down, so we are more open to suggestion and manipulation.

And you could also argue that we are CX professionals, and that our passion is looking after customers, not exploiting them. But setting ourselves up as moral arbiters is just begging for trouble. Google and Facebook could say the same, and in their early incarnations it was very possibly true. But once you create a capability that can be used to shape people’s behaviour, there is a very high likelihood that it will be used for purposes beyond its original intent.

The risk is that you end up with the worst-case scenario: behavioural interventions that are not particularly effective – as the sections above suggest is likely – combined with guilt ascribed through the perception of Machiavellian intent.

Solution: Make behavioural impacts an outcome of looking after customers rather than a specific objective

On rereading for the first time in twenty years the 2001 HBR article highlighted above, I was struck by the insight that none of the companies described had set out to deploy behavioural science for their benefit, it was just an outcome of other things that they were doing.

The most obvious example of this was Disney, for whom shorter rides increased visitor throughput and reduced waiting times. The benefit of two 90-second rides feeling longer than one 3-minute ride was just a side benefit. Equally, the ‘McKinsey grunt’ is a vocal expression of active listening – good practice for any consultancy. It is also in the nature of analysis-focused consulting projects that all the data only comes together late in the day, enabling the key insights to be delivered at the end so the engagement finishes on a high. And the practice of getting bad news out of the way early is part of managing expectations – long recognised in the CX profession as critical for satisfying customers. (I view managing expectations as a crucial part of behaving with integrity and authenticity – in particular not overpromising so that you achieve a sale – rather than a behavioural intervention.)

Given the infancy of behavioural science in business at that time, chance playing a bigger part than deliberate intent shouldn’t come as a great surprise, but it has significant implications.

It is my belief that if you are curious about your customers – seek to understand what it is they are trying to achieve, what challenges they face and how they feel as they try to overcome these challenges – and create solutions that enable them to do what they want to do, all the while interacting with them honestly and fairly; then the purported benefits of behavioural interventions will accrue naturally. In the process you avoid the extra complications, costs and risks of designing behavioural triggers into your experience but still deliver a great one.

*   *   *

For those wanting to know more about my Kahneman story (or fact-check me!) – here is the June 2011 HBR article that he co-authored and the August 2011 blog on the HBR website where I highlighted the significance of framing bias and its omission by Kahneman and his co-authors. In November of that year I questioned him on its exclusion via an online Q&A held on the Freakonomics website (see last question), and in his response he agreed on the primary importance of establishing the right decision-making frame and admitted “We may not have emphasised this point sufficiently in the checklist we proposed in the Harvard Business Review earlier this year.” (“Not emphasised sufficiently” being a slight understatement, as they didn’t mention it at all!)

The link to the e-book on behavioural biases that I co-authored can be found here. Two caveats – firstly, the focus was on business decision-making rather than consumer decision-making, though I reference this book as evidence of what a complex area behavioural science is, and this point still stands. Secondly, I have changed my views on the implications of certain biases (my consistency heuristic coming into play), so I suggest you focus on the potted summaries of the biases rather than the sections on their implications.

Finally I blogged on LinkedIn about why I thought my ‘memoryless’ experience with Direct Line was so good and how it contrasted with received wisdom of what a great customer experience looks like. If you want to know more you can find that here.  

Replies (1)


By Peter Dorrington
15th Oct 2020 15:17

Hi Jack,

As always, a thoughtful and thought-provoking article, but I have a few points I would like to share.

Human behaviour is indeed messy and complicated (I think the count is now up to 150+ cognitive biases, but some of that can be attributed to splitting hairs over definition). Nonetheless, businesses that do understand the motivation of behaviour can make proactive design decisions that better serve the behavioural and practical needs of customers. You don't need a PhD in Behavioural Economics for that - curiosity, reading and asking a few of the right people will take you a long way.

Behavioural Economics *alone* cannot explain every decision, but it can help understand some or part of them, this is also why 'nudging' can also be somewhat hit-or-miss: how I feel about a product is important, but not the whole story.

As to the ethical implications (e.g. dark psychology) – that's down to us, not the science: once a capability exists, it will be used – no point saying "we will not do X because it might be misused". If a business chooses to misuse it and gets caught, they will (rightly) suffer the consequences, but if an organisation uses it ethically to legitimately help customers, they will reap the rewards.

There are also risks in focussing solely on the most efficient way of achieving an outcome - as the recent research from MyC shows - empathising with customers (not just being nice to them) delivers far superior outcomes for both the customer and the business, but you can't empathise without listening and understanding.

Continuing with efficiency for a moment, lots of organisations are now focusing on effort reduction in the belief that 'low effort = good service'. However, when a customer really values something, effort can become a secondary consideration. Indeed, in many cases, it is part of the experience (think about how some companies lavish attention on complex packaging – unboxing becomes part of the ownership experience). Again, it's not the only consideration.

To end, Behavioural Economics is not the magic bullet some promise it to be, neither is it a short cut to customer experience excellence, but I would argue that it is worth the effort to understand more about why customers do what they do, so that you can help them get what they want, not just what they need.

Thanks (1)