
Customer experience smackdown: Customer Effort Score vs NPS vs satisfaction


MyCustomer examines the Customer Effort Score as a measure of customer experience - and looks at how it shapes up next to Net Promoter Score and customer satisfaction.

12th Apr 2012

Using recommendation- or satisfaction-based techniques to measure customer loyalty is a well-established practice in the marketing world, and not without its controversies. Fred Reichheld’s Net Promoter Score (NPS), the first of such measurements, determines customer loyalty based on the question ‘Would you recommend the company to others?’ and was hailed by Reichheld as the “single most reliable indicator” of company growth. It enjoyed a stint of popularity before becoming heavily criticised and making way for other measurements.

Customer satisfaction measurement (CSAT) determines how well an organisation meets its customers’ expectations. Following a transaction, customers are asked how satisfied they are with the company, typically rated from one (very dissatisfied) to five (very satisfied). It too has fallen under criticism, mostly for its lack of detail.

Customer Effort Score

But now there’s a new measurement on the block that once again has marketers divided - the Customer Effort Score (CES). Behind the measurement is US research and advisory firm Corporate Executive Board (CEB), which began its research back in 2008. Lara Ponomareff, research director at the CEB’s Customer Contact Council (CCC), who undertook the five-year study alongside Anastasia Milgramm and Matthew Dixon, explains that as products became more commoditised, customer service emerged as the differentiator.

Therefore, to examine the link between customer service and loyalty, Ponomareff and her team conducted a large-scale study of contact centre and self-service interactions. Respondents were asked whether their expectations were not met, met or exceeded, in order to determine future loyalty. “We found that in the difference between meeting and exceeding expectations there was no discernible increase in loyalty behavior. The biggest increase came from going below to meet,” she explains.

The research concluded that what customers really want is simply a satisfactory solution to the service issue, rather than to be “delighted” by over-the-top customer service experiences, as previous measurements assumed. The study surveyed more than 100 customer service heads - 89 of whom said that their main strategy was to exceed expectations - and found that a staggering 84% of customers who had experienced over-the-top interactions claimed their expectations had not been exceeded.

Additionally, the report revealed that such excessive levels of customer service (such as offering free products or services) make customers only slightly more loyal to the brand. In a move away from CSAT, which bases customer loyalty on satisfaction, Ponomareff explains that the research found only a weak link between satisfaction and loyalty: 20% of the ‘satisfied’ customers intended to leave the company in question, whilst 28% of the ‘dissatisfied’ customers intended to stay.

So to achieve customer loyalty, rather than exceeding expectations, organisations must reduce the effort that customers exert to get their problems solved. Simply put, companies must remove obstacles.

Implementing CES

The research identified several common customer complaints relating to effort, including having to switch from the web to the phone, calling back a second time to resolve an issue and being transferred. To combat these obstacles, there are a number of tactics that every company should consider when using CES, said Ponomareff. Companies must arm call centre agents to address the emotional side of customer interactions, minimise channel switching by increasing self-service channel ‘stickiness’, use feedback from disgruntled customers to reduce effort and empower the front line to deliver a low-effort experience.

Comparing CES with NPS and CSAT, Ponomareff argues that although NPS is a great relationship-level metric and a great complement to CES (capturing how the customer feels overall and their willingness to recommend), and CSAT likewise captures satisfaction, neither is as strong a predictor of loyalty over time as CES.

Despite this, Ponomareff is not advocating that CES be used as a measurement tool in isolation; rather, it should be integrated with existing company strategies and measured continuously over time. “A low effort company insisting on a low effort approach doesn’t mean dumping your current approach,” she says. “What it does mean is effort needs to be at the centre of everything you do and you need to refocus your current initiative around low effort – you want to think ‘What’s the impact on customer effort here?’”

The magic number?

So how does CES compare to NPS and CSAT and can it be used as a standalone measurement tool?

Bruce Temkin, chair of the Customer Experience Professionals Association, believes the concept of CES is broadly a good one, but that it doesn’t necessarily replace the others, nor is it any more of a panacea than NPS.

“NPS is a relationship type of question which is asking inherently based on what do you think of me as a company, will you recommend me? So it incorporates a bunch of things that go into deciding whether a customer's going to recommend you. The question itself is different than CES because the effort is closer to a satisfaction question. Satisfaction is really trying to point out how you feel about a specific company or how you feel about a specific interaction as opposed to how you feel about the whole company. So the CES could be structured as a satisfaction question - how satisfied are you with the effort it took to deal with the company?

“The actual implementation of any of these is how do you calculate the scores? NPS has their promoters, detractors and passives, there's 0-10 scales. CSAT has a whole bunch of different implementations around it, some do similar netting like NPS whilst some do average scores. And so the actual implementation of those makes them quite different and quite unique.”
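The scoring conventions Temkin contrasts can be made concrete. The following Python sketch is illustrative only: the function names, sample responses and cut-offs are assumptions rather than anything from the article, though the NPS bands follow the commonly published convention (detractors 0-6, passives 7-8, promoters 9-10).

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / n

def csat_average(scores):
    """One common CSAT implementation: the mean rating on a 1-5 scale."""
    return sum(scores) / len(scores)

def csat_net(scores):
    """An NPS-style 'netting' of CSAT: % satisfied (4-5) minus % dissatisfied (1-2)."""
    n = len(scores)
    satisfied = sum(1 for s in scores if s >= 4)
    dissatisfied = sum(1 for s in scores if s <= 2)
    return 100 * (satisfied - dissatisfied) / n

nps_responses = [10, 9, 8, 7, 6, 3]   # hypothetical 0-10 recommendation ratings
csat_responses = [5, 4, 4, 3, 2, 1]   # hypothetical 1-5 satisfaction ratings

print(nps(nps_responses))         # 2 promoters, 2 detractors of 6 -> 0.0
print(csat_average(csat_responses))
print(csat_net(csat_responses))   # 3 satisfied, 2 dissatisfied of 6 -> ~16.7
```

As the sketch shows, the same underlying ratings can yield quite different headline numbers depending on whether they are netted or averaged, which is Temkin’s point about implementations making the metrics “quite different and quite unique”.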

He adds: “There is no ultimate question, none. And that's because one, every business is different and they need to get feedback that's appropriate for their business, and two, the value is not in asking the question, the value is in taking action based on the insights that you find.”

When asked to compare CES to NPS, Morris Pentel, chairman of the Customer Experience Foundation, argues that this is impossible as they measure two very different human drivers. Like Temkin, Pentel agrees that there is no single measurement or question that can give you the answer you need, only a combination.

He says: “There are a number of different factors in terms of understanding your relationship with your customer. There are voice of the customer (VoC) pieces, the use of tools like surveys that aren’t questions like NPS and CES, there are also a range of other things from focus groups through to mystery shopping. There's a whole range of things that the modern organisation has at its disposal and needs to be using in order to effectively understand their customer relationships.”

Although both agree that a single question cannot measure satisfaction or loyalty, Temkin adds that companies in the same sector face a set of common actions - for example, a retail call centre sourcing customer feedback. Beyond that, however, neither NPS, CSAT, CES nor any other single question can by itself address the problems that are specific to your business and its environment, he concludes.

Replies (6)


By [email protected]
12th Apr 2012 10:02

Agree with everything said in the article above, especially comments regarding the lack of an overall measurement tool which ticks all the boxes. The use of multiple measures is the only way to really understand CE progress. Building a dashboard or something similar to a BSC is a good way of doing this and showcasing the data in a more valuable way. The biggest issue is getting organisations to prevent themselves becoming obsessed with a survey number and chasing detractors whilst at the same time having no real understanding of what the vision or pioneering stage of CE looks like. The question I'm continuing to ask is whether we actually understand the relative values of a promoter and a detractor.

Thanks (0)
Shaun Smith
By Shaun Smith
12th Apr 2012 14:21

I agree with Paul Roberts that no one metric is going to answer all situations and that you need to build an experience dashboard based on what your target customers truly value.

For example, if I am a bank or energy customer I will probably prefer to expend as little effort as possible in the relationship because it is transactional. However, if I have an interest in the product I might go to considerable effort to experience it, because that effort becomes part of the value. For instance, LEGO enthusiasts willingly devote time to creating new LEGO model ideas and posting these on the website, GiffGaff users are happy to create videos in their own time to help promote the brand, and Harley Davidson users are happy to go to the effort, expense (and pain!) of having the brand tattooed on their body. All of which is to say that CES is a blunt instrument unless you know to what extent effort is willingly given and part of the value equation for your target customers.


Shaun Smith

Thanks (0)
By pierman
13th Apr 2012 07:50

Shaun's comments make a lot of sense to me. In the work we do on customer experience we certainly see diversity in the level of effort that customers voluntarily apply to different purchases. We need to avoid any one simplistic measure & get an in-depth understanding of what the customer expects/wants.


Thanks (0)
By gar1969
16th Apr 2012 15:30

There's so much truth in this article. Many products have become commoditised and customers are expecting good service first time, every time, at every interaction. I say all the time that consistently giving the right answer first time is the key for the vast majority of customers. For me it's what differentiates service and experience: you can give the customer the wrong information wrapped up in good service, but it's still the wrong answer and thus a bad experience.

The key element in all of these measures is how you use them internally to drive your customer experience. Understanding what customers are telling you is critical to making people in the organisation accountable for their part in the customer experience. We try to tie the scores to the verbatim comments we get from customers by classifying each comment into a feedback system that is internally focussed at allocating accountability to departments for improvement. We also tie this to our complaints which we cost internally but also from a customer perspective in order to get people to understand what it costs for the customer to do business with us. It's allowing us to change the approach that other departments have to our customer experience and they are seeing the role that they play.


Thanks (0)
By Alison Bond
01st May 2012 15:25

For the last eight years we have developed an answer to this which we now use in many large organisations, and that is to measure in the benefits space. We work with the organisation to understand the key benefits their customers expect from them and measure those. For example, if someone is buying a pension then they want to feel secure about their financial future, so we measure how well this is being achieved by the business. We call them Halo measures because they measure the halo effect of the service being provided. This way there is no wastage in chasing satisfaction targets, which are generally too broad to be meaningful, and the staff delivering the products or services also know how far they are achieving what is required. As an aside, we measure satisfaction too, and for the organisations who consistently pursue strong Halo figures, satisfaction increases significantly. One long-term case study has taken satisfaction from under 50% to 75% in three years, and they have many thousands of staff. If more organisations measured this way it would make the world a better place.

-- Alison Bond

Thanks (0)
By Nicky
10th Aug 2012 11:18

We are looking to ask our customers for a self-rating on their customer effort. We want to do this on a 7-point scale (to maintain consistency with the other questions). Now we were wondering whether we should provide each level with a verbal label or only label the two extreme answers? It struck us that basically all articles about customer effort score only label the two extremes (very low effort - very high effort). But wouldn't it be better (more reliable?) to label all seven answer levels?

Thanks (0)