NPS is still the best CX metric we have! Here's why.
Net Promoter Score has received a lot of criticism recently. But how reasonable - and accurate - are the complaints? Maurice Fitzgerald debunks some of the objections to NPS and explains why it remains the best customer experience metric.
I have been hearing and reading quite a lot of criticism of NPS recently. Unfortunately, only a minority of what I have heard and read could be considered reasonable. The majority of the thoughts expressed have been... (This is me struggling to find a polite word...) short-sighted. And of course, many criticisms originate from companies trying to sell more complex solutions.
First, please let me be clear about one specific thing: NPS is the best CX metric we have simply because it is the most broadly understood, the most researched, and the easiest to communicate. NPS and every other survey-based metric can suffer the consequences of poor-quality survey methodology. If you have biased survey processes, low response rates, or are not asking the right people to answer, you will not get the results you need. And of course, if you do nothing to understand the input and take action both internally and externally, you are wasting everyone's time and money.
Now for some subtleties. Well, I don't find these subtle, but you may:
- The recommendation question behind NPS is the best revenue trend predictor of any single question in most but not all industries.
- This implies that other questions may be better than the recommendation question for specific purposes. The Customer Effort Score question has been compellingly shown to be superior for feedback after contact center events, for example.
- There are industries where other single questions may be better revenue predictors. Indeed, Reichheld and Markey use Enterprise Rent-A-Car as one of the main examples in The Ultimate Question 2.0, and Enterprise used 'overall experience' as their ultimate question. In short, if you don't yet have data proving that a different question is a better brand-level revenue predictor, use the recommendation question as your baseline. You can study the predictive value of other questions as you go along.
A further thought is that NPS will continue to be relevant in the new world of predictive analytics using AI software. The software can take all of your operational data and determine which metrics have the strongest relationship with customer retention.
However, the results of this sort of analysis can be hard to communicate effectively. It is extremely helpful to use NPS to explain the relationship between the most important KPI trends and the financial retention metrics. For example, you would show that customers tend to become Detractors if a KPI goes below a certain number. The behavioural categories in the NPS structure provide an easy way to communicate what the KPI trends mean and what their effect on financials will be.
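For readers who want the mechanics behind those behavioural categories, here is a minimal sketch. Respondents answer the 0–10 recommendation question; 0–6 counts as a Detractor, 7–8 as a Passive, 9–10 as a Promoter; and NPS is the percentage of Promoters minus the percentage of Detractors. The survey responses below are invented for illustration:

```python
def nps_category(score: int) -> str:
    """Map a 0-10 recommendation score to its NPS behavioural category."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score >= 9:
        return "Promoter"
    if score >= 7:
        return "Passive"
    return "Detractor"


def net_promoter_score(scores: list[int]) -> float:
    """NPS = percentage of Promoters minus percentage of Detractors."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if nps_category(s) == "Promoter")
    detractors = sum(1 for s in scores if nps_category(s) == "Detractor")
    return 100.0 * (promoters - detractors) / len(scores)


# Invented example: ten survey responses
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(net_promoter_score(responses))  # 5 Promoters, 2 Detractors -> 30.0
```

This also shows why NPS communicates so easily: a KPI threshold analysis ("customers tend to become Detractors when X drops below Y") maps directly onto the category boundaries above.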
Are the main objections to NPS legitimate?
Previously, I researched what those who do not support NPS have been writing over the years, and then wrote an article listing the objections I know about and what I think of each.
Here's a summary of these thoughts.
‘Compound metrics are better predictors of revenue and market share trends than NPS’
Yes, this is true in many cases. It is possible to design and implement more sophisticated measurements that are better predictors and a number of them exist. Reichheld and Markey have never claimed otherwise. Reichheld’s original contention was simply that the recommendation question was the best single-question predictor among those studied, for the majority of industries.
The issue with compound metrics and the reason none have ever caught on is that they are extraordinarily difficult to explain. And because they are not common, every single communication about the results has to start with an explanation of how the metric works. Too boring. Audiences go to sleep, no matter what the value of the sophisticated metric is.
The recommendation question is not always the best or most predictive single question to ask
Yes, that is true, and Reichheld says so in his 2003 HBR paper. Indeed, he refers extensively to Enterprise Rent-A-Car in the paper, and the question they ultimately used, which turned out to be a great predictor for them, was ‘Please rate your overall satisfaction with your rental’.
When eBay implemented NPS, their target customers were people making their living selling things on eBay. They found that when they asked the recommendation question, they got answers like ‘No, I am never going to recommend to a friend that they should compete with me on eBay.’ So they implemented ‘How likely are you to continue selling on eBay?’ and continue to use that question to this day.
Lots of other things affect growth more than NPS
That is certainly true in some industries. Overall, Bain's research and my own suggest that NPS trends predict between 20% and 60% of revenue / market share trends, depending on the company. There are clearly industries where external factors drive revenue entirely, and customer satisfaction drives market share only.
Take gold sales. Revenue depends on the price of gold. Market share will partly depend on how customers feel about you compared to your competitors. Where there are monopoly providers of essential resources like electricity, customer satisfaction has no effect on revenue, as should be obvious. You can find the relevant information from Reichheld and Markey here.
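To make the '20% to 60%' figure concrete: it describes the share of variance in revenue / market share trends that NPS trends explain, i.e. the R² of a simple regression of one on the other. A sketch of that calculation follows; the yearly figures are invented for illustration and are not Bain or HP data:

```python
def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Invented data: relative NPS trend vs. next-year revenue growth (%)
relative_nps = [-10.0, -4.0, 2.0, 8.0, 15.0]
revenue_growth = [1.0, -2.0, 3.0, 0.0, 4.0]

r = pearson_r(relative_nps, revenue_growth)
# R-squared: the share of variance in revenue growth 'explained' by NPS
print(round(r ** 2, 2))  # -> 0.29, i.e. about 29%, within the 20-60% band
```

With five data points this is only a toy, of course; the point is that the 20%–60% claim is a statement about explained variance, not about NPS being the sole driver of growth.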
NPS is easy to game
Yes, that is true, and the same is true of every other metric, such as CSAT or Customer Effort Score. Here is a blog post I wrote a couple of years ago about how to cheat: https://customerstrategy.net/gaming-survey-results/
Companies publish fantasy numbers as their NPS score in their annual reports
I wish that were not true, but it is. The only number they should publish is a double-blind competitive NPS benchmark score. Some companies choose to publish numbers that correspond to some positively-perceived subset of their business such as a support desk that only a minor proportion of their customers actually use. Again, the same self-measurement phenomenon could happen with CSAT or other metrics.
NPS is not a good predictor of revenue
While I will go into specific papers about the topic below, I want to make a few general points first. Not all NPS numbers are equal. The only scores that are consistent and reliable predictors of revenue and market share trends are double-blind competitive NPS benchmark trends.
Double-blind means that the people answering the feedback requests do not know who is funding it. Ideally the people analysing the feedback don’t know either, though this second ‘blinding’ may be tricky to achieve.
If, for example, the only NPS numbers you have are feedback from a support desk survey, you should not expect them to predict revenue or market share at all, unless yours is simply a support desk business. And of course, if your numbers improve but your main competitor's numbers improve even more, you will lose market share. I emphasise share rather than revenue because some industries are in decline. If your score is better than your competitors' scores, you may simply decline more slowly in that industry.
We did deep research on this while I was at HP and I have posted the top-level results of the relationships we found between relative NPS trends and revenue in various places.
Since then, Dr. James Borderick of Micro Focus has done even deeper research on the subject across more than 30 global software vendors and has once again demonstrated the strong relationship between NPS and market share trends. I don't know exactly what level of detail he has been allowed to publish, as his work does of course include some confidential company data.
Proof of the relationship between NPS trends and revenue / market share trends has never been published in a peer-reviewed journal
Not true at all. For example, here is a paper by Andrew Stephenson, Jana Fiserova, Geoff Pugh, and Chris Dimos that was accepted for the British Academy of Management annual conference proceedings; it actually won best paper at their 2018 event. The data is from a UK-based multinational furniture retailer with over $1B in annual revenue. It is quite a ‘deep’ read. To me, the essential point is their choice to use feedback gathered six months after purchase, which seems like a great way of getting a pure brand-level NPS number.
If you are at all interested in the subject, I invite you to take the time to read it. You will see that the relationship between NPS and revenue is established quite compellingly. The paper is here.
I suppose you can tell I feel strongly about some of these objections while considering others to be valid. If you have more that you would like me to examine, please feel free to share them.
Retired VP of Customer Experience for HP and HPE's $4B software division.
Author of four books on customer strategy, all available on Amazon.
NPS is an off-the-shelf measure rooted in statistics, so show me the evidence.
However, be clear: this is a minefield. Statistics is as much an art as a science, and I find it a shame that commentators do not give us the hard statistical facts.
For instance, Tim Keiningham's replication work showed it was no different from CSAT in terms of predictive power (AVE).
I concur with this, having reviewed tens of thousands of data points put through structural equation modelling and seen low scores on standardised AVE against spend and other hard variables.
(NPS for me shows all the characteristics of poor predictability you would expect from an off-the-shelf aggregate measure).
Likewise it is very highly correlated to CSAT.
Finally, it confuses correlation with causation.
In terms of its ontology, it is also flawed. To reference Dave Snowden and Cynefin, NPS only works where prior root cause is apparent - yet not all CX is predictable and mechanistic like this. The past does not necessarily predict the future.
Does that make it a bad measure - actually No.
It has cultural cachet and an interesting dynamic: it encourages action where no prior relationship exists. However, we should be careful not to treat it as anything more than a starting point, and we should avoid de-emphasising the value of qualitative research and ideation.
There is also one other angle: the gap between what people say and what they do (I say I recommend but have never actually done so), and, less considered, the difference between what people might say when they recommend and what they say in a survey about why they give a score. There is only a 50% relationship, so I might give a score of 0 out of 10 but say something positive to a friend. This was shown in the Ericsson research with Bharti Airtel that I have previously highlighted.
In short, it's a culture metric. And while there are aspects of root cause in the data (especially transactional and negative scores) that do work well and demonstrate ROI, companies, in their use of blunt instruments to help scale, should not treat it as the one number to grow.
Good piece and I totally agree on the points made above about NPS and other scores, but the scores on their own just tell you what, not why. Anybody who sees an NPS score on its own without any supporting commentary on what operational initiatives the company has done to achieve such scores, might be best served to ignore the scores altogether and look at other metrics like churn.
Is the obsession with scores a product of past inability to quantify qualitative data (oxymoron I know!) into something people can easily understand and act on?