In a recent article on Business Over Broadway, Bob Hayes argues that recommendation scores can be calculated as a Mean, as well as Top box and Bottom box - each providing a different actionable insight. According to the research Bob quotes, the 'Net' version of the score is hardly the most precise and reliable one and can be replaced - or at least supplemented - by other metrics.
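For the record, the four ways of summarising the same 0-10 'likelihood to recommend' responses can be sketched as below. This is a minimal illustration assuming the standard NPS bands (promoters score 9-10, detractors 0-6); the function names are mine, not from any library or from Bob's article.

```python
# Four summaries of the same 0-10 "likelihood to recommend" responses.
# Assumes the conventional NPS bands: promoters = 9-10, detractors = 0-6.

def mean_score(responses):
    """Plain mean of the 0-10 ratings."""
    return sum(responses) / len(responses)

def top_box(responses):
    """Share of promoters (ratings of 9 or 10)."""
    return sum(1 for r in responses if r >= 9) / len(responses)

def bottom_box(responses):
    """Share of detractors (ratings of 0 to 6)."""
    return sum(1 for r in responses if r <= 6) / len(responses)

def net_score(responses):
    """The classic 'Net' score: % promoters minus % detractors, on -100..100."""
    return 100 * (top_box(responses) - bottom_box(responses))

# Hypothetical sample of eight respondents:
ratings = [10, 9, 9, 8, 7, 6, 3, 10]
print(mean_score(ratings))  # 7.75
print(net_score(ratings))   # 25.0
```

Note how the Net score discards the passives (7-8) entirely - one reason a single 'Net' number can hide movement that the Mean or the box scores would reveal.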
Below is my mindless chatter in a LinkedIn discussion on the article:
Very valid thoughts on the different ways to calculate scores and what they tell us about customer attitudes. I agree that a single metric can be misleading; using more than one is always more revealing and supports better-informed decisions.
Since the emergence of the NPS 'school' and 'industry' I have observed one interesting departure from the underlying science:
1. All the research by the godfather(s) of 'the ultimate question' found a correlation between recommendations(!) and loyalty.
2. Why, then, do 'promoter' studies not look at recommendations, but have instead chosen the (very uncertain) 'likelihood' to recommend? That is intent, not fact, and a very slippery substance to base decisions upon (especially when the decisions involve multimillion budgets and decision makers are sold NPS as 'the only metric they ever need').
3. Wouldn't it be more pragmatic - and also more scientific (not to mention more reliable and accurate) - to base promoter scores on actual recommendations rather than declared intent?
Yes, it may seem more difficult to detect / record actual recommendations, but the gain in score reliability would be worth the effort of capturing the data. The simplest method is through referral reward programmes ('member get member' and the like), but there are various other solutions, specific to each industry and business.
Some of the better companies have taken the time (and investment) to build such capabilities - and are reaping the benefits. Others just ask 'Would you recommend us?' - and are delighted to hear 'Yes, I would' (but will they, really? Do they?)...
Even where documenting and storing the actual fact of recommendation for analysis is impossible (or impractical) - and insight has to be obtained by surveying 'samples' of customers - there are better ways to formulate that 'ultimate' question.
One could, for example, tweak the question in two ways:
(a) Changing the tense from future to past (fact, rather than intent), and
(b) Turning the binary Y/N into a specific quantitative question ('how many?'). Thus, if we ask:
"To how many people have you recommended us in the past X months?"
- most respondents would pause, try to remember, and count the instances when they have actually done it. When faced with numeric questions, most people try to be accurate and do not deliberately lie or wildly improvise. And the impulsive urge to please ('yes, I would recommend you') has much less of a chance to distort the response.
Scores will almost certainly be lower than with 'likelihood', but that shouldn't worry managers, because they are simply different metrics (apples and oranges). I like to think the past-tense numeric data would be more accurate, reliable and decision-supportive.
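To make the comparison concrete, here is one hedged sketch of how answers to the past-tense question could be summarised. The data and function names are hypothetical; the point is only that counts of actual recommendations yield behavioural metrics rather than an intent score.

```python
# Summarising answers to "To how many people have you recommended us
# in the past X months?" - one count per respondent (hypothetical data).

def recommender_share(counts):
    """Share of respondents who made at least one actual recommendation."""
    return sum(1 for c in counts if c >= 1) / len(counts)

def mean_recommendations(counts):
    """Average number of recommendations per respondent."""
    return sum(counts) / len(counts)

# Hypothetical sample of eight respondents:
answers = [0, 0, 2, 1, 0, 5, 0, 3]
print(recommender_share(answers))     # 0.5
print(mean_recommendations(answers))  # 1.375
```

Half the sample recommended at all, and the heavy recommenders pull the mean up - two behavioural signals that a single 'would you?' answer cannot provide.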
- - -
Just some random thoughts - ignore them if you believe in NPS as sold.