
Why an operational customer satisfaction score is the perfect complement to NPS

By using human insight together with technology, companies can define an operational satisfaction score – providing a better platform for experience creation and improved NPS scores.

11th Apr 2022

By using human insight together with technology, companies can define an operational satisfaction score (Op-SAT). Such a metric enables companies to reduce the probability of a low NPS (or other attitudinal) score arising. In this way, companies provide a better platform for experience creation.

The process of obtaining the score also means companies can start to depend more on operational metrics when they measure CX and less on a survey. Finally, Op-SAT releases the value of operations in your CX endeavours and enables you to put a value on its impact.

Why Op-SAT rather than, say, NPS alone?

Seen from the perspective of IT operations, there are many aspects to NPS that are difficult to directly influence, such as brand and communications. However, there are also many aspects of NPS that can be directly influenced by technology. It is these direct influences that we capture through Op-SAT.

Of course, this in no way undermines the criticality of brand and communications, especially since these can overwhelm operational endeavours – stories are powerful things! It is just that the focus here is on the more engineering-oriented aspects of NPS.

And what are these aspects? Well, a bundle of issues that operational metrics can flag and control. For instance, maintaining performance and red-flagging when a customer-defined operational threshold is breached, or making sure there is limited friction in the experience.

But here's the rub. Since CX scores must come from the customer, it stands to reason that we must also identify impactful operational experiences and their thresholds from the customer's point of view.

For instance, we might have established through our work with customers that we need to have a 97% benchmark threshold on delivery within 24 hours. Or we might have established the customer's concern over the number of clicks it took to get to a map. All operationally manageable issues, identifiable by the customer.
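
To make that concrete, here is a minimal Python sketch of red-flagging against customer-defined thresholds. The metric names and threshold values below are my own illustrative assumptions, not benchmarks from real research:

    # Sketch: red-flag metrics that breach customer-defined thresholds.
    # Metric names and threshold values are illustrative assumptions.
    CUSTOMER_THRESHOLDS = {
        "delivery_within_24h_rate": 0.97,  # customer-derived benchmark
        "clicks_to_map": 3,                # maximum acceptable click depth
    }

    def red_flags(observed):
        """Return the metrics that breach their customer-defined threshold."""
        flags = []
        for metric, threshold in CUSTOMER_THRESHOLDS.items():
            value = observed.get(metric)
            if value is None:
                continue
            # Rates must stay at or above threshold; counts at or below it.
            breached = value < threshold if metric.endswith("_rate") else value > threshold
            if breached:
                flags.append(metric)
        return flags

    print(red_flags({"delivery_within_24h_rate": 0.955, "clicks_to_map": 5}))
    # -> ['delivery_within_24h_rate', 'clicks_to_map']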

It's this customer input that is so critical to how we set operational satisfaction. And it's this that turns an operational metric into a customer-focused one. For instance:

If we were blind to the customer and said 'let's increase the threshold to 98%', we might assume an improvement counts when the customer couldn't care less. If we had not conducted our UX research, we might be blind to the impact of click-through rates on NPS.

Incidentally, we see this all the time. For instance, advisors telling clients that to improve CX they must pick up the phone 10 seconds quicker, or respond to a claim on the first call, when the customer just wants an accurate result and is quite happy to wait.

Another key aspect of Op-SAT is how, by focusing on the avoidance of dissatisfaction and the maintenance of satisfaction, we create a platform for experience creation. This draws on the Kano model of ‘must-be quality’, enabling better management of dissatisfiers.

Remember, we can only build an experience brand on the back of quality control.

Of course, managing dissatisfaction does not mean managing hygiene or must-be quality issues alone. However, for the sake of simplicity, I use the model to illustrate how operational satisfaction has an overwhelming focus on must-be quality issues, seen through the operational lens of 'make sure it works'. If it doesn't work, more dissatisfaction arises, which means lower attitudinal scores and lower revenue potential.

Example (sketched in code below):

  1. Operational management of video download speed to a customer-defined threshold.
  2. If this is breached then the customer is more likely to feel dissatisfied.
  3. This is reflected in an increased probability of a lower NPS score being recorded from the customer.
  4. Which also leads to a higher probability of churn or lowered use and spend.
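
A minimal Python sketch of this four-step chain; the speed threshold, baseline probabilities and breach multipliers are invented purely for illustration:

    # Sketch of the four-step chain above. Threshold, baseline risks and
    # breach multipliers are invented for illustration only.
    BASELINE_RISK = {"dissatisfied": 0.10, "low_nps": 0.15, "churn": 0.05}
    BREACH_MULTIPLIER = {"dissatisfied": 2.5, "low_nps": 1.8, "churn": 1.6}

    def risk_profile(download_mbps, threshold_mbps=25.0):
        """A breached customer-defined speed threshold (step 2) raises the
        modelled probability of dissatisfaction, low NPS and churn (3-4)."""
        breached = download_mbps < threshold_mbps  # step 1: operational check
        return {
            outcome: round(min(1.0, p * BREACH_MULTIPLIER[outcome]), 3) if breached else p
            for outcome, p in BASELINE_RISK.items()
        }

    print(risk_profile(12.0))
    # -> {'dissatisfied': 0.25, 'low_nps': 0.27, 'churn': 0.08}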

 

[Figure: the Kano model]

 

* Note: I don't agree with the Kano model in its prediction that satisfaction holds a linear relationship to attractive qualities. I believe satisfaction is bound by the norm of the product or service in question and subject to satisfaction treadmill effects. This means that while a rise in 'attractive quality' will temporarily boost a score to 9 out of 10, this will soon settle down to 8 out of 10 again (although the new 8 out of 10 score may well drive more value than before, i.e., it's less about the number than what it means).

How do we obtain an operational satisfaction score?

Start outside-in

We need to define the journey scope and personas before anything else. Then we follow this roadmap:

Firstly, we conduct close-in work on the experience using, for instance, UX research and focus groups/in-depth interviews (IDIs), as well as expert insights. This work identifies pain points and opportunities. Remember, at this stage we are just looking to generate hypotheses on the experience, i.e. what are the pain points? And which ones hit the operational stack? Of course, this has to have some quantitative basis, but this can be achieved: in UX research, for instance, 5-8 interviewees can uncover 80% of the issues.
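
As a back-of-the-envelope check on that claim, the commonly cited Nielsen/Landauer discovery model estimates the proportion of issues found by n interviewees as 1 - (1 - L)^n; the per-interviewee discovery rate of roughly 0.31 is the usual estimate from the usability literature, assumed here rather than taken from this study:

    # Nielsen/Landauer model: share of issues found by n interviewees is
    # 1 - (1 - L)^n, with an assumed per-interviewee discovery rate L.
    L = 0.31  # commonly cited estimate from the usability literature

    for n in (5, 6, 7, 8):
        print(f"{n} interviewees -> {1 - (1 - L) ** n:.0%} of issues found")
    # 5 -> 84%, 6 -> 89%, 7 -> 93%, 8 -> 95%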

We also obtain a baseline measure that relates the identified operational issues to a baseline satisfaction score. So, we identify the journey and measure the level of customer satisfaction with it. Let's imagine this is 8.3 out of 10.

Secondly, we look at the operational stack to see which of these human-insight-derived issues can be measured and flagged in the technical architecture.
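
In practice, this step boils down to a mapping from human-insight pain points to metrics the stack can monitor; a hypothetical sketch:

    # Hypothetical mapping from human-insight pain points to metrics the
    # operational stack can measure and red-flag. All names are invented.
    ISSUE_TO_METRIC = {
        "deliveries arriving late": "delivery_within_24h_rate",
        "map is hard to find": "clicks_to_map",
        "video keeps buffering": "video_download_mbps",
    }

    for issue, metric in ISSUE_TO_METRIC.items():
        print(f"'{issue}' -> monitor {metric}")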

Thirdly, we look at how this insight might be applied to artificial intelligence and machine learning algorithms and the creation of a category library. We use the human insight to label the training dataset.
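
What 'training with human insight' might look like is sketched below, assuming scikit-learn is available; the category labels and feedback snippets are invented for illustration:

    # Sketch: a human-labelled category library used to train a simple
    # feedback classifier. Assumes scikit-learn; labels and feedback are
    # invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    feedback = [
        "took ages to find the map",
        "delivery arrived two days late",
        "video kept buffering all evening",
        "couldn't locate the store map at all",
        "parcel still hasn't arrived",
        "stream quality dropped constantly",
    ]
    labels = ["navigation", "delivery", "streaming",
              "navigation", "delivery", "streaming"]  # the human insight

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(feedback, labels)

    print(model.predict(["the map took five clicks to reach"]))
    # likely -> ['navigation']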

Go inside-out

Then we go inside-out.

Fourthly, we gain hypotheses from AI/ML on which areas might impact the customer experience, complementing the earlier human insight.

Fifthly, we identify these in the operational metric stack.

Design the new experience

Sixthly, we make changes to the journey experience based on these results and the earlier outside-in results.

Retest

Finally, we retest the new experience with the small sample group to see if there has been some effect on our baseline satisfaction score. Let's say they rate the new experience 8.5 out of 10. This is your Op-SAT score, giving an uplift through operational satisfaction of 0.2.

t1: SAT 8.3

t2: Op-SAT 8.5

Value of operational changes within the same satisfaction measure: 0.2
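
The uplift arithmetic itself, as a trivial sketch:

    # The uplift arithmetic from the t1/t2 figures above.
    sat_t1, op_sat_t2 = 8.3, 8.5
    print(f"Uplift through operational satisfaction: {op_sat_t2 - sat_t1:+.1f}")
    # -> +0.2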

Important note: I have issues with linear scale measures and would advise enriching these with quali-quant processes – in particular, the metric 'more stories like this, fewer stories like that' (source: Cognitive Edge). This gives you both the number and the meaning while taking account of the cognitive science of human response. Further discussion on that is for another blog.

Ideally, we would also conduct a pre-post survey at a quantitative scale to ascertain the impact on NPS (and other CX metrics) of the manageable and scalable operational changes. Again, this derives an Op-SAT score to compare to your earlier satisfaction score (note how we are still asking the customer to rate the experience in the same way as satisfaction; it's just that Op-SAT highlights that some operational change has been made).
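
A minimal sketch of that pre-post NPS comparison, using the standard NPS definition (% promoters scoring 9-10 minus % detractors scoring 0-6) over invented survey responses:

    # Standard NPS: % promoters (9-10) minus % detractors (0-6).
    # The pre/post survey responses below are invented.
    def nps(scores):
        promoters = sum(s >= 9 for s in scores)
        detractors = sum(s <= 6 for s in scores)
        return 100 * (promoters - detractors) / len(scores)

    pre = [9, 7, 6, 8, 10, 5, 9, 7, 8, 6]
    post = [9, 8, 7, 9, 10, 6, 9, 8, 9, 7]

    print(f"NPS pre: {nps(pre):+.0f}, post: {nps(post):+.0f}")
    # -> NPS pre: +0, post: +40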

The important point is that these changes are now made through the operational stack, so they apply across the whole estate in real time, without the need to survey on all the issues raised.

Interestingly, this raises the point that we can also value how much impact operations has on customer experience, which by implication means we can value how much impact non-operational issues are having on CX – that is, if you can set up the relevant comparison sample in your experimental design.
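
One way to set up that comparison is a simple difference-in-differences between the changed journey and a comparison sample; the figures below are hypothetical:

    # Difference-in-differences sketch: operational uplift net of whatever
    # moved satisfaction in a comparison sample. All figures hypothetical.
    treated_pre, treated_post = 8.3, 8.5  # journey receiving operational changes
    control_pre, control_post = 8.3, 8.4  # comparison sample, no changes

    operational_effect = (treated_post - treated_pre) - (control_post - control_pre)
    print(f"Operational contribution to CX: {operational_effect:+.1f}")
    # -> +0.1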

Business-as-usual

We have now set up a test-and-retest structure for further work that infuses operations and AI/ML with human insight, ensuring we impact customer experience KPIs operationally and on an ongoing basis.

Critically, we see this effect through the Op-SAT transformation.

The challenge of this is cultural: to get operational and IT people to engage with customer research and insights. In my experience, that is a hard challenge.

This article is adapted from a piece that originally appeared on the All About Experience blog.

 
