Cynefin framework: Find the right CX measurements to use by solving a major metrics misconception
The standard approach to measuring CX is flawed because off-the-shelf measures such as NPS assume that all types of customer experiences are the same. Here, we examine a framework that enables CX professionals to determine what type of experience we are dealing with and what type of measures to apply.
Customer experience (CX) is a complex mix of what customers think, feel, and do. So, when it comes to managing CX, which metrics should we use?
Faced with such complexity, the standard approach to measuring CX is either to use an off-the-shelf measure such as Net Promoter Score (NPS) or to build a data lake, where data engineers and AI define what is salient.
However, these approaches are only partially correct. Why? Because they incorrectly assume that all customers' thoughts, feelings and actions exist in an ordered, engineering-type system, when the truth is you cannot manage feelings in the same way as you would repair a car.
And if you do, you risk programme failure.
It is the contention of this article that what is required is a shift in thinking. CX professionals need to start overlaying the Cynefin framework onto their mapped experiences. By following this approach, CX professionals will be able to determine the right metrics for the right system, explain why they need a bundle of metrics, and define where hidden value might lie.
"Tell me what the experience is, and I'll tell you what to measure"
The Cynefin framework (see the diagram below) is a problem-solving tool developed by Dave Snowden, based on concepts from knowledge management and organisational strategy. It helps you to put situations into five domains defined by cause-and-effect relationships, which in turn helps you to assess your situation more accurately and respond appropriately.
Source: The Cynefin Co.
Here's how the Cynefin framework can help you with CX metrics.
Ordered system: Clear metrics
Clear metrics relate to situations where the measurement outcome is binary and clear-cut - even if the execution is not. They cover an estimated 25% of experiences.
Example experience metrics:
- As a surgical assistant, you hand instruments to the surgeon. You are targeted with 0% of instruments being left in the patient at the end of an operation.
- As a network operator, you can see the availability of a mobile signal. You are targeted on 99% availability across the network.
- As a courier, you can track the success of package delivery. You are targeted with 97% of packages being delivered on time.
Success is measured through binary metrics: "I counted them all out; I counted them all back again"; "I reviewed the operational data, and we achieved our aim or not."
Customer and business value is easy to demonstrate:
- If an instrument is left in the patient, they risk serious injury or death.
- If a network is down, customers cannot make a purchase.
- If a package is not delivered, a sale is not concluded, and customers will not come back again.
CX measurement mostly involves dashboarding the right operational metrics. This means that CX professionals are less dependent on surveys since operational metrics are capable of being monitored in real-time, over the whole customer base.
This system identifies new sources of substantial value.
It is clear what the right metrics should be, although some metric thresholds are more clear-cut than others: there is a greater zone of tolerance in network availability than in surgical success rates.
It is easy to dashboard an alert and automate a response to any failure e.g., telling customers about a late delivery.
Operational metrics define moments of pain and any blockers to use. The latter is important since there is no point in focusing on higher rates of engagement if say the customer can’t get a mobile signal.
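Because clear metrics are binary and operational, the alert-and-respond pattern described above is straightforward to automate. Here is a minimal sketch using the courier example's 97% on-time target; all names and the data shape are hypothetical, for illustration only:

```python
# Illustrative sketch: turning a clear operational metric into a
# dashboard alert with an automated customer response.
ON_TIME_THRESHOLD = 0.97  # the courier target from the example above

def on_time_rate(deliveries):
    """Share of deliveries flagged on_time; deliveries is a list of dicts."""
    if not deliveries:
        return 1.0
    return sum(1 for d in deliveries if d["on_time"]) / len(deliveries)

def check_and_respond(deliveries):
    """Return (alert_raised, messages), where messages notify late customers."""
    alert = on_time_rate(deliveries) < ON_TIME_THRESHOLD
    messages = []
    if alert:
        for d in deliveries:
            if not d["on_time"]:
                messages.append(f"Sorry {d['customer']}, your package is running late.")
    return alert, messages
```

The point is that no survey is involved: the metric is monitored in real time over the whole customer base, and the response (telling customers about a late delivery) is triggered directly from the operational data.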
Although companies want NPS and other survey methods to count in this system, the reality is they are only partially relevant.
CX professionals frequently lack the skills to engage with this dataset.
Any operational threshold must be determined with the customer in mind, for instance through user testing. Without the customer, there is a risk of believing that some improvement in an operational metric - say, speed of response - will improve the experience, when in fact it does nothing.
Culturally, field engineering and operations teams are not attuned to customer experience.
Operational metrics are more about loss aversion, e.g., removing experiences that cause churn rather than creating experiences that drive loyalty or improved rates of engagement. It is therefore important not to define customer experience in terms of operational experiences alone.
An operational metric only becomes a CX metric if it impacts what customers think, feel, and do. This is frequently forgotten in the race to 'measure everything'.
Ordered system: Complicated metrics
Complicated experience metrics promise the most vendor and consulting dollars; hence, they are the most promoted. This is where firms start to build causal models that identify predictive situations. They account for an estimated 35% of experiences.
A key difference between clear metrics and complicated metrics is the extent to which expertise is required to uncover them.
Example experience metrics:
- As a network operator, you work out the causes of churn. You find that four dropped calls predict churn and use that as a basis for network improvements.
- As a retailer, you profile your detractors. You find that low scores on NPS strongly correlate to poor customer service.
- As a courier, you review your website performance and use digital behavioural data to uncover moments of pain.
- As a marketer, you combine data on identified subscribers and use recommendation engines to push out more personalised offers.
Success is measured through analytical modelling. Past experience can provide best practice since mechanical problems have root causes. "I have a problem, I analyse it; I find a solution."
Customer and business value is easy to demonstrate since we define root cause effects through the application of engineering and analytics.
CX measurement mostly involves using analytical approaches. This gives comfort to leadership that customer experience can be managed, and ROI (Return on Investment) defined.
Modelling approaches are readily available and, with the growth in AI, becoming ever more sophisticated, predictive, and prescriptive.
Automated responses can be built into the analysis.
Analytics can find order and predictability in survey data. For instance, when someone is angry about something, there tends to be a causal link.
Ordered environments are often associated with the objectives of reducing cost to serve and increasing sales.
Different experts may define different solutions.
Companies over-index on removing pain or using recommendation engines to send offers when in fact, the customer is more concerned about other aspects of the experience.
There is an overreliance on predictive response and measured efficiency. This limits the ability to develop engaging experiences. It is far easier to define points of failure in, say, a digital experience that would cause drop-offs than to risk investing in better engagement with an unknown result.
Culturally, the focus on complicated metrics makes CX professionals myopic to other types of metrics. An example of this is how qualitative, ethnographic, and trial-and-test programmes are subservient to quantitative complicated approaches. This leads to a surface-level understanding of the customer.
There are issues of data amalgamation i.e., it is hard and expensive to get hold of the right cleansed data.
Modelling large datasets can derive spurious correlations. A data engineer may then concoct a story to support their findings.
CX professionals mistakenly believe that subjective data needs to be ‘objectified’. This leads to the perception of the customer as a piece of mechanical data with contextual effects standardised and stripped out.
CX objectives such as improving loyalty, word of mouth, and relationship are less amenable to an ordered system approach.
Complicated metrics typically view the customer through a myopic lens, e.g., field engineers view the customer through the lens of the mobile signal, while CX executives use NPS data that views the customer only through the lens of a gameable score. The implication: AI is used to assume customer NPS scores, and tNPS assumes customers evaluate everything accurately, ignoring the fact that brand halo and other effects can overshadow individual touchpoints, such as how the service rep spoke to me.
Dispositional system: Complex adaptive system (CAS) metrics
This accounts for an estimated 35% of experiences - but take care, subtle shifts can lead to fundamental changes and that 35% could on occasion be much higher.
The dispositional system is new to most CX professionals, so let's start off with an explanation.
At its heart, we are dealing with rapid and unpredictable change. The key phrase there being 'unpredictable change'. As an example, try to answer the following question: how can we estimate which way a murmuration of starlings will turn next?
There is no way to predict this from past behaviour, but there may be signals within the group now that show it is heading in a certain direction.
These signals tell us how the flock is disposed. Something we can assess through close observation of present behaviour.
Another common analogy is that these systems are akin to a forest ecosystem, where the environment emerges from present and ongoing interactions. It evolves. Hence, the past fails to adequately predict the future and we need to measure where things are going, not where they have been. We need to detect the weak directional signals of change.
Now let’s think about how this applies to customer experience.
Complex adaptive events
- If you are an air steward, last week the passengers all boarded without incident. This week, however, an argument breaks out when one passenger - who sneaked two pieces of luggage on board - pulls out the luggage of another passenger to make room for their second holdall. An entirely unpredictable experience for the air staff to handle.
- If you are a manager of a busy A&E ward, consider whether you can accurately predict the reactions of the next person who walks in the room.
Explanation: the experience is changeable; it's not fixed like a car engine. However, this is not a counsel of despair. If we get close enough to the experience, we can see how it is disposed and can predict 'from the present' where things might be heading.
Complex adaptive feelings
- When you do your weekly shopping, consider whether your thoughts and feelings are the same as they were last week and whether you can predict what they will be next week.
- As an employee, consider how changeable day-to-day interactions with your boss and co-workers affect your mood.
Explanation: you are changeable, you are not a car engine! However, if we get close enough to understand how you are feeling, we can see how you are disposed and can predict 'from the present' where things might be heading.
Complex adaptive objectives
Now let's consider how complex adaptive systems relate to our business objectives.
Feeling- and perception-based metrics such as relationship, loyalty, trust, personalisation, recommendation and satisfaction are frequently the outcome of daily changeable interactions, and how we interpret them. Therefore, if CX professionals are targeted on these metrics, CAS metrics must be part of the scorecard.
To illustrate this, let's take trust.
A trusting relationship with your neighbour is achieved through small everyday actions. You would lend them your garden shears; invite them over for a Xmas drink; say hello. What you would not do is offer your garden shears for a price. To quote Dr Olaf Hermans: we all see the intent behind the eyes.
Now, from a business point of view, let's think about trust.
To build trust with your brand, you would also focus on those elements that show your intent towards the customer. What you would not do is worry about the ROI of each experience. That would create the wrong environment - more Ryanair than Marks and Spencer.
Show your intent correctly and you design a more resilient experience where customers are more forgiving of mistakes and more willing to engage with the brand.
For instance, customers might be more forgiving of a Marks & Spencer contact centre rep than one from Microsoft. Customers might be more willing to queue at Primark than at WH Smith. Customers might let you know what personalisation means to them and when you are achieving it: a different approach from calculating personalisation based on the ROI of 'personalised ads and discounts', which may be having the opposite effect.
"Companies that say customer relationship is important and then try and demonstrate the ROI of every action, badly miss the point"
What measures do we apply?
For CX professionals this is all well and good, but without a measure of complex adaptive systems (CAS), shouldn't we just use NPS, on the basis that this is the best we can do?
The fact is, there is a measure of CAS. It's just that, in order to pick up changeable signals, it is...
Less a linear scale....
...more a flow diagram.
Source: The Cynefin Co.
For CX professionals tasked with measuring the experience and customers' feelings, this means using:
Narrative measures, vectors, and flow charts - complex adaptive measures, sensitive to how things change - rather than:
A fixed scale over a set of pre-defined closed questions - a complicated measure that assumes experience and customers' feelings are predictable and unresponsive to change.
At this point, CX professionals may say that they do pick up weak signals of change by using narrative, or by monitoring digital data. However, these approaches still assume customer response and experience is entirely complicated. What we need is to add in a complex adaptive measure.
In terms of customer feelings, this means a measure more in tune with cognitive science than computer science.
Here is a summary of what we can do to our current measurement approaches to ensure they are more sensitive to the complex, unpredictable environment of human thoughts and feelings, and better at picking up changeable, unpredictable events.
1. Ensure your analytical models flex to human interpretation
Flying home from a business trip, I filled out a 15-item survey. None of the questions were relevant to my experience, and there was no space for me to write down what I truly felt. While it may be attractive for the firm to fix the questions like this, they would miss out on any weak signals of change. Hopefully, you can also see that this inability to flex holds major implications for how we build segmentation models and define personas.
2. Add a layer of human interpretation into your sentiment analytics
Sentiment analytics fix an answer based on past input, or flex in ways that assume human cognition is similar to a computer programme.
This is not the way to measure the complexity of thoughts and feelings. Instead, in CAS, any scoring of narrative must come from the customer - a process called signification.
This is the only way to assess how humans (a complex adaptive system) would interpret their own changeable narrative. And in any case, human interpretation is frequently not based on the words written on the page so we need something that goes beyond a literal approach.
3. Use description not just evaluation
Customers do not evaluate every moment: "No customer walked out of a store and said that was a great 8.5 out of 10 experience". What is meaningful to the customer is not the outcome of some best-fit approach: there is no mathematical equation for trust in the mind of the customer.
Instead, customers first-fit: identifying what is resonant from their memory and expressing this in a conceptually blended response such as "I love the Apple brand", "You guys really helped me out in a crisis", etc.
In other words, customers often respond in a more abstract and descriptive way; telling us how they are disposed.
If we are interested in getting good data to measure the complexity of thoughts and feelings, we need to enable this natural descriptive voice and not assume a ‘complicated’ view of customer response as always in evaluative mode. In this way we better pick up our leading indicators.
4. Focus on disposition
Complicated systems have predictable root-causes. Complex adaptive systems have dispositionality.
This means we cannot know beforehand where things are heading. Remember the murmuration of starlings: we need to be sensitive to the present to see where things are heading.
Where does dispositionality arise in customer data?
Dispositionality frequently shows up in the way customers give hard-to-pin-down, abstract responses to survey questions. It gives us dispositional rather than exactly predictive data.
Take customer feelings. I might say "I don't like your customer service" or "you helped me out in a crisis". This type of response may not provide directly predictive information, but it is still a leading indicator of, say, the trusted environment you create. It also provides the basis for trialling and testing new solutions.
"Service quality metrics are much like this. There may be no direct predictive ROI of an interaction - 'when you helped me out' - but that doesn't mean you should close customer service or make it difficult to find."
Dispositionality also surfaces new material the customer is becoming aware of and being influenced by.
For instance, a few customers at a beach resort may raise a negative issue, such as seagulls stealing chips. For them it is a major nuisance, although other customers have yet to see it as a problem. If we are alert enough, however, we can nip this in the bud before it grows. In other words, we create an early warning system, alerting us to threats and also to any opportunities as they emerge.
Note that complex adaptive systems are often characterised by sudden shifts, i.e. when a tipping point is reached. Being sensitive to how things are disposed enables companies to be more resilient to change.
5. Probe - sense - respond
Amending our survey processes to capture dispositional and unpredictable change as it arises means we are sensitive to how things are disposed now. This is important, since we cannot use the past to predict the future.
This also means we need to follow a probe-sense-respond approach to CX.
- Probe – set up the CAS survey and/or a series of trials and tests of the system (the experience) to see what is happening now.
- Sense – become alert to the results.
- Respond – nudge the customer in the way we would like.
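The three steps above can be sketched as a simple loop. This is my own illustrative sketch - the names and signal logic are hypothetical, not a standard implementation:

```python
# Hypothetical probe-sense-respond loop: run small experiments (probes),
# read the signals they produce, and respond only where a signal appears.
def probe_sense_respond(probes, sense, respond):
    """probes: experiments to run; sense(probe) -> signal or None;
    respond(probe, signal) -> action taken. Returns actions per probe."""
    actions = {}
    for probe in probes:
        signal = sense(probe)  # Sense: become alert to the results
        if signal is not None:
            # Respond: nudge the experience in the way we would like
            actions[probe] = respond(probe, signal)
    return actions
```

The design point is that action follows observation of the present, not prediction from the past: probes that produce no signal are simply left alone.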
6. Add a new CX metric
Cognitive science tells us that the best measure of the complex space in customer experience is to measure customer narratives through sensemaking. In essence: "Our key CX metric becomes more stories like this, fewer stories like that."
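As a toy illustration of what such a metric might look like in practice, here is a tally of customer-signified stories. The schema and names are my own sketch, not a standard sensemaking method:

```python
# Toy 'more stories like this, fewer stories like that' tally. Customers
# signify their own stories ('more' or 'fewer'); the metric is the balance,
# a directional vector rather than a fixed score.
def story_balance(stories):
    """stories: list of (text, tag) pairs, tag in {'more', 'fewer'}.
    Returns a value in [-1, 1]: positive means momentum towards
    stories we want more of."""
    more = sum(1 for _, tag in stories if tag == "more")
    fewer = sum(1 for _, tag in stories if tag == "fewer")
    total = more + fewer
    return (more - fewer) / total if total else 0.0
```

Note that the signification (the tag) comes from the customer, not from an analyst or a sentiment model, which is the crucial difference from a 'complicated' scoring approach.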
The implications of not including CAS measures
Failing to include CAS metrics means you over-index on ordered system approaches. This means you fail to take account of other influences that might overwhelm the predictions of an ordered metric and become myopic to the weak signals of directional change.
Here is an example to show you the dangers of such an approach:
Measuring the employee experience
"Sure, they had their values stuck on a wall, but how did I feel as an employee? That was wholly negative. The undermining of my work by my boss; the lack of support from the sales team; the bully over from the US who advertised my job and constantly undermined me behind my back. It was these interactions that counted, which is why I disconnected from my work and had to leave."
How we feel about where we work is not the result of a fixed set of values. It is dependent upon our day-to-day interactions with others. Interactions that are changeable and unpredictable. Indeed, it would be utterly incorrect to apply a set of fixed values to measure such experiences.
If, for instance, you had measured the use of digital platforms, you might have come away with the impression that this employee was disengaged and not being successful at their job: and it was all their fault.
If you had measured their eNPS score, you might have come away with the view that there was no problem, since employees are hardly going to tell you the truth over a formalised scoring system.
In this example, the best measurement approach is to use complexity science. This enables employers to become aware of the weak signals of change, i.e., how the quality of employee interactions is leading to failure and distress.
"Context is king. It is a fundamental component of your customer experience efforts. Sure it costs money to understand context, but that's still cheaper than depending on poor quality and inaccurate data".
A significant challenge comes from the fact that CAS metrics represent a paradigm shift in our understanding of customer experience.
In complex adaptive systems, value is derived from our being close to and understanding the emerging present. This poses a direct challenge to existing models of CX change with its industrial metrics, standardised linear executive dashboards and posters on a wall. Instead, it asks us to consider data that is big, rich, and deep i.e., the ongoing narrative, qualitative and ethnographic input as well as co-created ideas to trial.
And trial we must. Customer experience here is more about being design-led and design-informed than being dependent on the frequently false machinations of a data model that tries to shoehorn in predictability to satisfy a C-level decision-maker.
It also means that we should look to gain clues on the customer experience from other ecosystem partners who can identify change and develop innovative ideas such as employees on the front line.
Example: How CAS metrics gave a different interpretation of NPS than traditional measures
When Ericsson used complex adaptive techniques to measure how customers felt about their mobile signal, they used narrative CAS measures. Interestingly, at the same time, they ran a comparator 'complicated' study in the same Delhi circle with the same sample size (1,500). The results from the complex adaptive survey showed that even though the focus was on the signal, customers' main concern was customer service; most of the time the signal worked great. Furthermore, they found that two-thirds of detractors said they would not rule out recommending the brand, while one-third of promoters said they would never recommend it.
"By using complexity techniques the brand found that: promoters don’t promote, detractors don’t detract, and your most important audience were the passives"
By comparison, the traditional NPS survey was gamed with closed questions to ensure that customers would say it's all about the mobile signal. So of course, buy our platform.
You also have to ask yourself: which dataset would you trust? One that enables the natural voice of the customer, or one that fixes the questions customers are allowed to answer?
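For reference, the NPS arithmetic both studies rest on is simple: classify each 0-10 score, then take the percentage of promoters minus the percentage of detractors. The sketch below uses the conventional cut-offs; the example scores in the test are invented, not Ericsson's data:

```python
# Standard NPS arithmetic: promoters score 9-10, passives 7-8,
# detractors 0-6; NPS = %promoters - %detractors.
def classify(score):
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def nps(scores):
    n = len(scores)
    promoters = sum(1 for s in scores if classify(s) == "promoter")
    detractors = sum(1 for s in scores if classify(s) == "detractor")
    return round(100 * (promoters - detractors) / n)
```

The point of the Ericsson comparison is that this arithmetic says nothing about whether 'promoters' actually promote or 'detractors' actually detract; that only surfaces when customers narrate their own behaviour.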
Chaotic system: Chaos metrics
I will not linger too long on this, partly because any experiences in this domain should be easy to identify. For instance, a new product has entered the market and you need to react; or a disaster recovery situation has arisen; or there has been a Covid outbreak that has fundamentally changed the customer experience.
The approach in this domain is essentially 'just do it'. Measurement is less applicable given the immediacy of action.
This accounts for an estimated 5% of experiences.
Why should we be bothered with CX metrics?
Customer experience metrics come from the customer. They are concerned with what customers think, feel, and do.
We want to measure customer experience because we want to create more value for the customer and hence more value for the business through, for instance, brand differentiation, reduced churn, increased engagement and, yes, efficiency (as viewed by the customer).
How are CX improvements measured?
Since customer experience improvements are measured by the customer, survey and customer research techniques prevail. Otherwise, we might assume we have improved the ‘experience the customer has’ when we might have done nothing at all.
For instance, it would be correct to amend our experience based on an analysis of how customers share Instagram pictures or speak on the phone or via live chat. For some companies, it may also be correct to prioritise our actions based on a cost-first approach, for instance, seeking to influence the most costly contact centre journeys as defined by customer intent code.
But, does that mean the experience has improved?
At some point, we need to either validate actions with the customer in clear/complicated domains or surface new points of value from the customer in complex adaptive domains.
Hence, for all those involved in machine learning and AI, you must engage human insight (e.g., focus groups, UX research, ethnography, and quali-quant) in the training of your algorithms and other operational processes. No human insight, no CX.
How do we extract value from customer experience?
Much of the value of CX is derived from preventing negative effects from showing up on our customer scores.
For instance, preventing a mobile signal from going down, ensuring a smooth UX, or focusing on transactional scores at every touchpoint reduces the propensity for low NPS, CSAT or CES results, or more negative sentiment. These approaches tend to hold direct, root-cause impacts. Simply put, if these failures did happen, customers would notice and tell us.
Contrast this to assumed direct effects that are not in fact customer experience related i.e., 'if they did happen, customers would not notice them and there would be no change in CX score'. An example of this could be a 10-second increase in picking up the phone.
In our CX metric stack we must also include dispositional effects: the things customers say about their interactions with you that are not predictable but are nonetheless important as indicators of things such as trust. These are the background effects upon which root causes are derived. For instance, we trust the Apple brand but not Microsoft.
We look after these vector and leading indicator effects by managing and being sensitive to ongoing interactions and nudging customer experience in the right direction.
What are the methods currently deployed?
NPS is used because CX professionals are unable to find any better method to measure customer experience. In effect, NPS works about as well as a random dice roll.
What are the problems with current methods?
However, traditional CX metrics such as these fail to measure accurately because they only take account of ordered systems; and even then they:
- undervalue how much of an ordered system can be measured through operational metrics;
- leave value on the table when dealing with complex adaptive environments.
This value deficit can be seen in data analytics, i.e., the poor predictive power of attitudinal metrics such as NPS; the limitations of assuming customer data is always mechanical; and the lack of quali-quant insight that answers the questions 'why?', 'what else are we missing?' and 'what is the genuine voice of the customer?'
Clear system metrics
Existing operational metrics are adequate, although currently underused.
Companies should include these metrics in their CX scorecard.
CX operational metrics should use a hold-out sample of customers who can define relevant thresholds and inform on improvements.
Complicated system metrics
Existing attitudinal measures such as NPS and analytical modelling frameworks work well and can be further enhanced with AI.
Complex adaptive system metrics
These are heavily underused. However, they can add value to existing metrics by showing how the customer is disposed to your brand. They enrich NPS and other metrics by scanning the horizon for change.
You can also add in more nuanced approaches such as ethnography and qualitative research to get a better handle on change. Scaling is possible with quali-quant approaches such as mass narrative capture and community platforms.
Remember, though, that while we can measure CAS dispositionally, you can't use predictive analytics; you have to try things and see what happens. This means more emphasis needs to be put on a design-led approach to customer experience.
A personal opinion
Personally, I don't believe that companies should depend on a single scaled customer metric. Ideally we should be aware of what stories to amplify (more stories like this) and dampen (fewer stories like that) wherever and whenever they arise or could arise.
We should then show the C-level what we have done to improve customer value. For instance, with Avios CX success was defined by 23 new initiatives that delivered a mix of ROI and soft metric improvements. Likewise, with Motor Insurers' Bureau the demonstration of success came from what we did: the production of 100 CX-based user stories.
In both examples, if we had just focused on NPS rather than looking for customer experience improvements across the journey, we would have left value on the table. This equally applies to AI/ML algorithms and operational metrics not tempered by human insight.
What does this mean? It means using Cynefin to decide which form of metric to apply against which experience.
And the implications of that are, we sometimes need to probe-sense-respond. The ROI of customer experience being derived from what we do - the outcome of our trials - not just predictions from the past.
An analytical approach to CX focused on root-cause ROI is not wrong, it just needs to be balanced with a design-led approach to CX that defines ROI from trials and also understands that sometimes it's also about the environment we create.
I agree... sometimes, CX is about reducing the friction revealed by AI and text analytics. This lowers the probability of dissatisfaction, disloyalty, and low NPS. Although I would still like a hold-out sample to see if this is true! Here, you are an efficiency brand and service quality prevails.
But... I also agree that CX is about understanding from the customer: 'why', 'what else is missing' and 'what is the natural voice'. Here, you are an experience brand and experience quality prevails.
Customer experience is a 'big piece of pie'. To get a handle on it, all you need to do is ask what type of experience we are seeking to measure, and go from there.
Acknowledgement: Tom Kerwin for his help and advice; Dave Snowden and the Cognitive Edge team, especially Nathalie.
This article adapted from a piece that originally appeared on the All About Experience blog.