Customer feedback: How collection methods skew results
The method used to engage with customers for their feedback can have a significant impact on the nature of the feedback you collect and, consequently, the metrics you report.
Engaging with customers and collecting their feedback is a competitive edge available to any enterprise willing to invest in data management, process optimisation, and creative design. Insights from customers collected through surveys and other feedback mechanisms are the backbone of Customer Experience (CX) management.
There is often rigorous debate about which metrics you should collect with your Voice of the Customer (VoC) programme, such as Net Promoter Score®, Customer Satisfaction and Customer Effort Score, to name a few.
However, something that is often overlooked is the method used to engage with customers for their feedback, despite this having a significant impact on the nature of the feedback you collect and, consequently, the scores you report to your organisation and key stakeholders.
In this post, I provide an example of how different collection methods can significantly impact your metrics, and how you can leverage different methods to achieve the two most important, yet distinct, deliverables of any VoC programme: how are we doing, and how do we improve?
A tale of two engagement methods
Let’s look at a hypothetical example based on our experiences with our clients.
A website that provides services for current product owners uses two engagement methods to collect feedback on their website, during the same time period:
- Sample A: a survey invitation presented once the visitor has completed their session (post-session survey).
- Sample B: a feedback option available during the visit itself (in-session feedback).
Visitors are engaged randomly via only one of these methods during their visit and are asked to rate their experience on a scale from 0 to 10. However, these surveys show a dramatic difference in the ratings provided: Sample A skews strongly positive, while Sample B skews strongly negative.
So, this data raises the question: is this company offering a good website experience, as Sample A suggests, or is it in need of serious intervention, as the results in Sample B suggest?
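To make the contrast concrete, here is a minimal sketch in Python. The ratings are invented purely for illustration, not data from the example above, but they show how two samples collected from the same site can produce wildly different averages:

```python
# Hypothetical 0-10 ratings from the two engagement methods.
# These numbers are invented for illustration only.
sample_a = [9, 8, 10, 7, 9, 8, 9, 10, 8, 7]   # random post-session invitations
sample_b = [2, 1, 3, 0, 2, 4, 1, 2, 0, 3]     # in-session feedback at moments of friction

def mean_rating(ratings):
    """Average score on the 0-10 scale."""
    return sum(ratings) / len(ratings)

print(f"Sample A mean: {mean_rating(sample_a):.1f}")  # skews positive
print(f"Sample B mean: {mean_rating(sample_b):.1f}")  # skews negative
```

Reported in isolation, either average would tell a very different story about the same website, which is exactly why the collection method behind a metric matters.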
The impact of different engagement methods on VoC metrics
First, it’s important to remember the two key goals of a useful VoC programme:
- Better understand how visitors feel about their overall experience, at key points of interaction with your brand.
- Find points of improvement you can address through targeted initiatives.
To achieve the first goal, we need to determine which of these samples provides the most representative view of typical visits to the site. Representativeness can be a highly debated topic, given the low response rates most VoC surveys achieve (the non-response bias question).
One way to determine potential bias is to compare data from a secondary source where you have a view of the whole population – clickstream/web analytics, POS or CRM, for example. You can compare your samples on a few data points to the distribution in your secondary data and establish, to your comfort level, any bias in your samples. Note that bias is not inherently bad, but it must be understood so the data can be interpreted and leveraged appropriately.
In our example above, we can compare each sample’s clickstream behaviour to the clickstream for the website as a whole, where we can observe every visit. Below is an example comparing the distribution of time on site for both samples:
In this example, the respondents from Sample A behave very similarly to the Non-Participants group, strongly suggesting that Sample A is much more representative of typical visits to the website. As such, we seem to have a site that usually delivers a good experience. Using this metric for analysis will help you identify trends, drivers, key segment differences and other patterns as you develop your VoC strategy.
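One simple way to quantify how closely a sample's behaviour matches the full population is the two-sample Kolmogorov-Smirnov statistic: the largest gap between the two empirical distribution functions. Below is a rough sketch in pure Python, using invented time-on-site values purely for illustration; in a real programme these would come from your web analytics data:

```python
def ks_statistic(xs, ys):
    """Largest gap between the two samples' empirical CDFs
    (the two-sample Kolmogorov-Smirnov statistic)."""
    thresholds = sorted(set(xs) | set(ys))
    def ecdf(sample, t):
        return sum(v <= t for v in sample) / len(sample)
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in thresholds)

# Invented time-on-site values (minutes), purely for illustration.
all_visits    = [2, 3, 3, 4, 4, 5, 5, 6, 7, 8]   # clickstream: every visit
respondents_a = [2, 3, 4, 4, 5, 5, 6, 7]         # post-session survey takers
respondents_b = [8, 10, 12, 12, 15, 18, 20]      # in-session feedback givers

print(ks_statistic(all_visits, respondents_a))   # small gap: similar behaviour
print(ks_statistic(all_visits, respondents_b))   # large gap: very different behaviour
```

A small statistic for Sample A and a large one for Sample B would support the interpretation above; for a formal significance test on real data, a library function such as `scipy.stats.ks_2samp` could be used instead of this hand-rolled version.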
So, what of Sample B (in-session feedback)? These respondents average much longer visit times. This behaviour, combined with the negative skew of the data, suggests these visitors likely reached a moment of friction in their experience, and eventually chose to express their frustration.
Feedback provided at a moment of frustration often contains the valuable ingredients from which you can build tactical and strategic initiatives to improve customer experience. As such, the ‘in-session feedback’ method in this case can be an ideal solution for the second goal of a successful VoC programme: providing actionable insight that improves the customer experience.
Sample B demonstrates that, despite having a strong site overall, there are thousands of poor experiences to address. Aiming for zero bad experiences is a perfectly legitimate goal.
The benefit of using multiple engagement methods
Once an enterprise has settled on what metric(s) to track, the focus quickly shifts to asking what can be done to improve the experience. Management and other stakeholders need to know what initiatives, tactics or programmes they can launch to improve sentiment and help them achieve their business goals.
As we just saw, the representative feedback from Sample A is captured once the visitor has completed their session, giving the company an idea of how all their customers currently view the overall experience. The negatively skewed feedback from Sample B, on the other hand, is captured at a moment of frustration along the customer journey and provides the ingredients from which actionable tactics can be developed and executed.
Implementing just one of these methods already provides a great source of insight if you are looking to better understand your CX. Implementing both engagement methods in combination, however, allows you to fully cover a touchpoint and improve the customer experience:
This two-pronged approach captures the key moments of friction that need to be addressed, while also giving you an accurate picture of the overall experience you leave with your customers.
How you listen matters
There is no such thing as ‘wrong’ customer feedback. However, different engagement methods will yield different results.
Part of the role of a CX professional is to present results to key stakeholders such that they tell the right story and provide the right insights to co-develop the right customer-centric programmes. At its most basic, a VoC programme must deliver on the two dimensions mentioned at the beginning of this post:
- How are we doing?
- What can we do?
Each method of collecting customer feedback informs CX professionals differently on these two key dimensions, and each has its share of benefits and biases. When it comes to VoC, it is not just about what questions you ask; it’s also about when, where and how you ask these questions.
Each of these ingredients affects the nature of the data you collect, and the role it plays in delivering value to each dimension. After all, the value of a solid VoC programme comes from the insights that lead to the creation of initiatives to improve customers’ experiences along their journey, across all touchpoints.
Lane Cochrane has more than 20 years of management and business development experience in the market research and analytics industry. As Chief Innovation Officer at iperceptions, Lane is responsible for developing customer analytics offerings that maximize the value of customer research within the evolving customer experience management ...