
Customer feedback: How collection methods skew results

28th Jun 2018

The method used to engage with customers for their feedback can have a significant impact on the nature of the feedback you collect and, consequently, the metrics you report.

Engaging with customers and collecting their feedback is a competitive edge available to any enterprise willing to invest in data management, process optimisation, and creative design. Insights from customers collected through surveys and other feedback mechanisms are the backbone of Customer Experience (CX) management. 

There is often vigorous debate about which metrics you should collect with your Voice of the Customer (VoC) programme: Net Promoter Score®, Customer Satisfaction (CSAT) and Customer Effort Score, to name a few. 
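Each of these metrics is simple arithmetic over the raw ratings. As a minimal sketch (the ratings list below is made-up illustrative data, not real survey results), the standard NPS calculation on a 0–10 scale looks like:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical batch of 0-10 ratings from one survey
sample = [10, 9, 9, 8, 7, 6, 3, 10, 9, 5]
print(nps(sample))  # 5 promoters, 3 detractors out of 10 -> 20.0
```

The same batch of ratings can yield very different headline numbers depending on which metric you choose, which is one reason the metric debate is so lively.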

However, something that is often overlooked is the method used to engage with customers for their feedback, despite this having a significant impact on the nature of the feedback you collect and, consequently, the scores you report to your organisation and key stakeholders.  

In this post, I provide an example of how different collection methods can significantly affect your metrics, and how you can combine methods to achieve the two most important, yet distinct, deliverables of any VoC programme: how are we doing, and how do we improve? 

A tale of two engagement methods 

Let’s look at a hypothetical example based on our experiences with our clients.  

A website that provides services for current product owners uses the two engagement methods below to collect feedback on their website, during the same time period:  

[Image: Voice of the Customer 1]

Visitors are engaged randomly via only one of these methods during their visit and are asked to rate their experience on a scale from 0 to 10. However, these surveys show a dramatic difference in the ratings provided:    

[Image: Voice of the Customer 2]

So, this data raises the question: is this company offering a good website experience, as Sample A suggests, or is it in need of serious intervention, as the results in Sample B imply? 

The impact of different engagement methods on VoC metrics 

First, it’s important to remember the two key goals of a useful VoC programme: 

  1. Better understand how visitors feel about their overall experience, at key points of interaction with your brand.  
  2. Find points of improvement you can address through targeted initiatives. 

To achieve the first goal, we need to determine which of these samples provides the most representative view of typical visits to the site. Representativeness can be a highly debated topic, given the low response rates most VoC surveys achieve (the non-response bias question). 

One way to detect potential bias is to compare your data against a secondary source where you have a view of the full population – clickstream/web analytics, POS or CRM, for example. You can compare your samples on a few data points to the distribution in your secondary data and establish, to your comfort level, any bias in your samples. Note that bias is not inherently bad, but it must be understood so the data can be interpreted and leveraged appropriately. 
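One simple way to quantify such a comparison is to bucket a behavioural variable (time on site, say) for both the population and each sample, and measure how far the distributions diverge. The sketch below uses total variation distance and hypothetical bucket counts; the numbers, bucket boundaries and sample labels are illustrative assumptions, not real data:

```python
def distribution(counts):
    """Normalise raw bucket counts into proportions."""
    total = sum(counts)
    return [c / total for c in counts]

def total_variation(p, q):
    """Total variation distance: 0 = identical distributions, 1 = disjoint."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical time-on-site buckets (e.g. 0-1 min, 1-5 min, 5+ min)
population = distribution([5000, 3000, 1000])  # all visits, from clickstream
sample_a = distribution([480, 310, 95])        # post-session survey respondents
sample_b = distribution([120, 260, 600])       # in-session feedback respondents

print(total_variation(population, sample_a))  # small -> looks representative
print(total_variation(population, sample_b))  # large -> skewed to long visits
```

A small distance suggests the sample mirrors typical visits; a large one tells you the sample over-represents a particular behaviour and should be read accordingly.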

In our example above, we can compare each sample’s clickstream behaviour to the clickstream for the whole website, where we have every visit. Below is an example comparing the distribution of time on site for both samples: 

[Image: Voice of the Customer 3]

In this example, the respondents from Sample A behave very similarly to the non-participants group, strongly suggesting that Sample A is much more representative of typical visits to the website. As such, we seem to have a site that usually delivers a good experience. Using this metric will help identify trends, drivers, key segment differences and other analyses to develop your VoC strategy. 

So, what of Sample B (in-session feedback)? These respondents average much longer visit times. This behaviour, combined with the negative skew of the data, suggests these visitors likely reached a moment of friction in their experience, and eventually chose to express their frustration.   

Feedback provided at a moment of frustration often contains the valuable ingredients from which you can build tactical and strategic initiatives to improve customer experience. As such, the ‘in-session feedback’ method can be an ideal way to address the second goal of a successful VoC programme: providing actionable insight that improves customer experience.  

Sample B demonstrates that, despite the site being strong overall, there are thousands of poor experiences to address. There is nothing wrong with setting a goal of zero bad experiences.   

The benefit of using multiple engagement methods 

Once an enterprise has settled on which metric(s) to track, the focus quickly shifts to what can be done to improve the experience. Management and other stakeholders need to know what initiatives, tactics or programmes they can launch to improve sentiment and help achieve their business goals.  

As we just saw, the representative feedback from Sample A is captured once the visitor has completed their session, giving the company a picture of how all their customers currently view the overall experience. On the other hand, the negatively-skewed feedback from Sample B is captured at a moment of frustration along the customer journey and provides the ingredients from which actionable tactics can be developed and executed. 

Implementing just one of these methods already provides a great source of insight if you are looking to better understand your CX. Combining both engagement methods, however, allows you to fully cover a touchpoint and improve the customer experience: 

[Image: Voice of the Customer table]

This two-pronged approach gives you a clear record of the key moments of friction that need to be fixed, while also giving you an accurate picture of the overall experience you are leaving with your customers. 

How you listen matters 

There is no such thing as ‘wrong’ customer feedback. However, different engagement methods will yield different results.  

Part of the role of a CX professional is to present results to key stakeholders so that they tell the right story and provide the right insights to co-develop the right customer-centric programmes. At its most basic, a VoC programme must deliver on the two dimensions mentioned at the beginning of this post: 

  1. How are we doing? 
  2. What can we do? 

Each method of collecting customer feedback informs CX professionals differently on these two key dimensions, and each has its share of benefits and biases. When it comes to VoC, it is not just about what questions you ask; it’s also about when, where and how you ask these questions.  

Each of these ingredients affects the nature of the data you collect, and the role it plays in delivering value to each dimension. After all, the value of a solid VoC programme comes from the insights that lead to the creation of initiatives to improve customers’ experiences along their journey, across all touchpoints.

Replies (3)


By Alyona Medelyan
13th Jul 2018 23:45

The score I give depends on my particular mood at the time of asking. I've given a range of scores to Confluence, our internal knowledge base, when asked in-session. The focus shouldn't be on collecting scores but on the actual customer feedback in customers' own words. What are the important improvements Confluence can make to my experience, and why? Answers to such questions will provide more insight than scores, in my opinion.

Alyona Medelyan
CEO at Thematic

Thanks (0)
By Madeline Turner
09th Dec 2019 21:01

I have to agree with Alyona on this. The verbatim feedback behind the score adds the context necessary to address frustrations and breakdowns across the customer journey. Take NPS scores as an example. The score is a good indication of performance, but it is the verbatim customer feedback (and the trends across this feedback) that provides direction for how companies should respond.

More on measuring the verbatim feedback here:

Would love your thoughts.

Thanks (1)
By tmahmood
24th Jun 2020 13:12

At Syntellio we have experimented a lot with when we ask our clients' customers for feedback. Timing is clearly key: you want to engage the customer after they have received the product and had time to use it, but not so late that thinking about the purchase is no longer of interest.

However, I agree with Alyona and Madeline that qualitative insight from customers is what is most useful for a business making decisions. Even so, we have refrained from offering 'pointers' to areas we want customers to comment on (such as delivery, product quality, etc.). Instead, an open text box allows customers to say what they want, how they want. We are monitoring this, however, to see whether it provides the best insights for our clients, and will keep it under review.

We've written about the process of collecting feedback and any thoughts would be welcome:


Thanks (0)