
John Sollars looks at the most common pitfalls that can befall businesses when conducting a customer survey - and how to either avoid or limit their impact.
I recently wrote an article on the pitfalls of customer testing and surveys. I felt a follow-up was necessary because surveys really are powerful tools for gathering feedback, but they need to be undertaken correctly. Here’s my take on the top pitfalls of conducting a customer survey, and how to either avoid or limit their impact.
Demographic bias
When we ran our own survey at Stinkyink, the respondents were overwhelmingly over 56, which left three possible explanations:
- Does no one younger than 56 buy printer ink at Stinkyink?
- Do people younger than 56 not read or register for our email (which pushed the survey)?
- Are people over 56 the only ones willing to complete our survey?
Whichever explanation is right, the lesson is the same: the channel that delivers your survey shapes who responds. Always compare your respondents’ demographics against your actual customer base before drawing any conclusions.
Sample size
The results of a survey with a larger sample size are more accurate and reliable. Think of women’s beauty products advertised on TV: “80% of women agree it’s the best”, with small print reading “25 people were sampled”. Consumers are getting wise to these ploys, so avoid that pitfall straight away.
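To put numbers on that, the 95% margin of error for a simple proportion is roughly 1.96 × √(p(1−p)/n). Here’s a minimal sketch in Python; the sample sizes are just the figures from the advert example above, nothing more:

```python
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p observed in a sample of size n."""
    return z * sqrt(p * (1 - p) / n)

# The TV advert: 80% agreement from a sample of just 25 people.
print(f"n=25:   +/- {margin_of_error(0.80, 25):.1%}")    # ~ +/- 15.7%
# The same 80% result from a sample of 1,000 people.
print(f"n=1000: +/- {margin_of_error(0.80, 1000):.1%}")  # ~ +/- 2.5%
```

With only 25 respondents, that “80% agree” claim could plausibly sit anywhere from the mid-60s to the mid-90s; at 1,000 respondents it tightens to a couple of percentage points either way.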
Leading questions
A leading question plants the answer you want in the respondent’s mouth. Compare:
- Do you agree that our service is very good?
- How would you rate our service?
The first invites agreement; the second leaves the customer free to judge. Also beware questions no rational consumer would say no to, for example “Do you wish our prices were lower?”
‘Response influence’ is another form of leading question, and a common issue with polls. Showing respondents the existing answers has a major impact on their choice: if they see that 97% of people have had a good experience, they’re much more likely to click the same option (unless they had a particularly strong negative feeling in the first place).
Does the customer even know?
My previous article discussed how Walmart conducted a survey which reached the conclusion that “the majority of customers want a less cluttered store”. That conclusion cost the company $1.85 billion in revenue.
Customers may want less clutter, but they also want choice. Removing the stacked mid-aisle pallets and end-rail (end cap) displays gave customers exactly what they said they wanted, but it also stripped out Walmart’s famously wide visible selection and the ease of spotting bargains.
This highlights the need to test every conclusion you draw from a survey before acting on it. Whether it’s a website split-test or a new service trialled on a sample of customers, take every precaution possible to make sure that what the customer says they want really is what’s best.
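If you do run a split-test, the evaluation itself is simple. Below is a minimal sketch of a two-proportion z-test in Python; the visitor and conversion counts are hypothetical placeholders, not figures from the Walmart case:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal approximation
    return p_a, p_b, z, p_value

# Hypothetical numbers: control page vs. the redesigned page.
p_a, p_b, z, p = two_proportion_z_test(conv_a=412, n_a=5000, conv_b=398, n_b=5000)
print(f"control {p_a:.1%} vs variant {p_b:.1%}  (z={z:.2f}, p={p:.2f})")
```

A non-significant result like this one is exactly the warning that acting on the survey alone would never have surfaced.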
Survey rewards
The first problem is the nothing-response: a customer clicks through without answering a single question, solely for the reward. These are easy to filter out during analysis, but they are still a waste of time, and costly if you’re paying for the survey per response. Making an answer compulsory counteracts this.
Required answers unfortunately lead to speed-responses. A customer can simply click through a multiple-choice survey, selecting any option without reading the question. This is especially common when the survey is a string of Yes/No choices with a reward at the end.
This is almost impossible to prevent outright. The best way of highlighting these responses is question duplication: ask for an opinion on delivery twice, for example, and if you get grossly different answers you likely have a random clicker whose results can be filtered out.
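As a minimal sketch of both filters (the field names, the 1-to-5 rating scale and the answers themselves are made-up assumptions, not output from any real survey tool):

```python
# Each response maps question IDs to answers; None means unanswered.
# 'delivery_1' and 'delivery_2' are the same question asked twice.
responses = [
    {"id": 1, "delivery_1": 5, "delivery_2": 5, "price": 4},
    {"id": 2, "delivery_1": None, "delivery_2": None, "price": None},  # nothing-response
    {"id": 3, "delivery_1": 1, "delivery_2": 5, "price": 3},           # inconsistent speeder
]

DUPLICATES = [("delivery_1", "delivery_2")]
MAX_GAP = 1  # duplicated answers more than 1 point apart look random

def is_nothing_response(r: dict) -> bool:
    """True when every question was skipped (reward-only click-through)."""
    return all(v is None for k, v in r.items() if k != "id")

def is_inconsistent(r: dict) -> bool:
    """True when any duplicated question pair disagrees beyond MAX_GAP."""
    for a, b in DUPLICATES:
        if r[a] is not None and r[b] is not None and abs(r[a] - r[b]) > MAX_GAP:
            return True
    return False

clean = [r for r in responses if not is_nothing_response(r) and not is_inconsistent(r)]
print([r["id"] for r in clean])  # -> [1]
```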
The final issue with rewards is the psychological effect: it is difficult to criticise a company that’s about to reward you. Counteract this by, firstly, reinforcing the need for honesty. Tell customers why you need them to be as honest as possible.
Additionally, you can try to sculpt a path through the questions that reminds the customer how they felt when the issue arose. For example, start by asking if they’ve had a faulty item. If yes, ask for more details, such as when and where it happened. This rebuilds the scenario in the customer’s head and should rekindle the original feelings.
Standardised responses
Compare these two ways of collecting the same feedback:
Q1 - Please review the service you received from [company].
{text box of 400 characters}
Q2 - Please review [company’s] service in the following sections:
Delivery - Liked / Disliked / Not Sure
etc
Question 1 will provide some amazing feedback, but 10,000 of those responses will require significant text analysis before you can draw actionable conclusions. Question 2 is simple to analyse, but it limits what customers can comment on, stifling the results.
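To see why the structured version is so much cheaper to analyse, here’s a minimal sketch tallying hypothetical Question 2 answers; a 400-character free-text answer offers no equivalent shortcut:

```python
from collections import Counter

# Hypothetical Question 2 answers for the "Delivery" section.
delivery_answers = ["Liked", "Liked", "Disliked", "Not Sure", "Liked"]

tally = Counter(delivery_answers)
total = sum(tally.values())
for option, count in tally.most_common():
    print(f"{option}: {count}/{total} ({count / total:.0%})")
# Liked: 3/5 (60%), and so on; free-text answers need real text analysis first.
```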
The best fix for this is to run a small test survey of, say, 10-20 participants. Ask for their feedback and you will quickly find the problems. Don’t forget that, ultimately, a non-standardised question can still be worth the analysis time if the feedback is valuable enough.
Conclusion
So there you have it! It’s a lot to take in, I know, but if you don’t address these areas you may as well survey yourself 1,000 times; the results will be just as meaningless.
If you’ve experienced any of these pitfalls, or have any other survey concerns to add, then please do comment below and help others avoid the same mistakes.
John Sollars is MD of printer ink supplier Stinkyink.com.