MyCustomer.com

Finding out what your customer wants: How to do an accurate survey

by John Sollars
5th Dec 2011

John Sollars looks at the most common pitfalls that can befall businesses when conducting a customer survey - and how to either avoid or limit their impact.

I recently wrote an article on the pitfalls of customer testing and surveys. I felt a follow-up was necessary: surveys really are powerful tools for gathering feedback, but they need to be undertaken correctly. Here’s my take on the top pitfalls of conducting a customer survey, and how to either avoid or limit their impact.

Demographic bias

Unless your product/service is tightly targeted (e.g. bus tours for the over-60s), you want as broad a respondent demographic as possible. For example, a recent survey my company undertook had 53% of respondents labelled as older than 56. What could explain that?
  1. Does no one younger than 56 buy printer ink at Stinkyink?
  2. Do people younger than 56 not read or register for our email (which pushed the survey)?
  3. Are people over 56 the only ones willing to complete our survey?
You need to be aware of imbalances like this and try to allow for them when you analyse your results. Examine the conclusions you draw: are they consistent across demographics, for 20-30 year olds, male versus female, and so on?
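As a purely illustrative sketch of that kind of segment check (the field names and figures below are invented, not from any real survey), a headline score can be broken down by age band in a few lines of Python:

from collections import defaultdict

responses = [
    {"age_band": "56+",   "satisfaction": 5},
    {"age_band": "20-30", "satisfaction": 3},
    {"age_band": "56+",   "satisfaction": 4},
    {"age_band": "31-55", "satisfaction": 4},
]

def satisfaction_by_segment(responses):
    """Average the 1-5 satisfaction score within each age band, so a
    conclusion drawn from the whole sample can be checked against
    each demographic separately."""
    scores = defaultdict(list)
    for r in responses:
        scores[r["age_band"]].append(r["satisfaction"])
    return {band: sum(s) / len(s) for band, s in scores.items()}

print(satisfaction_by_segment(responses))
# e.g. {'56+': 4.5, '20-30': 3.0, '31-55': 4.0}

If the per-segment figures diverge sharply from the overall figure, your headline conclusion is probably being driven by whichever group dominates the sample.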

Sample size

The larger the sample size you employ, the more statistically significant your results will be (assuming it’s an even sample). Though a larger sample can often be much more expensive to obtain, you’ll find conclusions easier to draw from a larger data set. The more responses you have, the deeper you can also dig into the figures, segmenting by demographics and response choices.

The results of a survey with a larger sample size are also more accurate and reliable. Think of women’s beauty products advertised on TV: “80% of women agree it’s the best”, with the small print of “25 people were sampled”. Consumers are getting wise to these ploys, so avoid that pitfall straight away.
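To put rough numbers on why 25 respondents proves very little, here is a sketch using the standard margin-of-error formula for a sample proportion (assuming a simple random sample):

import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p observed
    in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# "80% agree" from 25 people: roughly +/- 16 percentage points,
# so the true figure could plausibly sit anywhere from ~64% to ~96%.
print(round(margin_of_error(0.8, 25), 3))    # ~0.157
# The same result from 1,000 people: roughly +/- 2.5 points.
print(round(margin_of_error(0.8, 1000), 3))  # ~0.025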

Leading questions

You’d be amazed how much responses change depending on your wording. Imagine the following two questions:
 
  • Do you agree that our service is very good?
  • How would you rate our service?
Using the term “agree” suggests that other respondents believe the service is very good, and that the respondent should agree too. It is semantic subtlety like this that can either inadvertently skew the results of a survey or allow results to be manipulated by savvy researchers.

Also beware of poorly chosen questions to which no rational consumer would say no, for example “Do you wish our prices were lower?”

‘Response influence’ is another form of leading question, and is a common issue with polls. If you show a respondent the existing answers, it has a major impact on their choice. If they see 97% of people have had a good experience, they’re much more likely to click the same option (unless they had a particularly strong negative feeling in the first place).

Does the customer even know?

This one is simple: don’t ask anything a customer cannot answer or understand. If possible, simplify your questions (or explain them) enough that anyone could respond. If a question really needs to be technical, make sure you only sample people who will understand it.

My previous article discussed how Walmart conducted a survey which reached the conclusion that “the majority of customers want a less cluttered store”. That conclusion cost the company $1.85 billion in revenue.

Customers may want less clutter, but they also want choice. Removing Walmart’s famously cluttered displays, the stacked mid-aisle pallets and end-cap rails, gave customers exactly what they had asked for. But it also removed the obvious product variety and the ease of finding bargains.

This highlights the need to always test any conclusion you draw from a survey. Whether it’s a website split-test or a new service trialled on a sample of customers, take every precaution to make sure what the customer says they want really is what’s best.
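For a website split-test, a standard two-proportion z-test is one way to check whether an observed difference is real or just noise. The conversion numbers in this sketch are invented, purely for illustration:

import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Standard two-proportion z-test: how confident can we be that
    variant B's conversion rate really differs from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical split-test: 120 conversions from 2,000 visitors on the
# old layout versus 150 from 2,000 on a "decluttered" layout.
z = two_proportion_z(120, 2000, 150, 2000)
print(round(z, 2))  # ~1.89: short of 1.96, so at the 95% level this
                    # difference could still be noise despite looking big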

Survey rewards

You often need an incentive for respondents to participate in a survey, such as entry into a draw. This opens up a whole host of behavioural issues that are nigh-on unavoidable.

The nothing-response: a customer clicks through without answering a single question, solely for the reward. Though these are easy to filter out of the results, they are essentially a waste of time, and costly if you’re paying for the survey per response. This can be counteracted by making answers compulsory.

Required answers unfortunately lead to speed-responses. A customer can simply click through a multiple-choice survey, selecting any option without reading the question. This is especially common when the survey is a series of Yes/No choices with a reward at the end.

This is almost impossible to prevent. The best way of highlighting these responses is question duplication. If you ask for an opinion on delivery twice, for example, and get grossly different answers, you likely have a random clicker whose responses can be filtered out.
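Both kinds of junk response can be dropped before analysis. Here is a minimal sketch of such a filter; the question names (“delivery_1”, “delivery_2”) and the 1-5 answer scale are hypothetical:

def is_junk(response, max_gap=2):
    # Nothing-response: the respondent answered no questions at all.
    if not any(v is not None for v in response.values()):
        return True
    # Speed-response: the duplicated questions got grossly different answers.
    a, b = response.get("delivery_1"), response.get("delivery_2")
    if a is not None and b is not None and abs(a - b) > max_gap:
        return True
    return False

responses = [
    {"delivery_1": 4, "delivery_2": 5},       # consistent: keep
    {"delivery_1": 1, "delivery_2": 5},       # contradictory: drop
    {"delivery_1": None, "delivery_2": None}, # blank: drop
]
clean = [r for r in responses if not is_junk(r)]
print(len(clean))  # 1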

The final issue with rewards is the psychological effect: it is difficult to criticise a company that is about to reward you. Counteract this by, firstly, reinforcing the need for honesty. Tell respondents why they must be as honest as possible.

Additionally, you can try to sculpt a path through the questions that reminds the customer how they felt when the issue at hand arose. For example, start by asking if they’ve had a faulty item. If yes, ask for more details, such as when and where. This rebuilds the scenario in the customer’s head and should rekindle the raw feelings.

Standardised responses

Surveys will often standardise a participant’s response to make analysis easier, commonly via 1-5 rating matrices or Yes/No questions. This restricts the feedback you can collect, and therefore the conclusions you can draw. For example:

Q1 - Please review the service you received from [company].
{text box of 400 characters}

Q2 - Please review [company’s] service in the following sections:
Delivery - Liked / Disliked / Not Sure
etc

Question 1 will provide some amazing feedback, but 10,000 of those responses will require significant text analysis before you can draw actionable conclusions. Question 2 is simple to analyse, but limits what customers can comment on, stifling the results.
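As a purely illustrative first pass at that text analysis (the sample answers below are invented), even a crude word count can surface recurring themes:

import re
from collections import Counter

answers = [
    "Delivery was fast but the packaging was damaged",
    "Fast delivery, great prices",
    "Packaging arrived damaged again",
]

STOPWORDS = {"the", "was", "but", "a", "and", "again"}

# Count every non-stopword across all free-text answers.
words = Counter(
    w for a in answers
    for w in re.findall(r"[a-z']+", a.lower())
    if w not in STOPWORDS
)
print(words.most_common(3))
# e.g. [('delivery', 2), ('fast', 2), ('packaging', 2)]

Real text analysis would go much further than this, but even a simple count hints at where to dig: here, delivery speed and damaged packaging.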

The best fix for this trade-off is to run a small test survey of, say, 10-20 participants. Ask for their feedback and you will quickly find the problems. And don’t forget that, ultimately, a non-standardised question is often worth including if you’re prepared to spend the time analysing whether the feedback is worthwhile.

Conclusion

So there you have it! It’s a lot to take in, I know, but if you don’t address these areas you may as well survey yourself 1,000 times; the results will be that meaningless.

If you’ve experienced any of these pitfalls, or have any other survey concerns to add, then please do comment below and help others avoid the same mistakes.

John Sollars is MD of printer ink supplier Stinkyink.com.
