
Six tips for customer experience-enhancing customer research
by Jack Springman, Associate Partner, Optima Partners
21st Jun 2017
Customer research is itself an interaction, and its impact on the customer experience needs to be taken into account when surveys are designed. With that in mind, Jack Springman provides six simple rules for customer experience-enhancing customer research.
Earlier this month I received an email with the title ‘Give your opinion about [company name omitted]’ - an invitation to participate in the company’s annual survey of its customers.
Mostly I ignore such opportunities, tending only to ‘participate’ (or give my time as I prefer to call it) if delighted or exasperated - either to say thank you and provide recognition of the high service levels received to encourage their continuation, or to vent my anger. (As a result, I am suspicious of the accuracy of such research, believing self-selection introduces bias as only those at the extremes of the satisfaction spectrum respond, the indifferent majority remaining silent.)
This particular company’s products – both hardware and consumable – are elegant, simple, reliable and convenient to use. In benefit terms, they provide me with a treat at home for a very reasonable price. And the service provided has been superb - re-orders of the consumable element being delivered quickly and without fuss. So despite the rather forbidding estimated completion time of 35 minutes, I embarked upon the on-line survey.
It all started rather well. The second question asked me what the company could do to improve the quality of its service, providing a large, free-text box for my answer. Qualitative insights are frequently ignored in such surveys in favour of what can be displayed graphically. But quantitative data always needs a qualitative explanation for it to be meaningful (to ‘complete the feedback loop’ as analysts like to say).
Scores will tell whether you are doing well or badly, but not why – what specifically you should continue and build on; what needs to be stopped or fixed. I mentioned a couple of things, the second of which was that we were a little haphazard in our re-ordering, often failing to do so before running out, so would appreciate receiving prompts based on analysis of our past ordering patterns. (Essentially I was asking them to market to me more aggressively.)
But my pleasure was short-lived. Some 15 minutes later, with the on-screen indicator telling me that 36% of the survey was complete, I exited, my willingness-to-recommend score having plummeted from 9 at the beginning to 6 as a result of the questions asked. Few of these appeared to be about the service provided; the majority related to intangible attributes that I neither recognised nor cared about. It was clear that the objective was to obtain some brand score and a basis for future communications campaigns rather than to improve the experience provided. In so doing, the survey violated the implicit quid pro quo of customer research, where participants give their time for free – help us to help you.
Frustration at the type of questioning was compounded by the style – the use of a six-point scale, from ‘strongly agree’ to ‘strongly disagree’, which provided no opportunity for ‘don’t know’ or ‘don’t care’. I was forced into agreeing or disagreeing even when I had never considered what I was being asked, or did not feel it relevant enough to have an opinion. (By definition, my answers were an exaggeration of what I felt, piling inaccuracy on top of the selection bias described above.) Even more irritatingly, there were no further opportunities for qualitative input to complete the feedback loop where I did have a strong opinion.
On top of this was the sheer number of questions being asked. There is a rule of thumb for metrics which applies equally to questions in research surveys – if it is not actionable, don’t measure it (or ask it). If a high or low score will not result in different actions being pursued, computing the metric or asking the question is a waste of time. And there is no way that a multitude of questions can all be actionable, especially if no qualitative information is collected to shape the actions required. The focus, style and sheer volume of questions made it clear that the survey had been designed with zero empathy for those who would be completing it.
Brand-focused surveys and perception damage
This is not to argue that intangible benefits associated with brands are unimportant, or that the impact of campaigns to establish these benefits in the minds of consumers should not be measured. But such questions – indeed any questions where the answer benefits the business but not the customer – have a cost. That cost can either be recognised up-front, in the form of remuneration paid to focus group attendees, or amortised over the longer term through damage to customer satisfaction.
The irony of brand-focused surveys is that they risk damaging the perception they were seeking to measure. The three-point drop in my willingness to recommend reflected a seismic shift in my perception of the company. Where previously I had perceived it to be highly customer-centric, I now saw it as brand-centric and selfish. Rather than feeling like a valued customer, I was left with the sense that I was just a tiny piece of a mirror which the company was holding up for self-admiration.
Now, I am a bit of a sceptic when it comes to branding and brand strategy, particularly when it is elevated above customer experience design. (My theory is that a focus on brand encourages egocentricity: fixing attention on an internal construct rather than an external constituency leads to an inside-out rather than outside-in perspective. It also exaggerates the importance of intangible over tangible benefits, as these are what marketing can control through communications campaigns. And that, in turn, leads to an excessive focus on awareness and recognition – often at the cost of setting expectations so high that failure to meet them is inevitable – rather than the more substantive work of developing the capabilities needed to deliver the right functional and emotional benefits at each interaction point across the customer lifecycle, the things that add up to an excellent experience.) So it is unlikely that anyone else would have downgraded their score in such draconian fashion.
But the general point still stands – customer research is an interaction and its impact on the customer experience needs to be taken into account when it is being designed. As a starting point, it should contribute to a positive experience. Most people are flattered when interest is shown in them and enjoy talking about themselves (innate egocentricity again). And there is a big difference between a conversation along the lines of ‘tell me about yourself and what you like’ and one which has a premise of ‘tell me how much you like me’.
So with the above in mind, let me suggest the following as some simple rules for customer experience-enhancing customer research:
- Consider the expectations of those being researched. An experience is always judged relative to expectations. (Four-star service feels great if only a two-star level was expected, but lousy if five-star or better was anticipated.) Customers will expect some benefit for giving up their time. If you wish to ask questions for the benefit of your company rather than your customers, it is far better to recognise this and pay up front for customers’ time.
- If not paying customers for their time, be sparing with the questions you ask. If the answer to a question does not lead you to take a specific, identifiable action – particularly when the score is very high or very low – challenge whether the question merits inclusion. Avoid giving the impression that you are cavalier with customers’ valuable time.
- Ensure there is as much emphasis on the qualitative as the quantitative. Give customers the opportunity to express themselves rather than feel boxed in by the questions asked. Also, genuine voice-of-the-customer comments will provide far more ‘a-ha’ moments for enhancing the service provided and encouraging innovation than score-keeping ever will.
- Avoid forcing an answer where one doesn’t exist. Researchers appear terrified that if they provide a ‘don’t know’ or ‘neutral’ option, people will use it all the time. But those who would cop out in that way are the same people who answer randomly when no such option exists. Far better to know that – and exclude them – than for their indifference to be masked by a completed but inaccurate answer. Also, a ‘don’t know’ or ‘don’t care’ is far more revealing about knowledge and priorities than forced agreement or disagreement.
- Make it fun. One of the advantages of a research technique like conjoint analysis (in which respondents are asked to trade off different combinations of attributes) is that it is genuinely engaging for interviewees. It also reveals their true priorities, often to the surprise of participants, so they find out a little bit about themselves in the process. For customers, the research will be a good experience if it provides a mirror in which they can look at themselves. (A rough sketch of how such trade-offs translate into priorities follows this list.)
- Finally, balance any research on features with that on benefits sought. Asking customers what features they would like and designing to that specification will frequently yield lemons – they are not experts in your business. But they are experts in what they want to achieve (e.g. saving time, saving money) and what costs them most time and most money currently. They can define the problem that needs solving but only you can design the optimal solution.
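To make the conjoint analysis point above concrete, here is a minimal, purely illustrative sketch in Python. The attributes, levels and ratings are all invented for the example, and a real study would use a choice-based design, a fractional factorial and many respondents; this version simply dummy-codes eight hypothetical product profiles and uses ordinary least squares to estimate how much each attribute contributes to one respondent’s scores.

```python
# Illustrative ratings-based conjoint sketch. All attribute names, levels
# and ratings are hypothetical, invented for this example.
import numpy as np

# Each profile is one combination of levels: (price, delivery, reminders).
profiles = [
    ("low",  "next-day", "yes"),
    ("low",  "next-day", "no"),
    ("low",  "standard", "yes"),
    ("low",  "standard", "no"),
    ("high", "next-day", "yes"),
    ("high", "next-day", "no"),
    ("high", "standard", "yes"),
    ("high", "standard", "no"),
]
# The respondent's 1-10 rating of each profile (hypothetical).
ratings = np.array([9, 7, 8, 5, 6, 4, 5, 2], dtype=float)

# Dummy-code each attribute (1 for the first level, 0 for the second),
# plus an intercept column.
X = np.array(
    [[1,
      1 if price == "low" else 0,
      1 if delivery == "next-day" else 0,
      1 if reminders == "yes" else 0]
     for price, delivery, reminders in profiles],
    dtype=float,
)

# Ordinary least squares yields the part-worth utilities: the lift each
# attribute level gives to the respondent's rating.
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, w_price, w_delivery, w_reminders = coef

# Relative importance: each attribute's share of the total utility range.
ranges = np.abs([w_price, w_delivery, w_reminders])
importance = ranges / ranges.sum()

for name, share in zip(["price", "delivery speed", "re-order reminders"],
                       importance):
    print(f"{name}: {share:.0%} of stated priority")
```

With the invented ratings above, price accounts for roughly 43% of this respondent’s priority, re-order reminders 36% and delivery speed 21% – the kind of revealed ranking that, as noted above, often surprises the participants themselves.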
Jack Springman is head of corporate advisory group at consulting and systems integration firm Business & Decision.
Replies (4)
This is an excellent article by Jack Springman.
I have had numerous experiences, like many other readers I'm sure, where I receive a call from someone conducting a telephonic survey on behalf of another company. I participate, not because I am really willing, but because I am interested to see how they handle the interview. Every time I am shocked at how poorly these interviews are conducted. In fact, I shouldn't even call them interviews. They are simply people who have been hired to read a bunch of questions, get a bunch of answers and get some commission in the process. It is clear that they have no real interest in my answers, and don't give a hoot if I am extremely dissatisfied with the company they are conducting the survey for. They talk with monotone voices, they show no care, and quite frankly, they destroy the brand value of the company they are representing.
Perhaps companies need to check out the calibre of people who will be conducting their research before they commission a research house to do their next customer survey. I wholeheartedly agree with Jack. The survey process is very much part of the customer's experience of the company for whom the survey is being done. I would recommend that only those who truly understand customer experience management should ever be allowed to touch or talk to your precious customers. Well done Jack. Regards, Samantha (www.customer-mind.com)
Thanks for your comments, Samantha.
The customer survey experience that particularly sticks in my mind comes courtesy of Southern Electric, which recently used an automated system to undertake a lengthy customer service survey - with my answerphone.
I'm sure I'm not the only one who feels slightly insulted when you answer a call to find that it is an automated questionnaire.
My answerphone, on the other hand, seemed to strike up a real rapport with SE's automated machine ;-)
I also liked this article. I’m in the customer loyalty field, and am always interested to see how organisations tackle customer intelligence and feedback. As a result I now find myself more eager to respond to online surveys and questionnaires. I read this particular article with great interest, and found myself relating to many of the comments made. In particular I can relate to rule number three - making sure respondents have the chance to have their say, instead of being confined to multiple choice answers. I wrote a post on our blog recently about a comment I made on a survey from Apple - ( http://blog.syngro.com/post/2010/05/05/Is-Applee28099s-customer-satisfaction-slipping.aspx). Another typical example of organisations seeking feedback - but what are they actually doing with it? Do we think that they see it as a tick the box exercise – are they too focused on the quantitative score, rather than the person behind it? (www.syngro.com)
Thanks Samantha and Julie for your comments - I have forwarded the link to the company concerned and their European customer research manager is going to be contacting me!
Neil - an automated questionnaire and your answering machine, you couldn't make it up! Think Southern Electric have shot to the top of the dire practice in customer research table. The truth is, there are probably even worse examples out there. Scary!