
The six stages of a successful conversion rate optimisation test

13th Feb 2017

If you’re running a conversion rate optimisation (CRO) test, your problem is likely form submissions – or rather, the lack of them. Users who abandon ship without converting are the bane of any marketer’s life, but they’re not a fact of life. At least, not all of them are. You’re never going to get a 100% conversion rate, let’s be honest. But there are certainly things you can do to bring your form submissions up.

Deciding to run a CRO test is the first step, congratulations. Running it successfully is the next, and slightly larger, hurdle.

Let me say one thing first: embrace this as a science. Don’t launch straight in, full steam ahead. Instead, let’s break down the steps in the process – so you, and your clients, know you’re doing the best possible job. 

If your CRO test isn’t on the money, your conclusions won’t be either. There’s nothing more awkward than seeing your form submissions continue to plummet after you’ve implemented the ‘solution’.

There are six stages to an effective CRO test:

  1. Assess the situation.
  2. Develop a testable theory.
  3. Choose your testing method.
  4. Choose your test groups.
  5. Draw a preliminary conclusion.
  6. Action, repeat, evolve.

Read on for a walkthrough guide to CRO testing – and there’s a bonus list of the five top CRO testing traps at the end too, to help you avoid the most common pitfalls.

Stage 1: Assess the situation

Any number of problems on your website could be causing your low conversion rate.

  • Maybe your user journey is convoluted.
  • Maybe your load time is horrific.
  • Maybe your content appears irrelevant.
  • Maybe your site is just plain ugly.

Your website is like a shop – if it’s cramped, poorly lit, and visitors can’t find what they want, they’re not likely to hang around.

So how do you find out where the problem is?

There are numerous methods you can use to assess the situation. Use as many as possible, so you can build a comprehensive picture to aid your decision-making. Many a CRO test has failed because the situation wasn’t assessed accurately at the start.

Direct Feedback

Direct user feedback has to get a look-in before we consider any other methods. All testing methods are trying to better understand what users do on your website – and what better way to find out than to go directly to the source?

You can gather user feedback in a number of ways. Pop-ups are always effective, whether they’re asking for feedback or offering live help – so you can discover problems as they happen. You could also go out to volunteers and ask them to review your site based on a series of questions or tasks.

Don’t solely rely on direct user feedback though. All of these methods should be used in combination, so far as is possible. 

Google Analytics

Trusty Google Analytics, our old friend. You can use Google Analytics to get a more holistic understanding of your form submission levels. Because it gives you a huge range of data, you can build a fuller picture of the reasons behind your conversion rates rather than simply looking at those rates in isolation.

For example, you can analyse your traffic sources and see if there’s been a particular change in traffic that might make your forms less relevant. You can see if there’s been a sudden change and compare that to other events, such as changes in your website.
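
If you like to poke at the numbers directly, here’s a minimal sketch in Python of the traffic-source check described above. It assumes you’ve exported daily session data from Google Analytics to a CSV; the filename and columns (date, source, sessions, form_submissions) are illustrative, not a real GA export format.

```python
# A minimal sketch of the traffic-source check, assuming a hypothetical
# CSV export with columns: date, source, sessions, form_submissions.
import pandas as pd

df = pd.read_csv("ga_sessions.csv", parse_dates=["date"])

# Conversion rate per traffic source, per month.
monthly = (
    df.groupby([df["date"].dt.to_period("M"), "source"])
      .agg(sessions=("sessions", "sum"),
           conversions=("form_submissions", "sum"))
)
monthly["conv_rate"] = monthly["conversions"] / monthly["sessions"]

# A sudden shift in any source's volume or conversion rate is worth
# comparing against known events, such as a site redesign.
print(monthly.sort_index())
```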

Good CRO testing allows you to see how your online landscape is changing, so you can evolve with it. Ensuring you stay relevant is the best way to maximise your chances of converting users on site.

In-page analytics

Although in-page analytics might not give as broad a picture as Google Analytics, it’s still very useful in your quest for better understanding. You can use it to understand the path users are following when they’re on your website. This can help highlight problems such as a CTA that doesn’t stand out, a shopping cart people can’t find, or a distracting graphic that pulls people away from the conversion path.

It’s worth noting, though, that in-page analytics are less effective if you have multiple links directing to a single URL per page. If this is the case, heatmaps can be a more effective tool to use.

So, you’ve assessed the situation and you think you’ve identified the problem? The next thing is to develop a testable theory based on your understanding so far.

Stage 2: Develop a testable theory

This is where your past experience comes in. Take the problem you’ve identified, and think of some possible causes of that problem. For example, if your in-page analytics have identified users clicking away from your form you might posit that there’s a distracting graphic positioned badly on the page.

Google Analytics might show you that a drop in your conversion rate coincided with a website redesign that introduced a new font; you might then posit that increasing the font size will improve readability and therefore conversions.

The point is, you’re trying to isolate the factor you think is having the biggest impact on your conversion so you can test it. Although the picture is likely to be more complex than this, you can’t test more than one factor at a time, because you won’t know which one is having the impact.

Say I decide to change my main CTA on one page. To test that theory, I need to keep everything the same except the CTA. So I need to run tests on the same page, with the same font sizes, the same layout and so on. Even if I think the text is also having an impact, I need to isolate the CTA for now so I can get a true result of the effect that’s having on my conversion.

Provided I keep everything the same when I test the CTA, I can then see whether or not changing the CTA has a notable (and positive) effect on my conversion rate.
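
To make the isolation concrete, here’s a minimal sketch of one common way to run this kind of test: assign each visitor to a version deterministically by hashing their user ID, so the same visitor always sees the same page, and the only thing that differs between versions is the CTA. The variant names and IDs are illustrative.

```python
# A minimal sketch: everything stays constant except the CTA. Hashing the
# user ID gives a stable assignment, so each visitor always sees the same
# version. All names here are illustrative.
import hashlib

CTA_VARIANTS = {0: "Submit", 1: "Get my free quote"}  # control vs test

def assign_bucket(user_id: str, n_buckets: int = 2) -> int:
    """Stable assignment: the same user always lands in the same bucket."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_buckets

def render_cta(user_id: str) -> str:
    return CTA_VARIANTS[assign_bucket(user_id)]

print(render_cta("user-42"))  # always the same variant for this user
```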

Stage 3: Choose your testing method

After you’ve decided on a testable theory, you need to choose a testing method. Split testing is the simplest method, allowing you to test a single variable against your control – that is, your original – version.

If you’re testing something slightly more complex with multiple variables, then multivariate testing is probably a better choice. Instances where this would be more appropriate include testing elements such as text sizes, calls to action or colour.

Multivariate testing is useful because it allows you to test multiple ideas, giving you more useful information.

For example, say you wanted to test whether your existing grey text was impacting your conversion rate. You could run a split test using the control grey against the variable black. This might show that black is a runaway winner, leading you to change all your text to black.

A: Grey (7% conversion)

B: Black (12% conversion)

However, say you run a multivariate test of the same. You could test the control grey against red, black, dark grey, and so on. Black might well still outperform grey, but dark grey could outperform them both.

A: Grey (7% conversion)

B: Black (12% conversion)

C: Red (2% conversion)

D: Dark Grey (19% conversion)

By using multivariate testing, you’ve given yourself greater insight and can increase your conversion rate yet further by making your text dark grey. You’d never even have known that was a possibility if you hadn’t used multivariate testing.
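
Before acting on figures like these, it’s worth checking that the spread couldn’t plausibly be chance. Here’s a rough sketch using a chi-squared test; the example above only quotes percentages, so the 1,000 visitors per variant here is an invented assumption.

```python
# Checking whether the spread in the illustrative figures above could be
# chance. Visitor counts are assumed (1,000 per variant) since the example
# only gives percentages.
from scipy.stats import chi2_contingency

visitors = 1000
rates = {"grey": 0.07, "black": 0.12, "red": 0.02, "dark grey": 0.19}

# Rows: variants; columns: [converted, did not convert].
table = [[int(r * visitors), visitors - int(r * visitors)]
         for r in rates.values()]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")
# A small p-value suggests the variants genuinely differ; you'd then
# compare the best performer against the control directly.
```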

It is important to limit the options on a multivariate test, though. If you use too many variations, the differences between them are likely to be so small that you can’t detect a pattern. You also need enough traffic to support multivariate testing, as each option needs a broad sample of traffic to give meaningful results.

You don’t need to test everything at once. Keep your testing simple and make incremental changes to your site over time.

Stage 4: Choose your test groups 

We’ve mentioned above that you need enough traffic to test each variable and get meaningful insights, but how should you split that traffic for the best results?

The groups receiving each test variable (including the control) need to be the same size for comparative purposes. However, many people make the mistake of splitting their traffic ‘randomly’. This can lead to what’s called ‘diversity bias’, meaning you end up with groups that aren’t representative of the diversity of your audience.

 


For example, say I take a sample of 100 of my audience and randomly split them into two groups of 50 each. Imagine my business has ten different user personas. By splitting my groups randomly, I’m not ensuring that both groups are balanced across all of my personas. Group B might be made up of only two personas, for example.

The diversity of your audience must be reflected in the samples you choose. If you have ten different user personas, it’s not meaningful to test only half of them. Your sample needs to be large enough to capture your full audience demographic, so that you can draw relevant conclusions.
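
One way to guard against diversity bias is a stratified split: divide users into groups persona by persona, so each group mirrors the audience mix. Here’s a minimal sketch; the persona labels and user IDs are illustrative.

```python
# A minimal sketch of avoiding 'diversity bias': split users into test
# groups per persona (stratified), so each group mirrors the audience mix.
import random
from collections import defaultdict

def stratified_split(users, persona_of, seed=42):
    """users: list of user IDs; persona_of: dict mapping user -> persona."""
    rng = random.Random(seed)
    by_persona = defaultdict(list)
    for user in users:
        by_persona[persona_of[user]].append(user)

    group_a, group_b = [], []
    for persona_users in by_persona.values():
        rng.shuffle(persona_users)
        half = len(persona_users) // 2
        group_a.extend(persona_users[:half])   # each persona is split
        group_b.extend(persona_users[half:])   # evenly between groups
    return group_a, group_b

# Illustrative usage: 100 users spread across ten personas, as in the
# example above.
users = [f"u{i}" for i in range(100)]
persona_of = {u: f"persona-{i % 10}" for i, u in enumerate(users)}
group_a, group_b = stratified_split(users, persona_of)
print(len(group_a), len(group_b))  # both groups cover all ten personas
```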

Stage 5: Draw a preliminary conclusion

I say preliminary for a reason. Conversion rate optimisation should be a constant process – you can’t just ‘get it right’ and forget about it. Your online environment is constantly evolving, so it’s important to stay on top of changes.

One of the most productive ways to draw preliminary conclusions is to set up custom goals (if your split testing tools allow you to – if not, consider moving to ones that do!).

You should create a custom goal for every action a user might take on your site. Think of anything you’d want to analyse using in-page analytics, as these won’t work during testing. Custom goals are a way of tracking behaviour during testing, so you can draw conclusions as appropriate. For example, you might set up a goal to track responses to a pop-up.

Most split testing tools enable you to link your testing back to your Google Analytics account, creating a custom variable so you can analyse the results of your test. You can use this to see which version of your test was most successful.
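
Once your custom goals have logged conversions per version, one common way to judge whether the leading version is genuinely ahead of the control is a two-proportion z-test. Here’s a rough sketch; the counts below are invented for illustration.

```python
# A rough sketch of turning goal counts into a preliminary conclusion:
# a two-proportion z-test comparing a variant against the control.
# The counts are invented for illustration.
from math import sqrt
from scipy.stats import norm

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value

z, p = two_proportion_z(conv_a=70, n_a=1000, conv_b=120, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3g}")  # small p: treat B as the likely winner
```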

 


Stage 6: Action, repeat, evolve

Once you've run your test campaign for a while, you can start identifying meaningful patterns in order to reach a conclusion. How long is ‘a while’? It depends, but generally no less than a few weeks. You need to collect enough data to identify trends before you can draw conclusions.
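
If you want a rough rule for ‘how long is a while’, you can estimate the sample size needed per variant to detect a given lift, then divide by your daily traffic. Here’s a sketch of the standard two-proportion calculation; the baseline rate, target lift and traffic figures are assumptions.

```python
# A rough duration estimate: sample size per variant needed to detect a
# given lift (standard two-proportion formula), divided by daily traffic.
# Baseline rate, lift and traffic figures below are assumptions.
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)

n = sample_size_per_variant(p1=0.07, p2=0.09)  # detect a lift from 7% to 9%
daily_visitors_per_variant = 250               # assumed traffic
print(f"{n} visitors per variant ≈ "
      f"{ceil(n / daily_visitors_per_variant)} days of testing")
```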

There are two possible scenarios here: either your original testable theory was correct, or it wasn’t. Say I posited that users weren’t clicking through because they couldn’t easily identify the CTA. My testing will either show that changing the CTA has a significant impact on conversion, or it won’t.

If your testable theory proves to be correct, the answer is simple: action the appropriate changes.

If it isn’t, go back to the drawing board and develop another testable theory. Run your tests again with these changes, until you reach a conclusion you are confident in. You could even try applying your existing test to different pages, to see if your conclusions are page-specific.

The principle with CRO is this: don’t try to wedge a square peg into a round hole. If your data doesn’t support your initial theory, you need to have the humility to change the theory. It’s generally not the data that’s wrong!

Again, this is why experience is important. Knowing the CRO ropes will mean you can posit an intelligent guess as to what’s impacting your conversion rates, saving you time constantly exploring options.

So that’s how you run a conversion rate optimisation test. The biggest take-home message should be that this is an evolution. It’s not something you do once and forget about. The ‘right’ answer won’t always be the right answer. Be flexible, nimble and ready to move with your audience, and you’ll be better placed for success than businesses that rely on ‘what’s worked will always work’.

Coming to the end of our step-by-step CRO testing guide, we wanted to leave you with a word (well, list) of warning. Learn from the mistakes other marketers have made…

Top five CRO test traps to avoid

  1. Audience issues. Touched on above, this is one of the biggest mistakes marketers make. You have to have enough traffic to gather meaningful insight, and you need to split that audience appropriately to get representative insight. Otherwise the solutions you implement won’t solve the initial problems.
  2. Copycat theories. Looking at what your competitors are doing is a valid method in all areas of marketing, but it’s not something you should rely on. There’s no right or wrong answer when it comes to CRO, and your website is unique. The theories your competitors test might be completely irrelevant to you. You should develop your own theory based on your own conversion funnel.
  3. Tired theories. While CRO is a science, the people who do it aren’t scientists… Many of the problems in CRO testing come from the people leading the charge – either copying theories, as above, or, even worse, relying on tired theories without critical or intelligent thinking. The worst culprit is design. While design often does have an important role to play, many poor CRO practitioners will jump straight to design as if it were the only possible theory out there.
  4. Not testing for long enough. You take great care to select an audience that is large enough and representative enough… but if you don’t test for long enough, you can’t draw meaningful conclusions. Again – think patterns and trends, not instant data. Only by giving your campaign time can you reduce the impact of outliers – and reduce the risk of making bad business decisions based on them.
  5. Stopping testing. As we said above, CRO testing should be an evolution. Don’t just implement changes and then stop. You need to be on top of your audience constantly to see what’s working and what’s not – that’s the only way to stay ahead of the curve with your competitors.