Email marketing: The beginner's guide to A/B testing


Lucy Acheson provides an overview of A/B testing including guidance, advice and a list of potential pitfalls to avoid.

The joy that digital and direct communications can offer a marketeer, if done right, stems from the empirical insight made available by tracking consumer behaviour after each and every encounter. Measurability is the lifeblood of what we do and should sit squarely at the centre of all communication planning.
Campaign optimisation born of measurement is now an imperative, as businesses are under pressure to treat different types of people differently. To do this, brands need to identify the most effective route to creating meaningful connections between themselves and the consumer. Woe betide the brand that says it can’t afford to test; those in the know would argue you simply can’t afford not to test and optimise. Indeed, it is music to our ears when a client says, “OK, let’s test each route”, signalling their humility in allowing consumer behaviour to dictate strategy.
Why test?
At the forefront of that measurability and metrics battle lies the modest but brilliantly effective A/B, or split, test. I wasn’t sure how I would use my biology degree in my chosen career in marketing when I left university, but it turns out that proving one thing works better than a variation on a theme is as important in email marketing as it is in studying the heredity of pea plants. The humble A/B test allows for objective campaign planning and strips away the dreaded HiPPO (Highest Paid Person’s Opinion) school of decision-making, allowing brands to communicate with their consumers in a quantitatively proven manner.
It forms the backbone of the ever-evolving strategic roadmap, in which brands and marketers constantly battle to refine previous thinking and improve the results they can coax from their customer bases and websites.
Walk into any marketing department in the land and ask the first person you see, “Which is the best day to send out an email?” I guarantee you will get seven different answers, or maybe more. For this question, and any of the other recurring ones such as “What shall we put in the subject line?” and “Should we direct consumers to Amazon or the online shop?”, the only way to prove the answer categorically is to test one option against another in a head-to-head response-off! An A/B test is a clean and clear experimental approach that sends two variations of an email, differing in just one altered element, to statistically valid subsets of a target audience. The result can quickly be acted upon and rolled out to the remaining target audience with confidence, knowing that on the day, Route A was preferred to Route B.
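For the technically minded, the mechanics of that split can be illustrated with a minimal Python sketch. Nothing here is prescribed by the approach described above: the field names, the fixed random seed and the 50/50 division are illustrative assumptions, and a real campaign tool would handle the assignment for you.

```python
import random

def split_ab(audience, seed=2013):
    """Randomly split an audience into two equal-sized test cells (A and B).

    `audience` is any list of contact records, e.g. email addresses.
    The seed is fixed only so the split can be reproduced; both cells
    are still drawn at random from the same population.
    """
    shuffled = list(audience)
    random.Random(seed).shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Illustrative usage: cell A receives version A, cell B receives version B
contacts = [f"subscriber{i}@example.com" for i in range(30000)]
cell_a, cell_b = split_ab(contacts)
print(len(cell_a), len(cell_b))  # 15000 15000
```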
It really works
A good example of an A/B test in action comes from work we did recently for Philips’ mother and baby brand AVENT. We created an email campaign to launch a new breast pump, in two different versions. Version one focused on recommendations for the product from other mums, while the second version included a 25% discount on the product. Discounts cost money, so we wanted to evaluate the difference between offer and no offer and see whether any increase in response would make the extra cost worthwhile. 15,000 consumers were sent the first message and the same number received the special offer email.
Both emails were blasted simultaneously. We found that the offer uplifted response by 50%, but we didn’t stop there. We then ran another A/B test with ‘exclusive discount’ in the subject line, and this uplifted results by a further 6%. Quantitative substantiation had been achieved for the best creative route to take, and the subsequent results spoke for themselves in terms of elevated response metrics.
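If you want to sanity-check an uplift like this yourself, a standard two-proportion z-test is one way to do it. The sketch below assumes two cells of 15,000 and purely hypothetical response rates of 2% and 3% (a 50% relative uplift); the actual AVENT figures are not disclosed in this article, so treat the numbers as an illustration of the method rather than the campaign data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two response rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical figures: 2% response without the offer, 3% with the discount
z, p = two_proportion_z_test(conv_a=300, n_a=15000, conv_b=450, n_b=15000)
print(f"z = {z:.2f}, p = {p:.6f}")  # p is far below 0.05, so the uplift is very unlikely to be chance
```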
Avoiding mistakes
It sounds simple, and the beauty of an A/B test is that it is! However, mistakes can be made. The core one is that an A/B test needs to focus on just one changed variable: if more than one thing is altered, it isn’t possible to isolate which change is responsible for the change in consumer behaviour, and you have learnt nothing. In addition, the two legs of your A/B test need to be carried out at exactly the same time. This ensures that the results can be attributed to the altered variable and not to external response influencers such as time of day, day of week, marketplace conditions or the economy.
Secondly, a classic error is not calculating how big a cell needs to be before running the initial split test. Use a tool, or an analyst, to calculate confidence levels based on the proposed cell size and your expected response rate. That way the result can be relied upon and opportunities aren’t missed. A 95% confidence level is the norm for email marketing purposes.
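As a rough guide to what such a tool is doing under the bonnet, the sketch below uses the standard normal-approximation formula for comparing two proportions to estimate the cell size needed to detect a given uplift. The 2% baseline, the 2.5% target and the 80% statistical power are illustrative assumptions added here; only the 95% confidence level comes from the guidance above, and your own tool or analyst may use a more refined calculation.

```python
from math import ceil
from statistics import NormalDist

def cell_size(baseline_rate, uplifted_rate, confidence=0.95, power=0.80):
    """Approximate subscribers needed per cell to detect a given uplift.

    `confidence` controls false positives (0.95 is the email-marketing
    norm cited above); `power` is the chance of actually spotting a
    real uplift of the stated size.
    """
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 at 95%
    z_beta = NormalDist().inv_cdf(power)                      # ~0.84 at 80% power
    variance = (baseline_rate * (1 - baseline_rate)
                + uplifted_rate * (1 - uplifted_rate))
    return ceil(((z_alpha + z_beta) ** 2 * variance)
                / (uplifted_rate - baseline_rate) ** 2)

# Illustrative assumption: 2% baseline response, hoping to detect a lift to 2.5%
print(cell_size(0.02, 0.025))  # roughly 13,800 per cell
```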
Thirdly, allow your test to run its natural course. There is a temptation to dive in and grapple with the figures as soon as they start to come in, but an A/B test should be measured over the same period as you would normally track an email campaign. If you know that historically you have received 80% of your response within 48 hours, then let the test run for a similar time frame, once again ensuring statistical robustness.
Another pitfall centres on roll-out: once a test has taken place and you are ready to send your optimised message to the bulk of your target audience, ensure that the conversion funnel is primed and ready to receive the increased volume of traffic. This is especially true at peak sales or response periods throughout the year. There is no point in working hard to optimise response if the back-end systems such as call centres, websites and e-shops aren’t stocked, primed and ready to receive the volume of consumers you are about to drive towards them. There is no bigger turn-off for a consumer than arriving at a website, excited and ready to spend some money, only to find that the product or service isn’t available. In essence, think past the marketing strategy and include other stakeholders in planning your campaign.
Don’t reinvent the wheel
As a small housekeeping point, store and name your test campaigns in a structured way. You are expending energy and budget to learn something, and those learnings should be made available to all concerned to avoid the corporate marketing amnesia we all face as campaigns come and go and personnel change on a regular basis. A results database is something we always recommend, ensuring that the insights garnered are methodically stored and that the same ground is not covered needlessly; unless, of course, the environment has changed sufficiently to merit a retest!
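There is no single prescribed format for such a results database; even something as light as the sketch below, which appends each completed test to a shared CSV file, goes a long way. The column names and the sample entry are illustrative assumptions loosely based on the AVENT example above, not a template from any particular tool.

```python
import csv
from pathlib import Path

RESULTS_FILE = Path("ab_test_results.csv")
FIELDS = ["test_name", "send_date", "variable_tested", "cell_size",
          "winner", "uplift_pct", "confidence_level", "notes"]

def log_test_result(result: dict) -> None:
    """Append one finished A/B test to a shared, structured results log."""
    is_new = not RESULTS_FILE.exists()
    with RESULTS_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(result)

# Illustrative entry: the date is a placeholder, the figures echo the example above
log_test_result({
    "test_name": "AVENT_breast_pump_launch_offer_vs_recommendation",
    "send_date": "2013-03-01",
    "variable_tested": "25% discount vs peer recommendation",
    "cell_size": 15000,
    "winner": "B (discount)",
    "uplift_pct": 50,
    "confidence_level": 0.95,
    "notes": "Follow-up subject-line test added a further 6%",
})
```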
The only other key thought is that once you have treated a consumer in a certain way, that consistency should be maintained thereafter. The tracking tools and campaign management systems available today should be able to record who has seen what and deliver content accordingly. So, for instance, if you personalised an email subject line for me and then made changes to the website you were driving me to, and these led me to respond or behave in a positive way, don’t take those facets of my experience away the next time you speak to me.
Lastly, continue to test discreetly, always striving to enhance the consumer’s experience and elicit the behaviour you seek. Today’s outright winner in an A/B test will be tomorrow’s control cell. Harsh, but a reality in the ever-evolving consumer landscape facing us as marketers in 2013, some 170 years since Gregor Mendel used A/B tests to develop his ground-breaking laws of genetics for pea plants, changing the world forever. Who knows: following a methodology he pioneered, your statistically robust and quantified email campaigns might do the same!
Lucy Acheson is head of data planning at WDMP.



30th Mar 2013 09:54


I believe mobile marketing will play an important role in the coming years. Even in the last two years I have seen many companies doing bulk text marketing and getting a lot of benefit from it. Many businesses also integrate their systems and apps with an SMS text API to boost customer interaction.
