SE Technology A/B Testing / Split Testing

SE Technology has implemented, and is continuing to implement, A/B split testing for many clients to help website owners improve conversions and generate more responses from their websites and online presence. If you are not familiar with A/B testing, read on below to learn more about it. We have also posted links to online resources on A/B split testing.

About A/B Testing

A/B testing, also known as split testing, is a marketing testing method in which a baseline control sample is compared against a variety of single-variable test samples in order to improve website response rates. A classic direct mail tactic, this method has more recently been adopted in the interactive space to test tactics such as banner ads, emails and landing pages.

Significant improvements can be seen through testing elements like copy text, layouts, images and colors. However, not all elements produce the same improvements, and by looking at the results from different tests, it is possible to identify those elements that consistently tend to produce the greatest improvements.

Users of the A/B testing method distribute multiple versions of a test, including the control, to see which single variable is most effective in increasing a response rate or other desired outcome. To be effective, the test must reach an audience of a sufficient size that there is a reasonable chance of detecting a meaningful difference between the control and the other versions: see statistical power.
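As a rough illustration of that sample-size question, the sketch below uses the standard normal-approximation formula for comparing two conversion rates. The baseline rate, expected uplift, significance level and power values are assumptions chosen purely for the example, not figures from any particular campaign.

```python
from statistics import NormalDist
from math import sqrt, ceil

def sample_size_per_variant(p_control, p_variant, alpha=0.05, power=0.80):
    """Approximate number of recipients needed in EACH group to detect the
    difference between two conversion rates with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # about 0.84 for 80% power
    p_bar = (p_control + p_variant) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_control * (1 - p_control)
                                 + p_variant * (1 - p_variant))) ** 2
    return ceil(numerator / (p_control - p_variant) ** 2)

# Assumed figures for illustration: a 3% baseline we hope to lift to 5%.
print(sample_size_per_variant(0.03, 0.05))   # roughly 1,500 people per variant
```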

As a simple example, a company with a customer database of 2,000 people decides to create an email campaign with a discount code in order to generate sales through its website. It creates an email and then modifies the Call To Action (the part of the copy which encourages customers to do something – in the case of a sales campaign, to make a purchase). To 1,000 people it sends the email with the Call To Action stating “Offer ends this Saturday! Use code A1”, and to the other 1,000 people it sends the email with the Call To Action stating “Limited time offer! Use code B1”. All other elements of the email’s copy and layout are identical. The company then monitors which campaign has the higher success rate by analysing the use of the promotional codes. The email using code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that, in this instance, the first Call To Action is more effective and will use it in future sales campaigns.
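To check whether a gap like 50 conversions versus 30 conversions out of 1,000 each is more than random noise, one common approach is a two-proportion z-test. The sketch below is a minimal, standard-library version of that test; the figures are simply the ones from the example above.

```python
from statistics import NormalDist
from math import sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z_test(50, 1000, 30, 1000)       # A1 vs B1 from the example
print(f"z = {z:.2f}, p = {p:.3f}")                     # approx z = 2.28, p = 0.022
```

With a p-value of roughly 0.02, a result like this would usually be treated as a genuine difference rather than chance, which supports the company's decision in the example.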

The Plain Truth

In the example above, the purpose of the test was to determine the most effective way to prompt customers into making a purchase. If, however, the aim of the test had been to see which email would generate the higher click-through rate – i.e. the number of people who actually click through to the website after receiving the email – then the results might have been different. More of the customers receiving code B1 may have visited the website after receiving the email, but because that Call To Action did not state the end date of the promotion, there was less incentive for them to make an immediate purchase. If the purpose of the test had simply been to see which email brought more traffic to the website, then the email containing code B1 may have been more successful. An A/B test should therefore have a defined, measurable outcome, e.g. number of sales made, click-through rate, number of people signing up or registering, etc. A short sketch of that point follows below.
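The sketch below shows how the same raw data can crown a different winner depending on which metric you declare up front. The click and purchase counts are invented for illustration only.

```python
# Hypothetical results for the two emails; only the metric chosen in advance
# should decide the "winner" of the test.
results = {
    "A1": {"sent": 1000, "clicks": 80,  "purchases": 50},
    "B1": {"sent": 1000, "clicks": 120, "purchases": 30},
}

for metric in ("clicks", "purchases"):
    rates = {name: r[metric] / r["sent"] for name, r in results.items()}
    winner = max(rates, key=rates.get)
    print(f"{metric}: "
          + ", ".join(f"{name} {rate:.1%}" for name, rate in rates.items())
          + f" -> winner {winner}")
# Clicks favour B1, purchases favour A1 - two different "successful" emails.
```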

Conclusion

This method differs from multivariate testing, which applies statistical modeling so that a tester can vary multiple variables at once within the samples distributed.
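For contrast, here is a quick sketch of why multivariate tests demand more traffic: every extra variable multiplies the number of versions that must each receive enough visitors. The page elements and their options below are assumptions for illustration.

```python
from itertools import product

# Assumed page elements and their candidate variations.
headlines = ["Offer ends Saturday!", "Limited time offer!"]
buttons   = ["Buy now", "Get my discount", "Shop the sale"]
images    = ["product_photo", "lifestyle_photo"]

combinations = list(product(headlines, buttons, images))
print(len(combinations))          # 2 x 3 x 2 = 12 versions to split traffic across
for headline, button, image in combinations[:3]:
    print(headline, "|", button, "|", image)
```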
