Making the most of your A/B testing

A/B testing plays a huge role in digital marketing, particularly in website and page design for ecommerce. When it comes to maximising conversion rates, the ability to compare two versions of a single variable to see which performs better can be invaluable for marketing managers. How can you make the most of the A/B tests you run, though?


“It is a capital mistake to theorise before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” – Sherlock Holmes, A Scandal In Bohemia

In order to avoid falling into the trap described by Sherlock Holmes above, you must collect every piece of data you can about your website’s performance before trying to change anything about it. It is easy to unconsciously allow bias to affect your judgment as to why your site is underperforming. The facts must be used to create a theory, not the other way around.

Your analytics platform is a good place to start collecting this data – for instance, you can look for key pages where there is a high drop-off rate and concentrate on improving that drop-off rate. Where do you want the user to go from this page? What do you want them to do while they’re on it? Once you have worked this out, you can start devising experiments. Using the data to identify areas that could be improved is the first step for any effective A/B test.
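As a rough sketch of that first step, drop-off rates between funnel steps can be computed directly from pageview counts exported from your analytics platform. The page names and numbers below are purely hypothetical:

```python
# A minimal sketch of spotting high drop-off pages from an analytics export.
# The funnel steps and pageview counts below are hypothetical examples.
funnel = [
    ("Full range page", 10_000),
    ("Product detail page", 4_200),
    ("Basket", 1_300),
    ("Checkout", 400),
]

# Drop-off rate between each consecutive pair of funnel steps.
drop_offs = [
    (step, next_step, 1 - next_views / views)
    for (step, views), (next_step, next_views) in zip(funnel, funnel[1:])
]

for step, next_step, rate in drop_offs:
    print(f"{step} -> {next_step}: {rate:.0%} drop-off")
```

The step with the steepest drop-off is usually the most promising candidate for your first experiment.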

However, while analytics can tell you what is happening, you need to find out why it is happening. This means you need to gather qualitative data in the form of surveys or usability testing to ascertain why people aren’t checking out or signing up. At the same time, you can find out what they did like about the site so you know which elements to leave alone. Collaborate with your Sales and Customer Service teams to build up a picture of what’s working and not working on the site as far as the customer is concerned.

Macro or micro conversions?

The ultimate aim of every ecommerce website is to drive signups and sales – these are macro conversions. However, because statistically significant improvements are hard to achieve at that level, it is usually far more beneficial to test micro conversions, which occur more often, and use them to identify the roadblocks that stop visitors moving through the sales funnel towards a macro conversion.

A micro conversion, for instance, might be a click through to a product detail page from the full range page – this would indicate that the layout, design and details on the full range page have piqued the customer’s interest enough to move them to the next stage of the funnel. While this step may not directly result in a macro conversion, the user journey as a whole will have improved, and that should be reflected in an increase in macro conversions over time. Just make sure you keep measuring your macro conversions so you know you’re ultimately on an upward curve.
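Whichever conversion you test, you still need to check whether the difference between your two variants is statistically significant before declaring a winner. Most testing tools do this for you, but as an illustration, here is a minimal two-proportion z-test in plain Python – the visitor and click-through counts are hypothetical:

```python
import math


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value


# Hypothetical micro conversion: clicks through to product detail pages.
z, p = two_proportion_z(conv_a=520, n_a=5_000, conv_b=590, n_b=5_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the p-value comes out below 0.05, so variant B's higher click-through rate would be unlikely to be pure chance.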

Always use common sense

It’s easy to fall into the trap of testing for the sake of testing – if you see some improvements with your first few tests, it can be tempting to start tweaking everything in search of the perfect page. Only test the variables you believe will have a significant effect on your site, rather than those whose difference (even if positive) would be negligible. Additionally, if your traffic is low, you should only test one variable at a time – testing more than one on a low-traffic site means the test will take too long and you won’t be able to draw statistically significant conclusions.
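You can estimate up front how long a test will take for your traffic level using the standard normal-approximation formula for sample size. The sketch below assumes a hypothetical 3% base conversion rate, a desired 1-percentage-point absolute lift, and 200 visitors a day – swap in your own numbers:

```python
import math
from statistics import NormalDist


def sample_size_per_variant(base_rate, min_detectable_lift,
                            alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect an absolute lift,
    using the standard normal-approximation sample-size formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p = base_rate
    n = 2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / min_detectable_lift ** 2
    return math.ceil(n)


# Hypothetical: 3% base conversion rate, detecting a 1-point absolute lift.
n = sample_size_per_variant(0.03, 0.01)
visitors_per_day = 200  # hypothetical low-traffic site
days = math.ceil(2 * n / visitors_per_day)
print(f"{n} visitors per variant, roughly {days} days at current traffic")
```

At a few thousand visitors per variant, a low-traffic site is looking at weeks per test – which is exactly why splitting that traffic across multiple simultaneous variables is a bad idea.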


We have created a downloadable checklist of 10 questions that you should ask yourself whenever you sit down to plan a CRO roadmap.

  1. What am I ultimately trying to achieve?
  2. How does this test fit into the overall CRO testing plan?
  3. Which data am I using to work out what needs testing?
  4. Is my hypothesis rooted in fact?
  5. What specific result am I expecting from this test?
  6. Will that result affect the work of any other department?
  7. How am I going to measure success (or failure) of this test?
  8. How long am I going to run the test for?
  9. If I am testing a micro conversion, how is it going to positively impact a macro conversion?
  10. Do I have a next step if the test does not yield positive results?

What questions would you add to our checklist? Let us know via @RocketMill.