7 Important A/B Testing Rules to Follow to Raise Conversion Rates

by Joseph Putnam

A/B testing is gaining in popularity and is something a lot of businesses are considering as a way to generate more revenue.

The reason is simple.

By improving conversion rates, websites generate more orders at a lower cost per acquisition. This means better results for the same amount spent on advertising. It’s really a no-brainer.

The hard part is getting a good grasp of A/B testing and understanding the rules and principles you need to follow in order to improve conversions. This article presents seven rules of thumb the top conversion optimizers follow in order to get better results.

#1: Allow your test to run for at least 7 days

The first is to allow your test to run for at least seven days.

The reason is that A/B tests can change very quickly. One variation may jump out to an early 350% conversion boost by day two and even be ruled statistically significant by your A/B testing software, only to cool down to a 15% boost by day five. To account for these swings, make sure to let your test run for at least seven days.

Another reason to test for a longer period of time is that website traffic varies from day to day. Saturday traffic, for example, can be very different from Monday traffic. Based on that, you want to make sure to get results from every day of the week before calling a winner.

You should also keep in mind that even seven days is really a short time period for an A/B test, and you may be better off letting it run for a minimum of fourteen days just to be sure, something Neil Patel recommends in this post. In the end, you’re looking for a winner that will get long-term results and don’t want to pick a winning variation too soon only to find out it doesn’t actually boost conversions or revenue.

It’s also a good idea to let tests run until you have at least 100 total conversions. More is even better, and fewer can sometimes work, but running until there are at least 100 conversions gives you more confidence that the outcome is accurate and will deliver the results you’re looking for.
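
To make these two thresholds concrete, here’s a minimal sketch in Python (my own illustration, not a feature of any particular testing tool) that gates a test on the rules above: at least seven days of data and at least 100 total conversions before you even look at the winner.

```python
from datetime import date

# Minimum thresholds from this rule; 14 days is even safer, per the
# Neil Patel recommendation mentioned above.
MIN_DAYS = 7
MIN_CONVERSIONS = 100

def ready_to_evaluate(start_date: date, today: date,
                      conversions_a: int, conversions_b: int) -> bool:
    """Return True only once the test meets both minimums."""
    days_running = (today - start_date).days
    total_conversions = conversions_a + conversions_b
    return days_running >= MIN_DAYS and total_conversions >= MIN_CONVERSIONS

# A test started on a Monday with 60 + 52 conversions by Friday already has
# enough conversions but fails the duration check, so no winner yet.
print(ready_to_evaluate(date(2016, 1, 4), date(2016, 1, 8), 60, 52))  # False
```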

#2: Run tests until you have a 95% confidence level

The next rule to follow is to run your test until there’s at least a 95% confidence level for the winning variation.

The reasons for this rule are the same as those for rule number one. First and foremost, you’re looking to pick a winning variation that will give you better results for the long term. This means you want to make sure the results are statistically significant and that you don’t pick a winner prematurely.

Another reason is that test results can change dramatically over the course of an A/B testing period. I’ve personally seen a variation jump out to a 105% boost in conversions after a day and a half only to lose when the test is called 10 days later. This makes it even more important to wait until your A/B testing software says the results are statistically significant.

To get a better idea about how long this will take for your test, you can use this simple A/B Test Sample Size Calculator from Optimizely. Calculators like this one make it easy to determine how long you’ll need to run the test at the current level of conversion improvement before getting statistically significant results.

You’ll also want to keep in mind that the smaller the conversion boost, the longer the test will need to run, and vice versa. As such, if the improvement is only 5%, then you’ll need to run the test much longer than if it’s a 50% improvement.
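
If you want a rough sense of what calculators like Optimizely’s are doing under the hood, here’s a sketch of a standard two-proportion power calculation at 95% confidence and 80% power. The formula and the example numbers are my own approximation, not Optimizely’s exact method, but it shows why a 5% lift needs far more traffic than a 50% lift.

```python
from math import ceil

Z_ALPHA = 1.96   # two-sided 95% confidence
Z_BETA = 0.84    # 80% statistical power

def visitors_per_variation(baseline_rate: float, relative_lift: float) -> int:
    """Approximate visitors needed per variation to detect a relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((Z_ALPHA + Z_BETA) ** 2 * variance / (p2 - p1) ** 2)

# A 5% improvement on a 3% baseline needs roughly 200,000 visitors per
# variation; a 50% improvement needs only about 2,500.
print(visitors_per_variation(0.03, 0.05))
print(visitors_per_variation(0.03, 0.50))
```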

#3: Big changes lead to bigger results

Another rule of thumb to keep in mind is that bigger changes have a greater chance of leading to bigger results.

If you change the headline or button copy on your homepage, for example, you might improve conversions by 10%, 15%, or 25%. This doesn’t mean you shouldn’t test those elements; it just means you shouldn’t expect really big improvements from doing so.

But if you make a drastic change, that’s where there’s opportunity to get really big improvements.

Say, for example, that you have a SaaS business but don’t currently offer a free trial. You set up a Qualaroo survey on your site and get several questions from people asking, “Do you have a free trial I can use?” Realizing that at least some percentage of your visitors are interested in a free trial, you decide to test and see how it improves conversions and revenue.

So you go to work, set up a way for people to sign up for a free trial, and then run a test to measure the results. After one month of testing, you find out that the free trial improves conversions by 125%. Awesome! Let’s go ahead and implement that free trial!

The thing to remember is that bigger changes like this have a greater likelihood of leading to big conversion wins, which means you may want to consider running some bigger tests and not just headline, button text, and website copy changes. Bigger changes take more work to implement, but in the end it’s worth it to take some risks, test a bigger change, and then wait to see the results.

#4: The compound improvement of conversion wins

A lesser known conversion rule is that improvements increase in a compound way.

This means that four 25% improvements lead to a 144% increase in conversions, not just a 100% improvement. A 100% improvement is really good, but because wins compound, a series of improvements delivers more than simply adding the percentage increases together. (You can check the math by multiplying 1 by 1.25 four times and then subtracting 1 from the result.)

What’s the takeaway? Essentially, even a series of small improvements can have a big impact. You definitely want to test big changes that have the potential to significantly improve conversions, but four 25% improvements or six 15% improvements will also have a real impact on your bottom line.
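
If you want to check the compounding yourself, the arithmetic is a couple of lines of Python:

```python
# Four 25% wins compound to a 144% total improvement, not 100%.
lift = 1.0
for _ in range(4):
    lift *= 1.25
print(f"{(lift - 1) * 100:.0f}% total improvement")  # 144%

# Six 15% wins compound to roughly a 131% total improvement.
print(f"{(1.15 ** 6 - 1) * 100:.0f}% total improvement")  # 131%
```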

#5: A/B testing doesn’t mean just making one change at a time

This is probably the biggest misunderstanding I see people have when it comes to A/B testing. They think you need to measure the difference every little change makes, which means testing one small change at a time, but this couldn’t be further from the truth.

The reason is that you’ll never get anywhere if you only make one small change at a time. Yes, you won’t know as precisely whether factor A, B, or C drove the results, but you’ll never be able to test the big changes that get big results if you don’t test more than one change at a time.

One way to fix this is to run an A/B/n test. Instead of just running variation A against variation B, you can also add variations C and D to see how they impact results. You can test just a headline change in variation B, a headline and sub-title change in variation C, and a different headline and sub-title combination in variation D. You can have as many variations as you’d like; just keep in mind that each new variation splits your traffic further, so the test will need to run longer before you find statistically significant results.
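
Under the hood, an A/B/n test simply splits traffic across all of the variations you define. Here’s a minimal sketch of how that assignment typically works (a hypothetical example, not any specific tool’s API): each visitor is hashed into one bucket so they see the same variation on every visit.

```python
import hashlib

# The four hypothetical variations described above.
VARIATIONS = ["A: control",
              "B: new headline",
              "C: new headline + sub-title",
              "D: different headline + sub-title"]

def assign_variation(visitor_id: str, test_name: str = "homepage-test") -> str:
    """Deterministically bucket a visitor into one of the variations."""
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return VARIATIONS[int(digest, 16) % len(VARIATIONS)]

print(assign_variation("visitor-12345"))  # same visitor, same bucket, every time
```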

Multivariate tests are another way to test more than one change at once, but you’ll want to make sure you have enough experience with A/B testing before attempting to tackle a full-fledged multivariate test. You’ll also need a lot more traffic, because multivariate tests split visitors across many combinations before a winner can be selected.

#6: Macro conversions are more important than micro conversions

In the end, you always want to be measuring the results that are the most significant for your business, i.e., macro conversions.

Let’s say, for example, that you’re attempting to further improve conversions at the SaaS company mentioned above. The sign-up involves three critical steps: 1) clicking “Start Free Trial” on the homepage, 2) entering information on the sign-up page, and 3) eventually signing up for a paid account.

Which of these do you think is the most important? Obviously, it’s getting customers to sign up for a paid account. This means you don’t want to test only whether the headline and homepage copy convince people to click the Free Trial button. You also want to know whether they get more people to sign up for a free trial and, ultimately, for a paid account.

Based on this, you want to measure the impact on both free trial and paid account signups whenever possible. This may seem counter-intuitive because you might think, “If more people click through to the second step, doesn’t that mean more people will sign up for a free trial, and if more people sign up for a free trial, doesn’t that mean more people will sign up for a paid account?”

The answer is no, and I’ve seen multiple tests where one variation increased conversions from step one to step two, but a different variation increased conversions at step three, the final step in the conversion funnel.

It may seem counter-intuitive, but you want to measure macro conversions for your test results because the variation that wins from step one to step two won’t always be the winning variation for the final leg of your funnel.
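
Here’s a small illustration with made-up funnel numbers of why you report the macro conversion: variation B wins the click-through to the free-trial page, but variation A produces more paid accounts.

```python
# Hypothetical funnel data for two variations (numbers are illustrative only).
funnel = {
    "A": {"visitors": 10_000, "trial_clicks": 800, "paid_signups": 120},
    "B": {"visitors": 10_000, "trial_clicks": 950, "paid_signups": 95},
}

for name, steps in funnel.items():
    micro = steps["trial_clicks"] / steps["visitors"]  # step 1 -> step 2
    macro = steps["paid_signups"] / steps["visitors"]  # step 1 -> step 3
    print(f"{name}: {micro:.1%} click-through, {macro:.2%} paid conversion")

# A: 8.0% click-through, 1.20% paid conversion  <- macro winner
# B: 9.5% click-through, 0.95% paid conversion  <- micro winner only
```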

#7: Testing eliminates assumptions (and disagreements)

One of the best things about A/B testing is that it eliminates assumptions and disagreements. You may assume that headline A will improve conversions when, in fact, headline B gets better results. In the same way, a colleague may hate headline B and ask why it would even be tested, only to find out later that it gets better results.

The lesson here is to always be testing. By doing so, you’ll be forced to test your assumptions and to make sure each change improves conversions.

You might be certain that a new pricing page will boost conversions, only to find out it doesn’t, or you might argue for three weeks about the best headline variation with your co-workers. A/B testing is the best way to solve all of these problems and to make sure you consistently make your site better.

The Value of A/B Testing

In the end, A/B testing is one of the most valuable marketing practices you can undertake. Most businesses spend all of their money acquiring traffic and not nearly as much as they should on improving conversion rates. That’s a mistake, because when A/B testing is carried out properly, your results can only improve: you keep the winning variations and discard the losers, so you never go backwards.

Here’s one final important rule of thumb to keep in mind: When you double conversion rates, you cut your cost per acquisition in half. This means you can spend even more on advertising to dominate your competition, or you can spend that money elsewhere to build your business.
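
The arithmetic behind that rule of thumb is straightforward; here’s a quick worked example with illustrative numbers:

```python
# Same ad spend, double the conversion rate, half the cost per acquisition.
ad_spend = 10_000   # dollars
visitors = 50_000

for rate in (0.02, 0.04):  # 2% before optimization, 4% after doubling
    conversions = visitors * rate
    cpa = ad_spend / conversions
    print(f"{rate:.0%} conversion rate -> ${cpa:.0f} per acquisition")

# 2% conversion rate -> $10 per acquisition
# 4% conversion rate -> $5 per acquisition
```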

If you follow the rules of thumb from this post, you’ll be better prepared to double conversion rates and lower your cost per acquisition. This may take an entire year and twenty or more tests, but in the end, it’s totally worth it if you’re able to double your results from paid advertising campaigns.

Read other Crazy Egg articles by Joe Putnam.

Joseph Putnam

Joe Putnam is the Director of Marketing at iSpionage, a competitive intelligence tool that makes it easy for PPC advertisers to download their competitors’ top keywords, ads, and landing pages. He also recently wrote a free guide titled The Top 10 PPC Mistakes and How to Fix Them.
