
Start A/B Testing Today with 6 Simple Steps

Disclosure: Our content is reader-supported, which means we earn commissions from links on Crazy Egg. Commissions do not affect our editorial evaluations or opinions.

You’ve likely heard the phrase, “If you’re not doing A/B testing, you’re guessing” many times while reading various marketing blogs.

This is applicable to a wide range of activities, and it’s certainly the case for website improvements and conversion rate optimization.

As Kathryn Aragon points out here:

The trouble is, if we aren’t testing, we’re fooling ourselves. We don’t really know what works. We’re just guessing.

That’s why for many site owners, testing new changes before implementing them is a no-brainer. We treat it like an essential step in the process — because it is.

But it’s important that you don’t treat the testing step as simply another box to check. There’s a big difference between running a test and running an effective test.

If you approach the testing process as simply a way to validate your own guesses before moving forward, it’s easy to draw inaccurate conclusions.

And in some ways, this approach is just as bad as guessing — if not worse.

For example, let’s say you have a great idea for new copy on your main call to action buttons. You create a variation with that copy, then run a test for three days.

At the end of those three days, you notice that your new button generated 5% more clicks than the original version.

That’s great! But…

What if your test only reached a few dozen people, and some of them were already planning to convert before they even saw your button copy?

What if your results were simply due to a matter of chance?

Or, worse yet, what if your new call to action copy was generating clicks from lower-quality leads?

With this kind of approach, all of these scenarios are entirely possible.

Now, think about what would happen if you used your newfound “insight” to implement your new copy as a site-wide change. This would not only be a waste of your time, but you’d wind up disappointed by the nonexistent (or negative!) impact.

Of course, this isn’t to say that testing is a waste of your time. It’s still an essential step in the optimization process.

But you need to be mindful of taking an approach that will yield accurate, helpful results for your site.

Not convinced?

In one Econsultancy survey of 800 marketers, 82% of companies with a structured approach to testing and optimization saw improvements to conversion rates.

And for those who didn’t take a structured approach, only 64% reported seeing any sort of improvement.

It’s clear that with the right strategy, testing can have a major impact on your site’s success.

That’s why in this post, I’ll go through the basics of creating an accurate A/B test, then walk you through your first one in six simple steps.

What is an A/B Test?

An A/B test is one of the simplest methods for testing different ways to improve your website and generate more conversions.

It involves identifying a page on your site that you want to change in order to achieve a specific goal. In most cases, your goal will be to increase clicks on a specific conversion-focused element.

Then, it requires creating two variations of that page (Variation A and Variation B), with some sort of change between the two. This change might be a different color button, a new image, or rewritten copy.

Then, you divide your traffic between the two so that different visitors see different versions simultaneously. This way, you don’t have to worry about time of day or day of the week impacting your results.

Once your test has run for a set period of time, you compare the results of both variations and determine which was more effective in producing your desired result.
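
Mechanically, that traffic split is usually just deterministic bucketing: each visitor is assigned to a variation based on a hash of a visitor ID or cookie, so the same person always sees the same version. Here's a minimal sketch of the idea in Python (the visitor IDs and the 50/50 split are illustrative assumptions, not any particular tool's implementation):

    import hashlib

    def assign_variation(visitor_id, variations=("A", "B")):
        """Deterministically bucket a visitor into one variation.

        Hashing the visitor ID (instead of choosing randomly on every
        page load) ensures a returning visitor always sees the same page.
        """
        digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
        return variations[int(digest, 16) % len(variations)]

    # Hypothetical visitor IDs, split roughly 50/50 between A and B
    for vid in ["visitor-101", "visitor-102", "visitor-103", "visitor-104"]:
        print(vid, "sees variation", assign_variation(vid))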

A/B Testing Examples

Here’s a simplified visualization of the process from Optimizely:

ab testing

As you can see in the above screenshot, the only thing that’s different between the original and the variation is the color of the “Buy Now” button.

In this hypothetical test, the original page with the gray button generated a 1% conversion rate during the test, for a total of $1,000 in sales. Over the same time period, the variation with the red button saw a 4.5% conversion rate, for a total of $4,500 in sales.

Assuming that this test ran for a significant amount of time, the results indicate that permanently switching that button to the red version would yield more conversions than leaving it gray.

So instead of making this change and hoping for the best, the site owner can now make a data-backed decision.

Now, let’s take a look at another visualization from VWO.

ab testing example

In this example, Variation A has an orange header, and Variation B has a green one. After dividing traffic equally between the two, Variation A generated a 23% conversion rate, and Variation B generated an 11% conversion rate.

That’s a significant lift, and if implemented sitewide, it could make a huge impact on this site’s conversion rates.

Of course, this is a hypothetical test — but real tests can produce similar results.

For example, let’s take a look at a case study from BEHAVE.org (formerly WhichTestWon.com).

In this test, the site created two almost identical versions of a landing page, where the only difference was the button copy. The button on the first variation said, “See Product Video” while the button on the second said “Watch Demo.”

Everything else on the page remained the same in both variations.

This is a pretty minor change, right?

Sure.

But this minor change produced very different results.

Version A, with the “See Product Video” call to action, increased form fills on the page following the button by 48.2%.

ab testing example 2

That’s a pretty serious impact for only altering three words.

And after implementing this change to show the revised call to action to all of their traffic, this company could expect a significant lift in conversions.

But if they hadn’t run a test, and simply guessed that the “Watch Demo” button would get a better response, they would’ve essentially left money on the table.

Of course, that’s not to say that the call to action they wound up with was ultimately the best possible option.

After all, they only tested two variants — and the number they could potentially test is virtually limitless.

If this company wanted to test more of those variations, they could continue running simple A/B tests, pitting two calls to action against one another at a time.

But they could also speed up the process with split testing.

Split testing follows the same process as A/B testing, but involves more than two variants. So if you have multiple ideas you want to test at the same time, this is often a good option.

For example, when SproutSocial wanted to increase their trial signups, they tested four variations of a banner.

ab testing example 3

Each one had the same color scheme, call to action, and offer. In this case, it made sense to test all four variations at once, since there weren’t any major changes between them.

If you have a large enough audience, this can be a great way to speed up the testing process. And if you already know that you want to test multiple variations against one another, you don’t need to do so one at a time.

That being said, a test can only be considered an A/B or split test if you only change one element on each page.

For example, if you create five variants of a page, each with a different color call to action button, this is a split test.

But, if within those five variants, the buttons are different colors and use different copy, that’s a multivariate test.

Multivariate tests can also be helpful for learning about your audience and improving your site. But they’re a bit more complex to set up and run correctly than tests with only one change per page.

So in this post, we’ll keep things simple by sticking to A/B and split testing. But you can learn more about multivariate testing here.

Another thing to keep in mind as you develop your tests is that the elements you can work with aren’t limited to buttons or banners.

Although the examples we’ve covered so far focused on generating clicks on a call to action, you can test virtually anything on your site.

And in order to get the best results, you should be testing other elements on your site.

In another example from WhichTestWon, a non-profit wanted to increase their donations.

They decided to run an A/B test in which the only variable was the list of gift amount options. On the original page, visitors could opt to donate $100, $150, $200, or $300. On the variant, they could select from $110, $160, $210, or $310.

Each page also included an option for users to manually enter an amount not included on the list. So neither version was preventing visitors from donating any amount they wanted — they simply altered the list of pre-set amounts.

So, did this make an impact?

It sure did.

The variation with the slightly higher donation options increased the average donation amount per person by 15%, increasing overall revenue by 16%.

ab testing example 3

In both variants, users went for the second-smallest option, even though they could’ve easily entered their own donation amount in the box at the bottom of the list.

As a result, the organization was able to increase their revenue, without actually limiting donors.

This turned out to be an excellent optimization for this site — but it’s important to keep in mind that the results here are unique to their audience.

If you’ve spent any time researching your options for A/B testing, you’ve likely come across dozens of case studies about how other site owners have achieved significant increases in conversion rates with their tests.

So, let’s say you also want to encourage visitors to make donations on your site. You read this case study, and the results are clear.

Should you immediately change your donation options and increase them each by a small amount?

Nope.

You might use that case study as inspiration for a future test, but you should only ever make permanent changes based on your own results.

The fact that another site owner’s visitors responded positively to a specific change doesn’t mean that your visitors will, too.

In fact, making changes based on another site’s results negates A/B testing’s biggest advantage: The ability to test your ideas on potential customers.

After all, it doesn’t matter what any other company’s customers respond to positively. The only audience you need to worry about is your own.

And not all audiences respond to changes the same way.

For example, many site owners consider it a best practice to add “social proof” to signup forms. Your “proof” might be your number of existing subscribers, your number of current customers, or even a positive client testimonial.

The idea is that when a user sees that other users like them trust your company, they’ll be more confident in their decision to purchase or work with you, too.

So when looking at this example, it’s safe to say that most marketers would guess that Version A (with social proof) would generate more email opt-ins.

But you can probably see where this is going, right?

In an A/B test, Version B generated a 122% increase in subscriptions. That’s over twice as many new email subscribers — and with a form that doesn’t follow a commonly-accepted “best practice.”

So as you come up with optimization ideas and create new tests, remember to always keep the focus on your target audience.

That’s the only way you’ll get the kind of results that can help you improve your site’s performance and increase your revenue.

And with that in mind: Let’s get started.

The following six steps will help you develop, launch, and measure an effective A/B test for your website.

1. Determine your goal

Before you create a test, you need to know what, exactly, you’re hoping to accomplish.

And this isn’t a time to be vague.

If your goal is simply to “improve your website” or “increase revenue,” you’ll have a tough time figuring out where to start with your changes. And at the end of your test, it will likely be difficult to say for sure whether your test was successful.

Instead, be specific about exactly which actions on your site you want to generate.

Do you want to get more clicks on a homepage call to action? Earn more subscribers for your email list? Or generate more purchases in your ecommerce store?

Great.

These are all definable, testable metrics.

Of course, your goals will vary based on your business model.

For example, a B2B company will likely have different A/B testing goals than a B2C ecommerce store. The former might want to focus on driving engagement, while the latter would be more concerned with increasing on-site purchases.

But in both cases, these companies could come up with helpful optimization goals by starting with their main business objectives.

After all, your site is ultimately a tool for helping your business grow. No amount of clicks or pageviews really matters if it doesn’t move you closer to your company’s goals.

So if you’re not sure which metrics to focus your tests on, begin there and work your way down.

First, identify your main business objectives. In most cases, this will be increasing product sales, user signups, or client contracts.

Next, determine your most important website goals. These are likely closely related to your business objectives.

For example, if you run an ecommerce store, your main goal is generating online purchases. And if you run a service-based business, you probably want to get potential customers to contact you.

From there, you’ll need to identify your site’s key performance indicators, or KPIs. These are the results that signal how well your site is doing in helping you reach your goals. They might be form submissions, completed purchases, or email signups.

At this point, you’ll need to narrow in on one KPI for your first test. Although you may track several KPIs on your site, you should focus on one at a time when optimizing your site.

Finally, determine a target metric you want to achieve with your test. This will be a specific number related to the KPI you’re focusing on.

So if you’re looking to improve your number of leads, your target metric might be a 6% conversion rate for form submissions on your contact page.

If you’re not sure how to determine an appropriate number for this, it’s perfectly acceptable for your goal to be to improve upon your site’s current performance.

And if you’re not already measuring your KPIs, you’ll want to spend some time digging into your analytics to set an accurate benchmark.

First, make sure you’re tracking the KPIs you want to improve as Google Analytics goals. If you’re not, you can follow Google’s step-by-step instructions here.

After you start tracking these goals, it’s easy to measure how many of your users are completing them. You’ll have access to reports for each that look something like this:

ab testing tools

In this report alone, you can see how many conversions your site is generating for a specific goal, broken down by the hour, day, week, month, or any other time frame of your choosing.

You can also access your data in many different ways to get a deeper understanding of how your site generates conversions. The most helpful in this case will be to break down your data by page, then look at the current conversion rate of the page you want to improve.

ab testing google analytics

In this case, the page’s goal conversion rate is 56.71%. If the site owner were to run an A/B test, their goal might be to create a variation that generates a conversion rate of 65%.

This might sound like an impossibly high conversion rate.

And for most major conversions, it is.

But it’s important to note that you don’t always need to focus your tests on major conversions like sales and lead form submissions.

While these have the clearest immediate impact, they’re not the only goals that matter for your site. And in most cases, they aren’t the first actions that a visitor takes.

Instead, most users will first make a series of “micro conversions,” or smaller actions, that prepare them to complete a “macro conversion,” or one of your major goals.

These micro and macro conversion pairings vary by site, but the following chart shows what they might look like for ecommerce, SaaS, and app websites.

For example, an ecommerce shopper is unlikely to visit a company’s store for the first time and make a purchase immediately.

First, they’ll browse category pages to see if the store has what they’re looking for. Then, they’ll visit a product page (or series of product pages) to learn more.

They might read reviews, check out different buying options, and even watch product videos if they’re available. Then, if they find something that meets their needs, they’ll add it to their shopping cart.

That’s six actions so far — and they still haven’t made a purchase.

But each of those actions can be considered a micro conversion, because they all get the user closer to buying.

In the chart above, however, all of the macro conversions in the left column center on driving immediate purchases.

But as this next chart shows, there are also plenty of micro conversions that lead up to goals that don’t result in immediate revenue, like trial signups and form completions.

ab testing conversion funnel

For example, many service-based B2B companies don’t generate sales directly on their websites. Instead, their sites are designed to bring in contact form submissions or quote requests.

But even though a form submission doesn’t require an immediate investment from a visitor, most users will spend some time researching a company before contacting them.

They might browse a few testimonials, read through some case studies, and check out pricing information before deciding whether to visit a contact page.

So even though the actual goal completions might only happen on that contact page, users likely wouldn’t make it that far in the first place without first making a few micro conversions.

Keep this in mind as you set your A/B testing goals.

It can be tempting to focus solely on high-impact metrics like sales and form submissions. And while it’s certainly worth looking for ways to generate more of those conversions, don’t lose track of the less exciting actions that precede them.

After all, a user has a zero percent chance of completing a purchase if they never add a single product to their shopping cart.

Take a big picture approach to how users move through your site, and you’ll be much more successful in driving them to ultimately take the actions that help you reach your company’s goals.

2. Decide which page to test

Once you’ve determined a goal you want to accomplish, you’ll need to select a page to start optimizing.

In some cases, this will be obvious.

For example, if you want to increase your percentage of completed purchases, it makes sense to focus your efforts on your checkout page.

And if you want to increase submissions of a form on a specific page, your answer is pretty clear.

But for other goals, your starting point will be less obvious. This is especially true if you’re looking to increase conversions for which you have calls to action on multiple pages.

In general, you’ll want to prioritize pages that have the potential to make the biggest impact. After all, increasing conversion rates by just one or two percent on pages that see the most traffic can often make more of an impact than bigger jumps on low-traffic pages.
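
To see why, it helps to run the numbers. The quick sketch below compares a small lift on a busy page with a bigger lift on a quiet one (the traffic figures are made up purely to illustrate the point):

    # Hypothetical monthly traffic for two pages
    busy_page_visitors = 50_000
    quiet_page_visitors = 2_000

    # A 1-percentage-point lift on the busy page...
    extra_conversions_busy = busy_page_visitors * 0.01    # 500 extra conversions

    # ...versus a 5-percentage-point lift on the quiet page
    extra_conversions_quiet = quiet_page_visitors * 0.05  # 100 extra conversions

    print(extra_conversions_busy, extra_conversions_quiet)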

If your goal is to make changes that will see the largest possible audience, your first thought might be to focus on your homepage.

Logically, this would have the biggest impact on your overall conversion rates. And while this is true if your homepage receives more traffic than other pages on your site, that’s not always the case.

So before you select a page, spend some time digging in Google Analytics.

First, you can see your site’s top landing pages by navigating to Behavior > Site Content > Landing Pages. This report will show you your site’s most common entry points.

In the screenshot above, this site’s top landing page is their homepage. So if they wanted to optimize for a conversion that takes place early in their sales funnel, this would be a logical starting point.

But traffic levels alone can’t tell you if a page is worth optimizing.

If you’re looking to make a big impact, you’ll also want to identify pages with lots of room for improvement. So if your highest-traffic page is already achieving a high conversion rate, spending your time running a test might not be worth the incremental change it produces.

Instead, look for pages that aren’t yet generating the results you want. These will typically be pages with low conversion rates or high bounce rates.

You can find these pages by navigating to Behavior > Site Content > All Pages.

This report will show you key metrics about each page on your site, including pageviews, average time on page, bounce rate, and conversion rate.

You can comb through this report to see if there are any clear issues with certain pages. Sometimes, pages will immediately stand out as low performers.

But you can make the process even easier by using the comparison tool.

Once you turn on this tool, you can select a metric, then see where your pages stack up according to that metric.

For example, if you want to figure out which pages aren’t great at encouraging visitors to stick around, you can compare your pages by bounce rate.

ab testing analytics

The red and green bars in this report make it easy to identify low-performing pages. In this case, the red bars signify pages with above-average bounce rates.

So if you’re looking to increase micro conversions that encourage visitors to learn more about your company or products, one of these pages might be a great starting point.

And if you’re looking to increase conversions for a specific goal, you can also compare your pages based on goal conversion rate.

If you notice that one page is generating disproportionately low conversions for an important goal, running a few tests could be a great way to uncover high-impact changes.

Finally, if you’ve set up funnels for your conversion goals in Google Analytics, your funnel visualization report can provide a wealth of data on where you’re losing customers.

When you create a goal funnel, you tell Analytics how you expect your visitors to move through your site before converting.

In this example, users on an ecommerce site need to move through a checkout page, shipping page, billing page, and payment page before completing their purchase.

Your funnels will likely vary a bit for each of your goals, but once you’ve created them, you can see exactly where you’re losing visitors in your funnel visualization report.

In this example, we can see that only 2.93% of visitors are moving from the homepage to the next page in this site’s funnel. This shows that the homepage has a lot of room for improvement.

With less than three percent of their total traffic making it to the second step in the funnel, their conversion potential is significantly cut right from the start.

Fortunately, of the visitors that do proceed to the “Why Join” page, a healthy 67.84% move on to the next step. So this page is already effective in getting users closer to the goal.

But from there, only 25.78% of those users actually complete their registration — meaning that this page also has some optimization potential.
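
If you want to reproduce that funnel math outside of Analytics, it’s just a chain of multiplications. The sketch below uses the step-to-step rates from the example above to compute the end-to-end completion rate and flag the weakest stage:

    # Share of visitors who continue from each step to the next
    # (percentages taken from the funnel example above)
    funnel_steps = {
        "homepage -> why join": 0.0293,
        "why join -> registration": 0.6784,
        "registration -> completed": 0.2578,
    }

    overall = 1.0
    for rate in funnel_steps.values():
        overall *= rate

    weakest = min(funnel_steps, key=funnel_steps.get)
    print(f"End-to-end completion rate: {overall:.2%}")  # roughly 0.51%
    print(f"Biggest drop-off: {weakest}")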

So as you look at your funnel visualization report, focus your attention on pages with large drop offs.

If you notice that a large percentage of your traffic isn’t moving from one key stage of your funnel to the next, optimizing that page could have a positive impact on your entire funnel.

3. Select elements to A/B test

At this point, you should know which page on your site you’ll be testing, and what you hope to accomplish with your test.

Now, you need to decide exactly what you want to test on that page.

Tools like Google Analytics can help you identify where your problems lie, but not what’s causing them.

And that’s why testing is so important. You can try creating variations of any element on your page, then test to see what works.

There are dozens of elements you might consider, including:

  • Your copy
  • Your call to action text
  • Your call to action’s visual appearance
  • Your call to action’s location on the page
  • Your form fields

I could go on and on. There are so many things to test.

But as you weigh your options, focus on changes that will impact how your users interact with the page.

You should also make sure to test an element that affects the goal you’re hoping to achieve. For example, testing fancy new graphics or colors in place of old ones might help you increase the amount of time your visitors spend on a page.

Another option is to A/B test different button colors. Parallel Project Training used this method to determine whether their website visitors preferred to click call-to-action buttons in the original brand color (a deep purple, shade #660080) or a green color (#149414).

They ran their test for four months and found that the uplift in Variant 1 (green) over the Control (purple) was 38.15%.

There is plenty of evidence to show that website design elements trigger emotional changes in visitors that can increase conversions and that certain colors can boost conversion rates.

But if your goal is to increase conversions, that might not be the most effective way to do so.

You should also aim to make significant changes, especially if you’re in the early stages of optimizing your site.

While there are countless case studies out there on button colors, the color of one element is unlikely to be the determining factor in whether a user decides to convert.

After all, if your website visitors don’t understand how they could benefit from working with you or what they can expect from clicking a button, they’re not going to convert — no matter what color that button is.

Plus, if you’re new to A/B testing, it’s almost certain that there are other, more important changes you could be making.

So your time is better spent altering parts of your page that play a role in the decision-making process.

And, sure: Once you’ve made the kind of significant changes that help your users move more easily through your pages, and encourage them to convert at higher rates, you can get a bit more granular with testing button colors.

But you should save those smaller tests for after you’ve identified solutions to the more pressing problems on your site.

Unfortunately, this is not how many site owners approach their testing process. In one MECLabs & Magento survey, 38% of respondents said they developed their tests based on “intuition and best practices.”

While your intuition might be helpful for other life decisions, it’s not what you want to rely on when coming up with site improvement ideas.

Fortunately, there are a few ways to gain data-backed insight into which elements play a role in whether your visitors take a desired action.

One of the best is running a heatmap test on the page you want to optimize.

Heatmap tests show you where visitors are focusing their attention on a page — and where they’re not.

They collect click data from actual visitors on your site, then create a visual representation of those clicks on a screenshot of your page.

Areas that receive lots of clicks “glow.” The warmer the color, the more attention that area received.

Areas that don’t see a lot of engagement, on the other hand, remain dark.
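
Under the hood, a click heatmap is just aggregated coordinates: each click’s x/y position is counted into a grid cell, and cells with more clicks are rendered in warmer colors. A minimal sketch of that aggregation step (with made-up click data) might be written like this:

    from collections import Counter

    # Hypothetical (x, y) click positions logged on a page, in pixels
    clicks = [(102, 48), (110, 52), (640, 400), (105, 50), (98, 45), (630, 390)]

    CELL = 50  # group clicks into 50 x 50 pixel cells

    heat = Counter((x // CELL, y // CELL) for x, y in clicks)

    # The cells with the highest counts are rendered as the "hottest" areas
    for cell, count in heat.most_common():
        print(f"cell {cell}: {count} clicks")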

As a result, a typical heatmap looks something like this:

In this example, we can see that the majority of the clicks on the page are going to links in the main navigation bar and sidebar, as well as an email signup form. Considering that this is an information-based page, these results make sense.

But if this page were supposed to generate a specific conversion, these results would show a serious problem.

For example, let’s take a look at how the clicks are distributed on this service page:

Although the page provides lots of information, it doesn’t give visitors a clear indicator of how to begin using any of the services listed.

That’s apparent in the lack of click concentration on the page, other than in the menu bar.

If you find that a page on your site that’s supposed to be generating a specific action has a heatmap that looks like this, it’s a clear indicator that you have some work to do.

Ideally, your heatmap for important pages should look more like this next example, with its one prominent call to action button:

Although there are still clicks going to the menu bar, users are clearly taking action by clicking the call to action button — exactly what the page is intended to encourage them to do.

When you run heatmap tests, you can see whether users are paying attention to the elements you want them to. In some cases, you’ll find that it’s unclear what action you want them to take.

And in other cases, you’ll see exactly what’s distracting them from your desired goal.

For example, take a look at this heatmap for an old landing page on Nurse.com.

ab testing tools heatmap

This page was designed to generate clicks on a call to action for the company’s continuing education courses. In the screenshot above, it’s the green button instructing users to “Buy Unlimited CL Membership.”

But after analyzing this heatmap, the tester found that users were:

  • Dividing their clicks between two competing calls to action
  • Clicking on non-clickable elements
  • Being distracted by less-important links
  • Ignoring the main copy on the page

Then, they used this insight to create a variation of the page to test against the original.

In the variation, they consolidated the competing calls to action into one button and removed the links that were distracting visitors.

They moved the most important information to the upper left of the page, since this space was already attracting a decent amount of engagement — even though the elements there weren’t clickable.

Now, let’s take a look at the heatmap for that revised page:

After implementing those changes, the vast majority of users focused their attention on the most important content.

And as a result, Nurse.com saw a 15.7% increase in sales from the page.

Once you’ve spent some time digging into your user testing results, you can create a hypothesis for why your traffic isn’t converting at the rate you want.

For example, if you determine that the problem is a less-important link distracting users from your main call to action, your hypothesis might be that creating a variant without that link will lead to more conversions.

ab testing problem

Then, you can use this hypothesis to guide your changes.

As you come up with ideas to test, remember that while smaller changes make for simpler tests, they’re unlikely to deliver impactful results.

Aim to develop hypotheses that will change how users interact with your brand and move through your site. When it comes down to it, running one effective test is much more impactful than a series of ineffective ones.

So although getting a new test up and running every few weeks might feel more productive than spending a significant amount of time developing a page variant, it’s important to strive for impact over frequency in your tests.

In fact, sites with the highest conversion rates aren’t running the most tests.

As this study shows, sites with the highest number of total tests actually achieve some of the lowest conversion rates.

Although constantly testing every last element on your site in quick succession might seem like the best way to maximize your conversion potential, that’s not the case at all.

So, which elements should you be testing?

You’ve probably noticed that I’ve referenced calls to action multiple times in this post.

That’s because they’re some of the most impactful elements on your site. They’re what tell your visitors exactly what you want them to do.

Without calls to action, none of your visitors would become customers. And without effective calls to action, only a tiny portion will.

Fortunately, calls to action are also some of the easiest elements to test.

For example, when CityCliq wanted to increase conversions for their web design services, they decided to test a few different calls to action.

In addition to their original copy, “Businesses grow faster online!” they came up with:

  • Create a webpage for your business
  • Get found faster!
  • Online advertising that works!

And here’s how those variations performed:

The “Create a webpage for your business” variation saw a 47.8% conversion rate — which was a 90% increase from the original version.

They almost doubled their conversions by changing six words on that page.

Calls to action make a huge impact because they let you convey value to your visitors. In this case, this business’s audience was most persuaded by a straightforward description of what they were offering.

So if your copy doesn’t make it immediately clear how a user can benefit from taking action, this is an excellent place to run a potentially high-impact test.

This includes the copy on your buttons, especially if you’re currently using generic copy like “Submit.” If this is the case on your site, you might run an A/B test with button variations that look like these:

In addition to the copy on your buttons, you also might consider testing where they’re placed on the page.

For most sites, it makes sense to place the most important elements “above the fold,” or within a user’s first view of your site.

If your call to action is above the fold, it will appear on their screen as soon as they land on your page. They’ll know exactly what you want them to do right from the start.

But if your call to action is below the fold, they won’t see it until they scroll down. And considering that not all of your visitors will scroll, some of them may never see the action you want them to take.

For example, on one of Unbounce’s landing pages, they started with three calls to action below the fold.

At the top of the page, there was a call to action instructing users to “Pick your plan below.” This button used a scrolling effect to move visitors down to the pricing plan options.

But in order to increase conversions, they created a variation that moved the individual plan calls to action to just above the fold.

ab testing example results

As a result, they saw a 41% increase in conversions on the page.

Of course, as with any conversion rate strategy, moving your most important call to action above the fold isn’t always the right move.

In fact, in this next example, the site achieved a 220% increase in conversions by moving their call to action below the fold.

So if your most important elements have always been in the same place on your pages, moving them around in a variation might yield some surprising results.

You can also improve the way you convey value and encourage visitors to take action by optimizing the forms on your site.

The best way to maximize your form submissions is to make your forms as easy as possible for users to fill out.

In most cases, this means keeping your number of fields to a minimum.

For example, one contact form on NeilPatel.com originally contained four fields:

  • Name
  • Email address
  • Website URL
  • Revenue

This was already a relatively small form that most visitors could likely fill out in about 30 seconds.

But when Neil wanted to boost conversions for this action, he removed the revenue field and tested a form that looked like this:

As a result, he saw a 26% increase in conversions.

So, what happens if you have an even larger form to begin with?

If you take an approach like Imagescape, you could see an even bigger lift in conversions by removing some of your fields.

When the site wanted to improve their conversion rates, they began with a form that had 11 fields. Then, they created a variation with only four.

This new form caused their conversion rate to jump from 5.4% to 11.9%, for a 120% increase in form submissions.
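
That 120% figure is simply the relative lift between the two conversion rates. If you ever want to sanity-check numbers like these for your own tests, the calculation is a one-liner:

    def relative_lift(control_rate, variant_rate):
        """Percentage improvement of the variant over the control."""
        return (variant_rate - control_rate) / control_rate * 100

    # Imagescape example from above: 5.4% -> 11.9%
    print(f"{relative_lift(0.054, 0.119):.0f}% lift")  # prints "120% lift"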

Although this meant they were now collecting less information about each individual lead, they were increasing the total number of leads in their funnel.

This is something you’ll need to consider when optimizing your forms. What is the minimum amount of information you need for each type of conversion on your site?

For your email subscription form, you might cut it down to just one field for the user’s email address. But if you’re collecting quote requests or client inquiries, you’ll likely need a little more information to evaluate whether the user is a qualified lead.

But if you could accurately assess your leads with less data than you’re currently collecting, cutting out a field or two could be a helpful test for your site.

As you optimize your forms, it’s also important to consider the text surrounding them.

While calls to action, form fields, and buttons typically have the largest impact, the other copy near your forms can also help users decide whether to take action.

For example, this Shopify form reassures users that the tips and resources they’ll receive by signing up for their list are completely free, and that they can unsubscribe at any time.

If a user is on the fence about signing up, this could be exactly what they need to read to eliminate their hesitation.

4. Design your test

By this point, you’ve set a goal, selected a page, and determined what you want to test.

Now, it’s time to create and launch your test.

Fortunately, this step is easier than it might sound.

First, you’ll need to develop the “creative” part of the test. This might involve rewriting copy, coming up with new calls to action, or redesigning a graphic.

Next, you’ll need to select the testing platform you want to use. There are many tools available for this, but if you’re new to A/B testing, you’ll want to select one that makes it easy to get your test up and running.

So for this post, we’ll focus on two platforms that offer straightforward visual editors.

The first one to consider is Google Optimize.

If you’re not yet sold on the value of A/B testing, this could be the perfect way to get started. Running tests on the platform is completely free — so your only investment will be time.

To get started, navigate to Optimize’s signup page and click “Sign Up for Free.”

ab testing form

After you create your account, you’ll need to enable Optimize on your site by adding a small snippet of code.

If you manually installed Google Analytics on your site, this step is as simple as copy-and-pasting an additional line into your existing tracking code.

Then, set up your account details, give your first test a name, and create your variants in their user-friendly visual editor.

Simply click the element you want to change in your variant, select “Edit Element,” and make the changes you want to see.

This requires minimal HTML knowledge, and if you’re only changing one element on each page, you can likely create each variant in a minute or so.

To learn more about launching a test in Google Optimize, you can check out a full walkthrough of the platform here.

Another platform to consider is Crazy Egg’s A/B testing tool.

ab testing tool crazy egg

If you’re already a Crazy Egg user, you can start running A/B tests with this tool at no additional charge. This platform also includes a visual editor to make creating variants a simple process.

Log into your account, select “A/B testing” in your dashboard navigation bar, and click “Add new test.”

Give your test a name, then open the visual editor and click the “Add Variant” button to create your first variant.

From there, simply click on the element you want to change. Then, determine whether you want to edit the text and style, change the element’s color, or remove it from the variant altogether.

Repeat this process for each variant you want to test.

Then, regardless of which platform you choose, you’ll need to set an objective for your A/B test.

In Google Optimize, the default options are decreasing bounce rate, increasing pageviews, and increasing session duration.

Any custom goals you’ve created in Google Analytics will also appear in this list. So if you’re already tracking the type of conversion you want to optimize the page for, this makes measuring your success easy.

In the Crazy Egg A/B testing tool, your default options are “Sell more products,” “Get more registrations,” “Generate more leads,” or “Get more page views.”

ab testing crazy egg tool

Even if you don’t yet have custom goals set up on your site, these options enable you to measure your performance for important KPIs.

Finally, you’ll need to determine how much of your traffic you want to see each variant.

If you’re running a simple test with two variants, you’ll likely want to divide all of your traffic evenly between the two.

And if you’re testing more than two variants, you can divide your traffic evenly among all of them.

But if you’re running a test where you’re unsure of how well a specific variant will perform, you may want to allot a smaller portion of your traffic to that variant.

For example, if you’re testing a call to action that’s designed to be attention-grabbing, with bold language or a touch of humor, it’s tough to know how your audience might respond.

Maybe they’ll love it!

But maybe not.

And if this call to action is on a valuable, high-traffic page, that might not be a risk you’re willing to take for a large portion of your audience.

Instead, you can start by only sending 5-10% of your traffic to that variant. Then, if you see positive results, you can update your settings to direct more of your traffic to it.

Since your results are based on conversion rates, and not total number of conversions, it’s still possible to get accurate results with varying portions of traffic to different variations.
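
One simple way to implement that kind of uneven split is to extend the deterministic bucketing idea from earlier with weights. The sketch below is a generic illustration, not any specific platform's method; it sends roughly 90% of visitors to the control and 10% to the riskier variant:

    import hashlib

    def assign_weighted(visitor_id, weights):
        """Bucket a visitor into a variation according to traffic weights."""
        digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
        point = (int(digest, 16) % 10_000) / 10_000  # stable value in [0, 1)
        cumulative = 0.0
        for variation, weight in weights.items():
            cumulative += weight
            if point < cumulative:
                return variation
        return variation  # fallback for floating-point rounding

    # Send only 10% of traffic to the bold new call to action
    print(assign_weighted("visitor-202", {"control": 0.90, "bold_cta": 0.10}))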

Plus, if you run your tests with Crazy Egg, the platform will automatically adapt to show the variant with the best conversion rate to the highest portion of your users.

This way, you can continue to learn about what your visitors like (and what they don’t) — without sacrificing conversions.

5. Let your test accumulate data

Once you’ve launched your test, you need to let it run for a long enough period of time to collect significant data.

This is often the most difficult part of running an A/B test for many site owners.

After all, you just spent all this time learning about your users, finding opportunities for improvement, and creating new page variants.

Now, you want to see the results of your efforts!

That’s understandable.

But running an effective test requires a bit of patience.

So go ahead: Log into your testing account and check out your results during your test.

Just do not (I repeat, DO NOT) stop, pause, or edit your test until it’s complete.

After a few days, you might see results that point to one variation as a clear winner. This is extremely common.

But you should never make decisions based on these early results.

For example, take a look at these initial landing page split test results, where the gray line represents the original page’s conversion rate, and the blue line shows the new variant’s.

ab testing tool 1

If this site owner had checked their results at any point within the first few days, they would’ve seen that the new variant was dramatically outperforming the original page. It was achieving almost double the conversion rate!

But as you can see, as the test continued to run, those lines slowly converged until there was virtually no difference at all in conversion rate.

This is a common occurrence with early test results.

It even holds true with tests that involve multiple variants — like this example from ConversionXL with five landing page variants.

During the first few days of this test, the third variant (represented in the above graph by a blue line) emerged as a clear winner. But within a few weeks, the conversion rates of all the variants were nearly identical.

There was no clear winner, and this site owner had no conclusive evidence that implementing one of those variants would have an impact on their results.

Now, consider what would’ve happened if the person running this test had stopped it within the first week.

They would’ve concluded that the third variant was by far the most effective in generating conversions. Then, they likely would’ve spent hours creating a permanent version of that variant and implementing it on their site.

And after all of that effort, they’d be very disappointed when their winning variant didn’t actually have an impact on their conversions.

Stopping too early would’ve led them to waste time and money on a variant that didn’t make a difference in their site’s performance.

Plus, in this example, the conversion rates ended up being the same. But in some cases, the results can reverse over time.

Although stopping that particular test too early would’ve resulted in no change at all, stopping some tests could actually lead you to choose a variant that performs worse than your original.

So, why do these changes occur?

Early in your testing period, your results are calculated based on a relatively small pool of users. This means that each individual user has a much larger impact on your overall results.

In those first few days, even one or two visitors who behave unusually can make a big difference in your conversion rates.

But as you accumulate more views on your tests, your results will average out to reflect the majority of your audience’s behavior. The larger your test audience, the more accurate your results will typically be.
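
You can see that averaging-out effect with a quick simulation. The sketch below assumes a page that truly converts 5% of visitors and prints the observed rate at a few checkpoints; the early readings bounce around before settling near the true value:

    import random

    random.seed(7)
    TRUE_RATE = 0.05  # assume the page really converts 5% of visitors

    conversions = 0
    for visitors in range(1, 5001):
        conversions += random.random() < TRUE_RATE
        if visitors in (50, 200, 1000, 5000):
            observed = conversions / visitors
            print(f"after {visitors:>5} visitors: observed rate {observed:.2%}")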

Of course, you can’t run your tests forever. You need to stop and evaluate at some point in order for them to provide any real value to your site.

So how can you choose a good stopping point?

Some site owners choose to run their tests for a certain period of time. And it is a good idea to run your tests for full weeks at a time.

ConversionXL suggests that if you begin a test on a Monday, you should try to end it on a Sunday.

Why does this matter?

Many sites see huge differences in user behavior on different days of the week. For example, take a look at the variation in conversion rates in this site’s Google Analytics report, broken down by day of the week.

ab testing google analytics results

Their ecommerce conversion rates dropped from 4.26% on Thursday to 2.43% on Saturday. That’s a drop of more than 40% in just two days!

So if your test only includes particularly high- or low-performing days, your results could easily be skewed.

Plus, there’s a chance that your audience varies a bit by day, too. For example, if you run a B2B service company, you’ll likely generate most of your leads from established companies during the week.

After all, that’s when key decision makers and other employees are likely searching for business-related services.

But on the weekends, you might attract more leads from startup founders or small business owners who run their companies in addition to a full-time job.

These are very different types of leads — and you should run your test long enough to account for both.

But does this mean that the best way to determine a stopping point is to set a target time frame of one or two full weeks?

Nope.

For the best results, I recommend determining a target sample size, or number of users you want to interact with your test, before launching.

This way, you can be sure that a significant portion of your audience has interacted with it.

There are plenty of sample size calculators you can use to figure out an appropriate goal for your site. They’re all fairly similar, and most look something like this one:

ab testing tools 2

First, you’ll need to set your confidence interval. This number represents the margin of error you’re comfortable with for your results.

For example, let’s say that 80% of your visitors select one variant in your test. If you use a confidence interval of 4, this means you can be “sure” that between 76% and 84% of your total visitors will also choose that option.

Next, your confidence level represents how sure you can be that your results will fall within your confidence interval.

So, sticking with that same example, if you set a confidence level of 95%, this means you want to be sure that 95% of the time, 76-84% of your visitors will choose your winning variant.

Finally, you can use the population field for the total group your sample is intended to represent, which is your entire target audience.

If you don’t know how large your audience is, or if your site’s traffic tends to vary, you can leave this field blank. Your total audience’s size generally will not have a significant impact on a test’s accuracy, unless it is extremely small.  

In the above example, I used a 95% confidence level and a 4 for the confidence interval. I left “population” blank.

Based on this information, I’d need a sample size of at least 600 users.
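
If you’re curious about the math behind calculators like that one, most of them use the standard normal-approximation formula for estimating a proportion, with the worst-case assumption that p = 0.5. Here’s a sketch that reproduces the example above (95% confidence level, a margin of error of 4 percentage points, population left blank):

    from math import ceil

    def sample_size(z, margin_of_error, p=0.5):
        """Sample size for estimating a proportion, ignoring population size."""
        return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

    # 95% confidence level -> z of about 1.96; +/- 4 percentage points -> 0.04
    print(sample_size(1.96, 0.04))  # roughly 600 users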

But this doesn’t mean that as soon as 600 visitors see my test, I can just see which variant performed best and implement it on my site.

Although many of your tests will produce a “winner,” you still need to determine whether your results are statistically significant.

If you’re unfamiliar with statistical significance, it’s essentially a way to determine whether you can be confident that your results are an accurate prediction of future results.

In other words, how sure can you be that your lift in conversion rates isn’t just a fluke?

As a caveat, do not attempt to calculate statistical significance before reaching your target sample size.

For example, take a look at these results from a ConversionXL test two days after launch:

ab testing metrics

The test had already reached 237 users, and the variant was performing extremely poorly. It was generating almost 90% fewer conversions, with a zero percent chance of beating the original.

Fortunately, this tester knew that a sample size of 237 was not large enough to reflect the site’s total audience, so they let the test run for another ten days.

1,000 visitors later, the variant that had a zero percent chance of beating the control was generating 25% more conversions, with a 95% chance of beating the original.

So even if they’d taken all of the appropriate steps to measure significance with those early results, their conclusion would’ve been completely wrong.

Waiting to hit your target sample size requires a bit of patience, but it’s worth it to achieve results that you can be confident about.

And once you’ve reached enough visitors with your test, calculating statistical significance is easy.

There are plenty of tools you can use to plug in your data and get an instant answer, like Kissmetrics’ A/B significance test.

Simply enter your total number of visitors and number of conversions for each variation.

ab testing tools

Then, the tool will tell you if your results are statistically significant, as well as your confidence level.

ab testing results

In this case, variation B won with a 97% confidence level. Given that most testers aim for 95%, this is a statistically significant result, and one that the site owner could be confident about using to inform a permanent change.
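
If you’d rather run the numbers yourself than paste them into a web tool, significance for this kind of test usually comes down to a two-proportion z-test. Here’s a minimal sketch; the visitor and conversion counts are placeholders, so substitute your own:

    from math import sqrt, erf

    def significance(visitors_a, conv_a, visitors_b, conv_b):
        """Two-sided two-proportion z-test; returns the z score and p-value."""
        rate_a, rate_b = conv_a / visitors_a, conv_b / visitors_b
        pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
        se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (rate_b - rate_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
        return z, p_value

    # Placeholder numbers: 1,000 visitors per variation
    z, p = significance(1000, 100, 1000, 135)
    print(f"z = {z:.2f}, p = {p:.3f}, significant at 95%: {p < 0.05}")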

It’s important to note, though, that not all of your results will be statistically significant.

In fact, a lot of them probably won’t be.

For example, when Groove wanted to increase clicks on their “Sign Up” button, they decided to run a popular type of test with three different button colors.  

They saw no significant change in conversions.

Then, they tried slightly altering the copy on the button, from “Sign Up,” to “Sign Up Now.”

Some site owners have reported success with this type of change, reasoning that adding a time-related word creates a sense of urgency.

But for Groove?

Inconclusive.

Finally, they tried testing slight variations for their listed monthly price.

Lots of sites have published case studies with positive results from this type of change, since even small differences can make an impact on how users perceive cost and value.

But for Groove?

Again: Inconclusive.

ab testing examples 5

So don’t be surprised if your first few tests don’t produce actionable results that will make a huge impact on your site’s performance.

In fact, it’s best to approach your initial testing process with the mindset that you won’t immediately identify major changes.

According to ConversionXL, “You are doing well if one in five split tests will improve your conversion rates.”

After all, if it were that easy, every site in existence would already be perfectly optimized to move users straight to converting.

When you focus on achieving significant results, your progress in finding new ways to increase your conversions might be slow. But once you do find those changes, you can be confident that they’ll make a real impact for your business.

6. Document or implement your results

Regardless of your results, you have some work to do at the end of each test.

If your test’s results were inconclusive, you might be ready to simply move on to the next one. And while that’s a logical next step, it’s important to take the time to analyze and document your results.

A test isn’t useless just because it doesn’t show you an immediate change you can make to boost your conversion rates.

When you think about it, each one gets you closer to identifying that impactful change. When you test an idea and find that it doesn’t make a significant difference, you can cross that idea off your list and move on to the next.

And when you document the fact you’ve already tested it, you can avoid running a similarly ineffective test in the future.

This is particularly important if there are multiple people who play a role in working on your company’s site. Without documentation, others on your team might not realize that you’ve already tried a specific test.

They might spend valuable time running a similar test — and wind up getting similar results.

Plus, you can learn from each hypothesis you create, and develop better hypotheses as you move forward with the testing process.

That’s why in one study, companies that based changes on historical data were shown to be the most likely to be successful with their tests.

ab testing optimization

That’s because instead of making wild guesses as to what will help their site achieve higher conversion rates, they’re making data-backed decisions to develop their hypotheses.

Then, they can approach A/B tests as a way to confirm or reject the ideas they’re already fairly confident in.

This is a much more effective approach than creating variants for every element you can think of, then using A/B tests to see whether any of them make a difference.

So, how can you document your results to help you improve your testing efforts in the future?

The most important consideration here is what works for you and your team.

Even if you’re the only person running tests on your site right now, that could easily change in the future.

Plus, if you’re reporting to a more senior employee at your company, it’s also in your best interest to be able to illustrate your findings in a clear way.

With that in mind, it’s pretty clear that a spreadsheet of random statistics about controls and variants won’t do you any favors.

Your goal should be to document each of your tests in a way that’s easy for anyone who reads them to understand.

But this doesn’t need to be a complex process.

First, start by taking complete screenshots of each variation. In this example, an outdoor retailer tested two different versions of a promotional offer.

ab testing examples

Next, document the exact change you made, and why you made it.

In this case, the retailer wanted to see if switching from a percentage-based discount to a dollar-based discount would increase conversions without lowering the site’s average order value.

Finally, document how your test impacted your main goal, as well as any other target metrics it affected.

In the outdoor retailer’s test, that looked like this:

Simple, right?

Anyone could read this report in under a minute, and understand the test’s key takeaway.

Beyond this, it’s up to you what you decide is important to include in your reports. For the sake of creating effective future tests, you’d likely want to include:

  • Date range
  • Total traffic
  • Total conversions for each variant
  • Additional metrics impacted

Then, as you continue to run tests on your site, you can archive all of your results in one place.

Much like documenting your individual results, this process can be as simple or as complex as you want it to be.

Most testing platforms, including Google Optimize and Crazy Egg’s A/B testing tool, save all of your test results to an archive.

But if you run tests on multiple platforms, or have multiple users running tests in different accounts, this can quickly get confusing.

Creating one easily-accessible location for results simplifies the process of accessing past tests. This can be as simple as a shared Excel or Google Sheet, like this example from Kissmetrics:

ab testing cta

This sheet includes a hypothesis for each variant, as well as details about the copy, color, size, and shape.

Then, it includes traffic and conversion data, along with whether the test was significant, and any other conclusions the tester drew as a result.

This provides plenty of information to draw from for future tests.

But you can also document your tests using just about any tool you prefer. ConversionXL, for example, documents their results in a Trello board.

ab testing trello

If you’ve already identified an organizational tool that works well for you or your team, you can likely use it to archive your test results.

The bottom line is that you need to choose something that’s convenient — so that when it’s time to create your next test, it’s easy to gain insight from your past results.

Conclusion

A/B tests can be an extremely valuable way to learn about your audience’s preferences and adjust your site to be more effective in helping you reach your most important goals.

Developing and launching a test doesn’t need to be a complicated process, either. So once you have a basic framework in place, you can continue to run tests and make improvements.

After all, no matter how satisfied you are with your site’s performance, there’s almost always room for improvement.

A/B tests can tell you exactly what those improvements are.

When it comes down to it, there’s no reason not to be continuously running tests. Each one will give you a bit more insight into your audience’s preferences, and get you closer to creating a site that’s perfectly tailored to their needs.

How do you use A/B testing to improve your conversion rates?


David Zheng is an alumnus and former contributor to The Daily Egg.
