A/B testing can be as simple as reciting the alphabet.
You take your current web page, called the “control” (A), make a copy of it, and change one element so you have a slightly different design, called the “variant” (B). You divide the traffic between the two, then direct all of it to the version with the best conversion rate.
Okay, so maybe it’s not exactly simple.
Most people struggle with their first few A/B tests. They don’t know which tools to use, how to set up a test, or how to know when it’s done.
If you fall into that category, you’re certainly not alone.
Fortunately, you’re in the right place.
In this article, I’ll show you a free A/B testing tool that’s readily available to every website owner. I’ll walk you through the process of creating your account, launching your first test, and monitoring your results.
I’ll also give you some guidance on designing effective tests, as well as other insights to help you make the most of your efforts to optimize your site for conversions.
Great. Let’s get started.
What is A/B Testing?
Before we get into how to A/B test in digital marketing, it’s important to have a clear idea of what A/B testing (or A/B split testing) is.
To be clear, every site owner should be running A/B tests.
But why are they so important?
The simplest answer is that they’re an effective way to identify possible improvements that can make a major impact on your online success.
A basic A/B test looks something like this:
You create a variation of a page, divide your traffic between the two, and see which is more effective in generating a target conversion (this could be a purchase, a subscription, or whatever goal you want your website visitors to complete).
Then, you use that insight to make the elements on the more effective variation permanent.
The general idea is that by running A/B tests, you’ll determine how to create a page that gets the best results for a specific goal. Then, you can publish it as the page you want viewed by all website visitors.
Simple. But does it work?
Why Should You Run A/B Testing?
The answer to that question depends on how you set up your tests, but it certainly has the potential to.
In one example, ecommerce company WallMonkeys increased conversions by 550 percent by using a combination of A/B testing and heatmapping. With the right goals and approach, A/B testing can have a huge impact on your ability to generate sales and other important conversions.
This is because when you run A/B tests, you have the ability to find out what works for your audience.
If you’ve spent any time browsing digital marketing blogs, you’ve likely seen dozens — if not hundreds — of “best practices” for increasing conversion rates.
Many of these are backed by multiple studies, with site owners who’ve seen success raving about their effectiveness. And in some cases, implementing the same changes on your own site could result in more conversions.
But it isn’t guaranteed.
Your audience is likely made up of an entirely different group of people. So their findings may not apply at all.
For example, adding social proof to forms is generally considered a best practice. The idea is that statistics showing your existing number of subscribers, your customer satisfaction rates, or a certain result your clients have seen can give visitors confidence in your brand.
So when presented with the following options for an email opt-in form, the majority of marketers expected the first variation to perform better.
But when both versions were put through an A/B test, version B produced a 122 percent increase in signups.
It’s impossible to know exactly why this was the case. Maybe visitors were distracted by the extra line of text. Or maybe the promise of “free updates” simply wasn’t compelling.
But that doesn’t matter.
The real takeaway is that with this specific audience, the form without social proof was more effective. This is the kind of insight you can only gain with A/B tests.
Plus, you aren’t just limited to forms and other conversion-specific elements. You can also use A/B tests to determine which design elements your visitors prefer, or which layouts encourage them to spend more time on your site.
For example, in one test, the news publisher McClatchy moved its story photos from the left of the copy to the right.
This seems like a relatively minor change. After all, none of the elements are any different in the variation — they’re simply organized differently on the page.
But the version with right-aligned images saw 20 percent more clicks, 10 percent more traffic, and a higher overall average session duration.
You can also take things further by removing certain elements on your page to see how they impact the way people navigate your pages.
If you think that an element is distracting your visitors from your main calls to action, you can create a variant of the page without that element and see what happens.
For example, when Yuppiechef wanted to increase conversions on a landing page for their online registry service, they removed their navigation bar from the page.
At first, this may sound like a bad idea.
After all, a navigation bar is one of the most important elements of any site. Wouldn’t removing it confuse visitors and drive them away?
Not in this case.
In fact, removing the navigation bar resulted in a 100 percent increase in conversions on the page.
So as you run your tests, you may be surprised by which elements your customers prefer, and what generates the best results for your business.
As it turns out, there’s an entire website dedicated to guessing which variants have won A/B tests — and the reason it’s fun is that it’s really tough to predict the preferences of each brand’s particular target audience.
As you start to see results from your A/B tests, you can be confident that they’re specific to your audience — which is much more helpful than any generic best practice.
Key Things to Test
When it comes to your A/B testing, what you test will depend on the kind of business you run. There are, however, six key elements that all brands should include in their A/B testing.
Headline
Your headline should be engaging and actionable. It should address your audience directly. Other than that, the actual content of your headlines is up to you. Test different versions of your headlines to see which one gets the most click-throughs to the actual page or article.
For instance, CRM software company Highrise saw a 30 percent increase in click-throughs when they changed their signup headline to highlight how quick signing up was.
Call to Action Button
If you’re not seeing much engagement with your call to action buttons, or if you think they could be doing better, test their placement.
Are they too far down the page?
Are they suffering from right rail blindness?
Try different placements to see which gets the most clicks.
You can also change the color, shape, size, and font to see what your audience reacts to. Just make sure you test one change at a time. If you introduce too many variables, it will be difficult to determine which is actually having an impact.
Call to Action Copy
If your CTA click-throughs are still lackluster, it could be the copy you’re using. Your CTA copy should make it very clear to your visitor what you expect them to do. It should be like your headline: catchy and actionable. And it should be brief.
Test variations of your CTA copy to see if it helps boost conversions.
Sales Copy
If you’re not seeing a lot of conversions, it could be that your sales copy is confusing or unapproachable. Use different versions in your A/B testing.
You may even test the presentation of your pricing.
A Finnish event management software company discovered that people were clicking back and forth between their pricing page and their features page. It seemed visitors were not getting a clear enough picture of what each plan included.
They decided to test a pricing page that listed the features of each price point.
The result was a 93.71 percent increase in click-throughs.
Product Descriptions
Your product descriptions make sense to you, but do they make sense to your visitors? Keep all the important details of your product, but test different verbiage to see what your audience reacts to best.
Testimonials
Obviously, you can’t test variations of the copy on your testimonials. But you can test their placement. Testimonials usually appear at the bottom of your sales funnel, on pages where people are almost ready to convert. Make sure yours are in the right place to attract and convert visitors.
Why use Google Optimize to Do A/B Tests?
There are many conversion testing tools on the market, but the best are usually paid and add to your marketing expenses.
Google Optimize is an exception to this rule.
It’s Google’s A/B testing and personalization tool, launched in 2016 and gradually replacing Google Analytics Content Experiments. If you’ve ever used Content Experiments, you’ll find that the interface and capabilities are relatively similar, so switching to Optimize is easy.
And even if you haven’t yet used a Google tool for A/B testing, you’ll likely find the experience extremely easy.
Creating an account is a straightforward process. So is running an A/B or split test.
In this article, we’ll focus specifically on A/B testing — but if you’re interested in running multivariate split tests, you can do that with Google Optimize, too.
Beyond its easy setup, one of the biggest advantages of Optimize is that it integrates easily with Google Analytics. Google Analytics is considered a standard tool for any site owner — so being able to access test data directly in your account is extremely convenient.
As you run your tests, you can easily analyze visitors who see each variation in Analytics, since experiment KPIs tie right into your account.
Experiment Dimensions will show up as secondary dimension options in your reports, which lets you monitor on-site behavior of different visitors based on the variant they see.
On the flip side, you can also use data from Google Analytics to target the right demographic if you choose to use the paid version of the tool, Google Optimize 360.
Google Analytics Audience targeting lets you target your experiments to key visitor segments based on data like number of site visits, previous on-site actions, and location.
If you want to maximize conversions from a specific group within your audience, this is a great way to focus your tests on those visitors. You can create unique offers tailored to that group, then run tests to see how they respond.
Optimize 360 also includes up to 36 combinations for multivariate testing, up to 10 preconfigured experiment objectives, the capability to run over 100 simultaneous experiments, and implementation services.
If you’re an enterprise-level company looking to get serious with A/B testing, this could be the right option for you.
But the free version offers plenty of features for most small to mid-sized businesses. That’s why in this post, we’ll focus solely on the test options available at no charge.
Now, let’s get started.
Here’s how to launch your first test with Google Optimize, in four easy steps.
1. Prep for Your Tests
There are many things you can learn from an A/B test.
You can run one to determine whether you should focus on a single conversion goal or strive for multiple conversion goals. You could run another one to identify which design elements and messaging are most persuasive for your audience.
There are tons of options, and many of them can produce helpful guidance in improving your site.
But no matter what you’re testing, remember to keep your priorities straight. The end goal of any conversion rate optimization, or CRO, process should be to increase your total revenue.
And although conversions and revenue generally correlate, sometimes they don’t.
Imagine you’ve set up an A/B test to choose the best page design for increasing your subscriber rate. You think that changing the CTA copy to emphasize a free resource will generate more subscriptions.
Your subscription rate goes through the roof — but the design somehow hurts your sales rate and results in lower revenue.
This might drive you crazy. But do you keep your winning design?
Always choose the page that will increase your bottom line — not just your conversions. Companies run on revenue, not on conversion rates.
The same holds true in virtually any aspect of your marketing strategy.
In this A/B testing example, a company is trying to determine which keyword produces the best results for a PPC campaign.
Based on those initial results, you’d likely conclude that the first keyword is a better option. After all, 50 percent of the visitors it attracts convert.
Compared to the second keyword’s 25 percent conversion rate, this is a significant jump — and results in half the cost per conversion.
Unfortunately, not all of those conversions from the first keyword translate into sales.
In fact, only 10 percent of them do.
Visitors from the second keyword, on the other hand, have a 50 percent sales rate. So although they’re less likely to complete a basic conversion, they’re higher quality leads overall.
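To make the arithmetic concrete, here’s a quick sketch of the keyword comparison above in Python. The $1.00 cost per click is a hypothetical figure added for illustration; only the conversion and sales rates come from the example.

```python
# Worked version of the keyword comparison above. The $1.00 cost per
# click is hypothetical, just to make the arithmetic concrete.
cost_per_click = 1.00

keywords = {
    "keyword 1": {"conversion_rate": 0.50, "sales_per_conversion": 0.10},
    "keyword 2": {"conversion_rate": 0.25, "sales_per_conversion": 0.50},
}

results = {}
for name, kw in keywords.items():
    cost_per_conversion = cost_per_click / kw["conversion_rate"]
    # Sales as a fraction of ALL visitors, not just of conversions.
    sales_per_visitor = kw["conversion_rate"] * kw["sales_per_conversion"]
    cost_per_sale = cost_per_click / sales_per_visitor
    results[name] = (cost_per_conversion, cost_per_sale)
    print(f"{name}: ${cost_per_conversion:.2f} per conversion, "
          f"${cost_per_sale:.2f} per sale")
```

The first keyword costs half as much per conversion, but the second keyword costs far less per actual sale, which is the number that matters.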
Keep this in mind as you conduct your tests.
Although a huge jump in conversion rates is exciting, you should focus on earning conversions from qualified leads.
This is sometimes more difficult than it sounds.
In fact, in one study, WordStream founder Larry Kim found that an increase in conversion rate typically leads to a decrease in qualified leads.
Of course, this doesn’t mean you can’t increase your sales and revenue with conversion rate optimization. If that were the case, it wouldn’t be such a popular tactic — and we wouldn’t be recommending that you run A/B tests.
It just means that you need to have a clear idea of the kinds of conversions you want to increase, and who your target audience is.
So before you start running tests, determine exactly what it is that you hope to accomplish. This will depend largely on your overall goals.
- For bloggers, a single subscription could be considered a conversion.
- For an ecommerce store, a conversion might be a sale, subscription, newsletter sign-up, add to cart, or even an event click.
The conversion goals you set depend on your business model, but make sure that regardless of what they are, they’re designed to help you reach your bigger-picture goals.
Identifying the conversions you want to increase will make it easier to determine what kinds of changes you should make in your variations. Plus, it will help you decide right from the start what kinds of results you want to see.
Without this information, it can be difficult to identify the winning variation accurately after you’ve finished running a test.
The clearer you are with the goal you want to achieve, the easier it will be to reach that goal.
And if you’re struggling to pick just one, start by listing all of your ideas in a spreadsheet.
Determine which would have the biggest impact on your business’s goals, and start there.
As you start running A/B tests regularly, this will be a helpful resource to have on hand. But for now, pick one starting point, then move on to step two.
2. Create an Account With Google Optimize
The first step to running tests with Google Optimize is creating an account.
You can do this by logging into your Google Analytics account, then navigating to Behavior > Experiments.
This is where Content Experiments used to be, but Google is phasing it out in favor of Optimize. You can sign up for Optimize by clicking the “Learn More” link.
Alternatively, you can navigate directly to Google Optimize and click “Sign up for Free.”
Then, click “Create Account” and give your account a name. This can either be your company’s name, or your own name.
Then, you’ll need to create a container. Use your domain name to keep things simple.
Once you’ve created your account and container, you’ll see the Experiments view. This is where all of your future experiments will be kept.
To launch your first experiment, click the blue button on the top right. Then, you’ll see a list of options.
According to this checklist, your next step is creating an experiment. If you want to start testing as quickly as possible, this makes sense.
But for the sake of having your account fully set up before you start launching tests, I recommend linking your Google Analytics account first. So for now, skip over step two, and click “Link to Google Analytics.”
Then, click “Link Property.”
Make sure you’re logged into the same Google account you use for Google Analytics, then select the property you want to link from the drop-down list and the view you want to link.
Then, you’ll be prompted to add an Optimize snippet to your site. This snippet is what allows Optimize to implement your variations and collect your results.
This step is essential, but the way you do it depends on how you currently have your Google Analytics tracking set up.
If you manually entered your Google Analytics tracking code into your header (or another part of your site), all you need to do is copy and paste a new line of code into it.
Optimize will show the exact line you need, as well as where it should go in your Google Analytics tracking code.
Copy the line that Optimize provides in step two on this screen, then paste it exactly where it’s shown in your existing code in step three.
This is the most common way to install Google Analytics, and adding the additional line is simple.
If you installed Google Analytics tracking with a WordPress plugin like MonsterInsights, though, you’ll need to enable Optimize in that plugin.
Navigate to the plugin’s settings, then find the section for Google Optimize.
From there, you’ll be prompted to enter your new tracking information.
Unfortunately, enabling Optimize is a paid feature for some tracking plugins. If this is the case for the plugin you’re currently using, you can either upgrade to the paid version, or switch to a manual installation of Google Analytics.
Finally, if you prefer to use Google Tag Manager, you can follow Google’s instructions for configuring the Optimize tag.
After you’ve deployed Optimize, you’ll be prompted to minimize page flickering.
This is an optional step, but can greatly improve user experience as you run your tests.
Page flickering is when a visitor quickly sees the original page before the variant appears. So for example, if your homepage header is currently blue, and you want to test a green variation, it might look like this when the page loads for a visitor:
This can be annoying to visitors — and it’s easily preventable.
If you manually added the Optimize snippet to your site, simply copy the provided code from Optimize, and paste it just before your Google Analytics tracking code.
And if you used a plugin to deploy Optimize, enabling this feature is typically as simple as checking a box.
This quick step may seem insignificant, but eliminating flicker provides a better user experience and helps you maintain a seamless feel throughout your site.
3. Create an Experiment
Next, it’s time to create your first experiment!
Click the “Create Experiment” button, enter the URL of the page you want to test, and select the type of experiment you want to run.
Here, you’ll have the option to run either an A/B test, a multivariate test, or a redirect test.
The first option is an A/B test, which involves changing one element on each page. This makes it easy to identify exactly what causes your results.
A multivariate test has the same general idea as an A/B test, but involves changing multiple sections on a page.
This can be helpful if you want to test drastically different versions of a page, but also makes it more difficult to identify which elements are responsible for how your visitors respond.
Finally, a redirect test involves testing two entirely separate pages against one another.
These are all useful options for gaining insight into your site’s performance. But in this article, I’ll be sticking with an A/B test — and if you’re new to CRO testing, I recommend you do the same.
Once you select the A/B test option, you’ll see your original page listed as getting 100 percent of the URL’s traffic.
Click “Create Variant” to alter that page for your first variation.
In order to do this, you’ll be prompted to install Google’s Optimize extension for Chrome.
Once you complete the installation process, the page you want to test will open with Google Optimize’s visual editor.
From here, editing your variants is fairly simple. Click on the element you want to alter, click “Edit Element,” and make the changes you want.
For example, if you wanted to test a different banner image on your homepage, you’d select that image, click “Edit Element,” then “Edit HTML.”
Then, swap out the current image’s URL for the URL of your variant image.
You can use the same process to edit your copy, calls to action, button colors, and virtually anything else you want to test.
Once you’ve completed the variant, save your changes and return to Google Optimize. Repeat this process for any other variations you want to test.
Then, you’ll need to set your objectives for the test. Scroll down to the Configuration section and select “Add Experiment Objective.”
You can either create a custom objective, or select one from a pre-set list.
By default, your goal options include reducing bounces, increasing pageviews, and increasing session duration. Any custom goals you’ve set in Google Analytics will also appear in this list.
If you’ve already set up goals for your most important conversions, like email signups or form submissions, these are typically the best options to use since they have a more direct impact on your success than bounces and pageviews.
And if you haven’t yet set up goals in Google Analytics, go ahead and launch your first test using one of the pre-set goal options. But make it a priority to set up custom goals as soon as possible, and you’ll get much more actionable data from your tests.
In the standard version, you can set one primary objective and two secondary objectives for each test you run. For most people, this is plenty.
After you’ve finished setting up your goals, switch to the Targeting tab to determine which of your visitors you want to test your variations on. This will control how many (and which) of your visitors will see one of your test pages as opposed to your original page.
The default settings target all of your visitors, with an even split between each of your variations.
For quick results, you may want to include a high percentage of visitors in the experiment. However, if your experiment is rather drastic or risky, and you’re worried that it might negatively impact your results, it’s best to run your test with a small portion of your traffic only.
You can change these percentages by clicking the “Edit” button next to the “Weighting of visitors to target” section.
This is all of the information that Optimize needs to start running a basic test.
But if you want more control over which of your visitors see your variants, you can also use advanced targeting rules to run your tests only with specific segments of your audience.
Optimize 360 users can also use the audience targeting feature to focus on specific audiences they’ve already created in Google Analytics.
Once you’ve finished creating your audience, select “Start Experiment” to launch your test.
Congrats! Your first Google Optimize A/B test is up and running.
Let it do just that for a while — then come back to step four to see your results.
4. Analyze Your Results
Once your test has adequate time to accumulate data (we’ll discuss what counts as “adequate time” in the next section), navigate to the Reporting tab in Optimize to see your results. Your results will be broken down into “cards” with data about how your variants performed.
First, you’ll see the Summary Card. This is exactly what it sounds like: An overview of your test and its results, based on your primary objective.
In the improvement column, you’ll see the difference in the modeled conversion rate between your winning variant and your original page for your primary objective.
It’s important to note that this number doesn’t necessarily reflect the actual conversion rates seen during the test, but a hypothesis based on the data collected.
Next, the Probability to Be Best column shows the probability your winning variant consistently outperforms all others. The higher this number, the more confident you can be in your changes.
Finally, the Probability to Beat Baseline column shows the probability that a variant will get better results for a target objective than the original version. This percentage starts at 50 percent, to account for chance, and increases (or decreases) as both pages accumulate results.
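Optimize computes these probabilities with a Bayesian model. Google doesn’t publish the exact model, but the core idea can be sketched with Beta posteriors and Monte Carlo sampling. The conversion counts below are hypothetical, and this is a rough illustration of the concept, not Google’s implementation.

```python
import random

random.seed(42)

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=20000):
    """Monte Carlo estimate of P(rate_B > rate_A), assuming each page's
    conversion rate has a Beta posterior under a uniform Beta(1, 1) prior."""
    wins = 0
    for _ in range(draws):
        rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# Hypothetical results: 10/1000 conversions on A vs. 30/1000 on B.
print(prob_b_beats_a(10, 1000, 30, 1000))
```

With a gap that large, the probability lands very close to 1; with identical results on both pages, it hovers around 0.5, which is why the report starts every variant at 50 percent.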
At the bottom of your summary card, you’ll see an experiment sessions graph that shows any session where the experiment executed. Subsequent sessions from those visitors are also included, even ones in which they did not see the experiment, so conversions that occur after a visitor joins the experiment are still counted.
The next card in the reporting tab is the Improvement overview. This report gets into more detail regarding each variant’s performance for your target objectives.
It also gives a simple, at-a-glance look at your results with color-coded metrics: Green values represent significant improvement, while red values represent significant decreases in performance.
From here, you can sort your results by objective to see where your variants stack up.
This is where it becomes extremely important to have a clear idea of what you want to accomplish. When your variants have different levels of improvement for different metrics, it can be difficult to identify the best one.
But when you focus on your primary objective, it’s easy to pick a winner. Then, you can use the other information in the report to gain additional insight into your audience’s preferences and behavior.
The third card of results contains your Objective detail report, which shows each variant’s performance for a specific objective.
Select the objective you want to focus on from the drop-down menu in the top left, then select the variants you want to compare. The chart at the bottom of the card will show each variant’s performance over the course of your experiment.
The colored sections in the graph show the performance range that each variant is likely to achieve 95 percent of the time, and the line in the middle of each range shows its median value.
In most cases, you’ll notice that the intervals in the graph narrow over time. This is because at the start of any experiment, there’s greater uncertainty of each variant’s performance.
When only three visitors have seen a variant, for example, each visitor’s response has a much greater impact. So the more data you collect, the more accurate a picture you’ll get of each variant’s performance.
As these intervals narrow and your results become clearer, you can also use your data to make more accurate predictions of how each variant will perform in the future.
Beyond these reports in Optimize, you can dig deeper into your results with Google Analytics. Each visit from Optimize is sent to Google Analytics with an experiment name, ID, and variant number.
This gives you more insight into how visitors who see certain variations behave on your site beyond your initial objectives.
8 Tips to Help You Make the Most of Google Optimize
Google Optimize can be a powerful tool for analyzing user behavior and improving your site’s conversion rates.
But in order to get valuable results, you’ll need to run your tests efficiently and effectively.
Some of this simply comes through experience with Google A/B testing software — but you can jump-start the process by following these eight tips.
1. Let your tests run long enough to collect significant results
First, it’s essential to let each of your experiments run for a long enough period of time to collect a sufficient amount of data.
You may be in a rush to launch a winning variation as quickly as possible. After all, this seems like the fastest way to boost your conversion rates and start seeing the results you want.
But as tempting as it is to start implementing changes as soon as the results start rolling in, it’s important to let your tests collect statistically significant results.
Once you launch a test, you can start to see data almost immediately.
On this screen, Google recommends running experiments for at least two weeks. But treat this guideline as an absolute minimum: the longer you run a test, the more accurate your results will be.
It’s common to see quick increases in conversion rates shortly after launching a new test.
This is because with a small sample size, even one or two conversions can look like they make a major impact.
For example, the following graph shows conversions on two versions of a landing page, where the gray line represents the original page and the blue line represents the variant.
In those first few days, it looks like the new variant is drastically outperforming the original page.
But as the test continued to collect data, the conversion rates converged until they averaged out to be almost exactly the same.
This means that making changes based on those first few days’ results would be a waste of time.
Unfortunately, this phenomenon is extremely common — and many site owners make the mistake of implementing changes based on those early results.
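You can reproduce this effect with a simulation: give two “pages” the exact same true conversion rate and watch their cumulative rates wander apart early, then converge as visitors accumulate. The 10 percent rate and visitor counts below are arbitrary choices for the sketch.

```python
import random

random.seed(7)

TRUE_RATE = 0.10  # both pages convert identically; any gap is pure noise

def cumulative_rates(visitors):
    """Cumulative observed conversion rate after each simulated visitor."""
    conversions = 0
    rates = []
    for i in range(1, visitors + 1):
        conversions += random.random() < TRUE_RATE
        rates.append(conversions / i)
    return rates

a = cumulative_rates(5000)
b = cumulative_rates(5000)

for n in (50, 500, 5000):
    print(f"after {n:>4} visitors: A={a[n-1]:.3f}  B={b[n-1]:.3f}")
```

Early snapshots can make one identical page look like a clear winner; by a few thousand visitors, both observed rates settle near the true 10 percent.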
In a similar example, ConversionXL ran an A/B test for an ecommerce client for a total of 35 days. During that time, they collected almost 3,000 transactions per variation.
Here’s what their results looked like:
In the first few days, Variation 3 was winning by a lot. In terms of sales, it was generating about $16 per visitor, while the control version was only earning $12.50 per visitor.
Even after a week, this was still the case. But by week two, Variation 3 had been passed by Variation 4 — and by the end of the month, there was almost no difference between any of them.
So, again: Calling this test too early would’ve driven this site owner to make a change that didn’t ultimately improve their conversions.
Of course, this isn’t always the case — but it’s a common enough scenario that it’s worth getting into the habit of letting your tests collect a significant amount of data before you take action.
The smaller your sample size, the more power each visitor has to impact your results.
For example, let’s say you run a test with two variants and each one gets 50 visitors. If your traffic numbers are normally relatively low, this might seem like a decent sample size.
So you look at your results: five visitors converted on Version A, and seven converted on Version B.
This means that Version A had a 10 percent conversion rate, while Version B achieved a 14 percent conversion rate.
So if Version A was your original page, implementing Version B would result in 40 percent more conversions!
Except that it probably wouldn’t.
Because this statistic was determined by just two visitors.
So it’s important to remember that when you first launch a test, even one visitor who behaves irregularly can skew your data.
This can lead you to draw inaccurate conclusions about your tests.
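One way to sanity-check numbers like these is a standard two-proportion z-test. It’s not what Optimize uses internally, but it makes the point for the 5-versus-7 example above.

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates,
    using a pooled two-proportion z-test with a normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return math.erfc(z / math.sqrt(2))  # two-sided tail probability

# The 50-visitor example above: 5/50 vs. 7/50 conversions.
print(two_proportion_p_value(5, 50, 7, 50))
```

The p-value comes out above 0.5: a difference this small, on a sample this small, is entirely consistent with random chance, nowhere near the conventional 0.05 threshold.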
For example, in one ConversionXL experiment, they saw the following results two days after launching the test.
The first variation was losing to the control version of the page by 89 percent, with a 0 percent chance of beating the original.
This was based on a sample size of just over 200 people, which some site owners might consider a large sample. And if ConversionXL had been satisfied with that number, these results would have indicated that they should scrap the variant and stick with the original.
But 10 days later, with an additional thousand visitors in the sample, that variation was winning with 95 percent confidence.
Of course, not all of your tests will show such drastic changes after the first few visitors.
But what’s important to remember is that they could — and when it comes to conversion rate optimization, effectiveness is much more important than speed.
It’s also important to consider the role of chance in your results.
In this example, one version of a page generates a 31.3 percent conversion rate, while another variant generates a conversion rate of 40.7 percent.
Does that mean that those additional conversions can all be attributed to the differences on the page, like new buttons and bold copy?
With a relatively small sample size of 236 visitors, there could be tons of other factors at play. Maybe more of the visitors who saw the second variant were already planning to convert. Or maybe more of them were simply in a better mood.
Either way, this sample size isn’t enough to warrant making a permanent page update.
So how do you know when to stop?
Answers to this question vary depending on who you ask, but ConversionXL recommends that you run your tests until:
- They’ve had at least three weeks to collect data (but preferably four)
- You’ve reached your pre-calculated sample size
- You’ve achieved statistical significance of at least 95 percent
It’s important to note that they recommend you run your tests until all of these conditions are met — not just until whichever comes first.
And still, many experienced site testers have their own ways of determining whether a test is complete.
Ton Wesseling, founder of Testing Agency, for example, recommends that site owners test for “one purchase cycle”:
More traffic means you have a higher chance of recognizing your winner on the significance level you’re testing on! … Small changes can make a big impact, but big impacts don’t happen too often – … so you need much data to be able to notice a significant winner.
BUT – if your test lasts and lasts, people tend to delete their cookies, … so when they return in your test, they can end up in the wrong variation. So, when the weeks pass, your samples pollute more and more… and will end up having the same conversion rates. Test for a maximum of 4 weeks.
But Peep Laja, Principal at CXL Institute, says he would “not believe any test that has less than 250 to 400 conversions per variation.”
So what should you do if you’ve run your test for four weeks, but still haven’t hit your target sample size?
In general, it’s best to collect more data rather than less — so continue running your test.
And make sure to collect sufficient data before calculating statistical significance.
If you’re unfamiliar with the term, it means that you’ve achieved results that can’t be attributed to chance.
Factors like sampling errors and probability can throw off your results. Achieving statistical significance means (in theory) that you can be confident in the fact that your data isn’t being skewed by those factors.
Reliably determining whether you can attribute improvements to specific changes can be challenging, and calculating statistical significance is meant to give testers that confidence.
In Content Experiments, the previous version of Google Analytics’ A/B testing tool, you could set a “confidence threshold” to determine the minimum confidence level that had to be achieved before a winner could be declared.
The higher the threshold, the more confident you could be that the winning page would outperform other variations in the future.
Unfortunately, this feature didn’t carry over to Optimize. So you’ll need to do a little more work to determine if and when you can be confident in an experiment’s results.
You can start by deciding when to analyze a test’s results before you even launch it.
Some site owners do that by selecting a sample size or number of conversions, then stopping their test as soon as they hit that number. And while this can be a good way to make sure you don’t end your tests too soon, that target sample size is ultimately just a guess.
Instead, you can use tools like a sample size calculator to determine an effective sample size goal.
Start by entering the conversion rate of your existing page as a baseline, then set the minimum improvement you want to see. This will tell you exactly how many visitors you should aim to have for each variation before drawing any conclusions.
It’s important to note, though, that this calculator is based on the idea that the higher your expected uplift, the smaller the sample size you need. So if you reach your target sample size but don’t see that target uplift, your results are not statistically significant.
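If you want to see what such a calculator is doing, the standard normal-approximation formula behind most of them is easy to sketch. The code below is a rough illustration only; the 95 percent significance level and 80 percent power defaults are common conventions, not values any specific calculator is guaranteed to use:

```python
import math
from statistics import NormalDist

def required_sample_size(baseline_rate, min_relative_uplift,
                         alpha=0.05, power=0.80):
    """Approximate visitors needed per variation to detect the given
    relative uplift with a two-sided two-proportion test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# A 10 percent baseline rate, hoping to detect a 20 percent relative lift:
# the answer runs to several thousand visitors per variation.
print(required_sample_size(0.10, 0.20))
```

Notice how quickly the required sample grows as the uplift you want to detect shrinks — which is exactly the calculator behavior described above.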
Once you’ve hit your target sample size, you can use tools like this Kissmetrics A/B significance test to determine whether your results have statistical significance.
Enter the number of visitors and conversions for each variation, and you’ll see a simple report with your results.
If you see a confidence rate of at least 95 percent, you can be fairly sure that your results were not due to chance — as long as you hit your target sample size before running this calculation.
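Under the hood, calculators like this typically run a two-proportion z-test. Here’s a minimal sketch of that calculation (a normal approximation; the exact method a given calculator uses may differ), applied to the hypothetical 50-visitor example from earlier:

```python
from statistics import NormalDist

def ab_confidence(visitors_a, conv_a, visitors_b, conv_b):
    """Confidence (1 - p-value) from a two-sided two-proportion z-test."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return 1 - p_value

# 50 visitors per variant, 5 vs. 7 conversions: nowhere near 95 percent.
print(f"Confidence: {ab_confidence(50, 5, 50, 7):.0%}")
```

With the earlier 50-visitor example, this returns well under 95 percent confidence — the quantitative version of the warning above: an apparent 40 percent lift built on two extra conversions is statistically meaningless.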
To illustrate why this is so important, I’ll refer back to that original ConversionXL experiment that showed the new variant losing to the original page by 89 percent.
The entire point of calculating statistical significance is to arrive at definitive conclusions. So entering that data into a statistical significance calculator would’ve shown that there was a margin of error, right?
Here’s what the results looked like when plugged into a significance calculator:
This shows an improvement of over 800 percent, with 100 percent certainty.
Of course, we already know that the “100 percent certainty” part is wrong — but without that knowledge, these results would look pretty convincing.
In fact, some site owners would stop the test right here. They’d implement the winning variation, and maybe even use their new “insight” to change dozens of other pages on their site.
Not only would this fail to deliver a significant improvement in conversion rates, it would also be a huge waste of time.
So before you calculate statistical significance, make sure you’ve collected enough data that your results can actually be significant.
And as a caveat, even if you do reach a large sample size and statistically significant results within a few days, it’s important to run your tests for at least a full week.
In fact, it’s best to run your tests in intervals of full weeks. So, for example, if your test starts on a Friday, end it on a Thursday.
It’s extremely common to see fluctuation in traffic and user behavior over different days of the week. This is especially true when comparing weekdays to weekends, but it’s often unpredictable.
For example, in this report, Thursday’s 4.26 percent conversion rate is drastically higher than Saturday’s 2.43 percent.
If this site owner had run a test from Sunday through Wednesday, their results would be very different than if they’d run it Tuesday through Saturday.
These variations can be attributed to any number of reasons.
For example, if you run a restaurant, someone might visit your site during the week as they’re planning their weekend. They’ll check out your menu and hours, then leave.
Then, they might return to your site the following Saturday to make online reservations.
If several of your visitors follow this same process, this could result in a huge difference in conversion rates — and lead you to believe that your tests aren’t producing the results you want.
So as you run your A/B tests, it’s important to be patient. Accumulate a large enough sample size, aim to achieve statistical significance, and be careful not to collect skewed data.
Claire Peña, Growth Marketing Manager at Splunk, warns, “Data analysis and testing is empowering because it takes out bias, which is natural to human nature. But if you rush the tests with too low volume, it’s all for nothing.”
Letting your tests just run requires patience, but is more than worth it when you’re able to make changes that have a real impact on your business.
2. Test Your Navigation
Your navigation is one of the most important parts of your site. It plays a major role in how visitors make their way through your pages and interact with your content.
So even though you might think that your navigation is intuitive and user-friendly, it’s a good idea to run some tests to make sure that’s really the case.
Even minor issues can throw your visitors off and prevent them from moving through your site, which can negatively impact your conversion rates in a big way.
Within your navigation, there are several different items you can test. But one element that can make a huge difference is the copy you use in your menus. Your visitors should know exactly what to expect when they click a link.
If they don’t, they could be confused when they land on a page that isn’t what they were expecting. And even worse, they may never find the correct page for their needs, simply because the menu copy didn’t include what they were looking for.
For example, online form builder Formstack used to have an item titled “Why Use Us” in their main navigation. Then, they created a variation that linked to the same page, but with the copy “How it Works.”
This seems like a relatively inconsequential change, right?
Changing these three small words increased page traffic by 50 percent and led to 8 percent more free trial signups.
Adopting the same language that your audience uses can go a long way in helping them find the content that they want.
You can also try adjusting your navigation to reflect the way that people look for information.
For example, Bizztravel Wintersport specializes in skiing holiday packages in the Alps. After digging into their Google Analytics data, they found that visitors most often use the site search feature to look for ski village names.
They also found that only 23 percent of their visitors arrived on their homepage, and the rest had to navigate by country, then by region, then by village. This meant it was taking an average of five clicks before someone arrived at the ski village they were looking for.
It was clear that the navigation system did not reflect the way visitors preferred to find information for planning a vacation. Their original navigation and region page looked like this:
So even if a visitor arrived on the site knowing exactly which ski village they wanted to visit, the navigation forced them to first select a region. If they didn’t know the region, they’d need to guess — then repeat the process until they guessed correctly.
To address this issue, the company restructured their navigation menu to include direct links to ski villages, as well as ski village recommendations and country flag buttons that let visitors browse villages by country.
They also removed links with general company information from the main navigation menu. The new menu looked like this:
As a result, they achieved a 21.34 percent increase in goal completions with a test result confidence level of 97 percent.
The new navigation helped visitors find exactly what they were looking for, because it was designed with their browsing habits in mind.
So if you’re looking for an impactful way to improve your site’s performance, your navigation could be a great place to focus your efforts.
And to take things a step further, you can also consider the actions a visitor takes beyond their initial menu selection. In fact, you can use A/B testing to improve your entire user flow.
For example, a basic user flow typically looks like this:
Essentially, your user flow describes how you want visitors to move through your pages. Most sites have multiple goal flows, with each leading to a specific conversion.
In the example above, this hypothetical site owner has come up with three possible paths a visitor could take on their site. Although there’s some overlap between certain steps, they’re all designed to function independently of one another.
But in some cases, visitors need to make it through multiple flows before they make the main conversion.
This is called a stacked user flow, and is particularly important for companies selling high-value products.
After all, you can’t expect a visitor to spend thousands of dollars during their first interaction with your brand — so a simple, three-step process resulting in a sale is unrealistic.
But it’s not unrealistic to expect a first-time visitor to sign up for your email list. And after receiving several relevant emails, that visitor will be more likely to trust your company enough to make a purchase.
Of course, these are only two examples of possible user flows. Your company’s ideal flow depends on your industry, business model, and audience.
So in order to design an effective model, you need to know what problems your visitors want to solve, what they need in order to solve those problems, which features of your products are most important to them, and what their doubts and hesitations are about purchasing.
Then, you can design your flow to answer all of their questions and highlight the ways that your company can help them reach their goals. But after you’re satisfied with what you’ve created, it’s still mostly hypothetical.
Just because you want your visitors to follow a specific path to conversion doesn’t mean they will.
Fortunately, once you’ve created your ideal path, you can use A/B tests to drive your visitors to follow it.
The method you use to do this is the same as what you’d do to run any other A/B test on your site. But instead of focusing on driving high-level conversions, you’ll look for ways to guide visitors to the next step in your user flow.
So, for example, part of your ideal flow might involve moving visitors from a landing page to a specific product page. To make this happen, you could run an A/B test on that landing page with the goal of increasing clicks on a link to that product. Then, you could create similar tests for the other important pages in your user flow.
Many of the goals you optimize for in these tests may seem insignificant. For example, increasing clicks to a specific informational page won’t show the immediate revenue increase that increasing clicks on a conversion button can.
But these small changes can help you move visitors through your site in a way that helps them evolve from first-time visitors to qualified leads to valuable customers.
And if you’re looking to attract long-term clients for your business, this might be even better than an immediate increase in conversions.
3. Utilize Secondary Objectives
Though your primary objective may be the reason you’re running the A/B test in the first place, setting secondary objectives will help you gain even more actionable insight.
For example, if you run an ecommerce site, your primary goal with a test might be to get visitors to add a specific product to their cart. In order to achieve that, you could alter the location of different buttons and calls to action on the page.
And although you’ll focus on the changes in conversion rate for that primary goal, it’s unlikely that this will be the only metric that’s affected by your changes.
Take advantage of your ability to add multiple secondary objectives in Google Optimize to keep track of these additional impacts. If you already have other custom goals set up, this is your best option.
You can use this feature to monitor smaller conversions like loyalty program signups or product feature views.
If you don’t yet have many conversion goals set up for your site, you can also use the default objectives like bounce rate and pageviews. There’s no reason not to collect this data when you have the ability to — and looking over it can help you get a more well-rounded understanding of the impact each change makes.
4. Test Before Launching a Redesign
There are many reasons you might decide to completely redesign your site.
Maybe your company is rebranding. Maybe usability issues are impacting how visitors engage with your content. Or maybe it’s simply been a few years since you first launched your site, and your design now feels dated.
These are all valid reasons.
Unfortunately, no matter how solid your reasoning, launching a redesign can wreak havoc on your conversion rates.
As an extreme example, take a look at what happened to Digg after launching a redesigned site. Although this is an old case study, it’s still useful as a cautionary tale.
As one of the original social bookmarking sites, Digg was an extremely popular site for sharing and discussing online content.
But as social media sites like Facebook and Twitter gained in popularity, Digg decided to launch a redesign with a focus on social networking. The new design highlighted content shared by the people a user followed, instead of content that received the most upvotes from strangers.
It’s fairly safe to assume that Digg did not test this new design with their followers — because in the month following the launch of their redesign, traffic fell 26 percent.
To be fair, this was a more drastic change than you might be planning to make with your site.
Still, even making minor changes to elements like your navigation can have a major impact.
People who visit your site often are used to your current setup. They know exactly what to click in your navigation, where to scroll on a page to find the information they need, and how to perform key conversions.
No matter how great your new design, changing things up will throw these returning visitors off.
But if you focus on usability from the start, and test your variations before choosing a final design, you can select the version that’s most intuitive and user-friendly.
5. Link Your Optimize Account with AdWords
One of the biggest advantages of running A/B tests with a Google-owned tool is how easy it is to integrate data from other Google properties.
If you run PPC campaigns with AdWords, you can link your accounts to improve your targeting and run tests tailored to specific campaigns.
Though this can help you run more effective tests in a variety of ways, its most valuable use is for improving landing pages. Each time a visitor clicks one of your PPC ads, you spend part of your advertising budget to bring them to your site.
That means you need the landing page they arrive on to be as effective as possible at driving conversions.
The easiest way to see how effective your landing pages currently are is with the “Landing Pages” page in AdWords. This report shows performance metrics like clicks and conversions for each landing page, as well as whether each one is mobile-friendly.
Use this report to identify which landing pages on your site are generating the conversion rates you want — and which aren’t.
This alone might help you uncover potential changes. For example, if your landing pages with images outperform those without across the board, that could be an easy fix.
But in most cases, it’s not that simple.
People who arrive on your site from PPC campaigns are typically looking for specific information, and a page that’s designed to meet their needs.
But if you’ve ever attempted to create campaign-specific landing pages, you know that figuring out exactly what people want is challenging.
That’s where the AdWords-Optimize integration is extremely helpful. You can create landing page variations, then test them based on keywords, ad groups, and campaigns.
For example, let’s say you run a hotel and you want to generate more conversions from your ad set targeting the keyword “family-friendly hotels.”
You can use Google Optimize to create a variation of your landing page that replaces your standard hotel photo with a photo of a family at your hotel’s pool. Then, you can test that variation against your original page with visitors who arrive on your site from that campaign.
If this variation generates more conversions, you could implement it permanently as the landing page for that campaign. Then, you can continue testing variations of that page with new copy and forms to find out exactly what appeals to that subset of your audience.
6. Test Both Micro and Macro Conversions
Not every page on your site is designed to generate sales. And even on the pages that are, not all of your visitors will be ready to convert during their first visit.
So if you’re only focusing on those major conversions, it may look like your tests are failing. Fortunately, that may not be the case, and you can get a more nuanced view of your visitors’ behavior by including both micro and macro conversions in your tests.
If you’re unfamiliar with the terms, macro conversions are the most important actions a visitor can take on your site. If you run an ecommerce store, for example, that would be making a purchase. If you run a non-profit, it would be making a donation.
But before most of your visitors are ready to take those larger actions, they’ll take a series of smaller steps, like reading reviews, expanding product details, and adding items to their carts.
These are all micro conversions.
When set up correctly, the micro conversions a visitor can make on your site should get them closer to performing a macro conversion.
For example, a visitor isn’t going to purchase a high-value product without a full understanding of its features and confidence in its quality.
Amazon does a great job of presenting all of this information in a straightforward, user-friendly way.
When someone lands on a product page, they aren’t forced to buy the product immediately. They can if they want to — but they can also read reviews, watch videos, browse answers to common questions, and learn about special sales and promotions.
Each of these actions will get them closer to being ready to buy the product — or at least they should.
If you’re unsure whether your micro conversions are effective in driving visitors to make macro conversions, A/B tests can be a great way to get a concrete answer.
Test variants with different versions of a micro conversion’s calls to action and buttons and set that micro conversion as the primary objective. Then, set the macro conversions on your site as secondary objectives.
If you see a strong correlation, this is a good sign that your sales funnel is working as it should.
If you don’t, this is a sign that you may need to re-think your micro conversions.
Work backward from your main conversion goals, and figure out what people need to do before they take those steps. Then, look for ways to encourage those smaller actions.
Create variants, launch tests, and see which elements are most effective at generating the micro conversions that drive macro conversions for your business.
In many cases, these will be small changes — but they can add up quickly and have a positive impact in driving the results you want.
7. Consider How Many Conversion Goals You Should Have
In the previous tip, I mentioned the importance of having several micro conversion goals throughout your site.
I stand by that.
But it’s also important to make sure you don’t have too many conversion goals vying for visitors’ attention.
The more calls to action you have on a page, the more options each visitor has to weigh. And in many cases, too many choices can lead to the visitor making no choice at all.
You can eliminate that indecision by consolidating similar conversions. For example, when Whirlpool wanted to increase click-throughs from its email campaigns, they consolidated four CTAs into one.
As a result, they improved their click-through rate by 42 percent.
Along those same lines, retailer nameOn wanted to redesign their checkout page to reduce cart abandonment.
Their original page featured several unnecessary details like an email opt-in form and buttons linking to product pages.
They simplified the page by removing all buttons except for one that showed more information about a welcome bonus, and one that let the visitor “Continue to checkout.”
As a result, they increased completed checkouts by 11.4 percent.
In this case, less was definitely more — and that’s a principle that holds true across many aspects of conversion rate optimization.
In another example, SEO company TheHOTH wanted to maximize conversions on their homepage.
The original version featured a video, a signup form, customer logos, client testimonials, and everything else you’d expect from a reputable agency.
And while they were earning a steady flow of traffic, not much of that traffic was translating to conversions or sales. So they decided to create an extremely minimal variant with nothing but the signup form.
This extremely basic design drove their conversion rate from 1.39 percent to 13.13 percent. You might need to re-read those numbers — but they’re not a typo.
In this case, eliminating all distractions pushed visitors to take the only action they could.
At Crazy Egg, we’ve run A/B tests on our homepage for more than ten years to make sure we’re encouraging the maximum number of free trial signups. We’ve consistently found that cutting the copy down to home in on just one CTA performs best in terms of conversion rate.
As you can see, our homepage has no header and you can only see additional information about our website optimization tools if you click the small “Not ready to get started?” link under the Show Me My Heatmap button.
So if you’re looking for a way to increase conversions on a high-value page, achieving your target conversion rate could be as simple as eliminating the other elements on that page.
8. Use Customer Insight to Design Your Tests
A/B testing can help you uncover which variations of a page get the best responses from your target audience. But creating those variations in the first place shouldn’t involve guesswork.
Sure, you might base simple choices like button colors on aesthetic preferences — but beyond that, you should aim to create your variants based on what your customers actually want.
For example, when Groove wanted to boost conversions on a landing page, they started with an extensive customer survey. They talked with their existing customers to get a better idea of the language they used when describing Groove’s services.
Then, they set up an autoresponder email to new customers asking them why they decided to sign up.
When they designed the new landing page, they incorporated their findings into the copy. So instead of explaining their product from the company’s standpoint, they used language that illustrated its value according to actual customers.
They no longer described their service as “SaaS and eCommerce Customer Support,” but as “Everything you need to deliver awesome, personal support to every customer.”
As a result, their conversion rate increased from 2.3 percent to 4.3 percent.
It’s important to remember that your customers’ priorities may be slightly different from your own. So even if you offer exactly what they need, it might not come across in your copy.
Spend some time talking to your customers and figuring out what really matters to them, then feature that information prominently on your conversion-focused pages.
Beyond copy, you can also take user experience into account by looking into your visitors’ on-site behavior.
For example, when Nurse.com wanted to improve their conversion rate on a landing page about continuing education, they ran a heatmap test to see where people were clicking.
Their main conversion buttons were receiving a decent chunk of the clicks.
Unfortunately, less-important links were getting more clicks, and some visitors were clicking on non-clickable elements, as well.
This showed that the page wasn’t as effective as it could be at driving visitors to make that main conversion.
To address this issue, they created a variation that eliminated the distracting links, consolidated the two main conversions into one button, and moved the most important information to an area of the page that was already getting a lot of attention.
As a result, visitors were directed straight to the most important conversion — and the variant generated 15.7 percent more sales.
If you want to improve a specific page on your site but aren’t sure where to start, heatmaps are an effective way to identify what’s preventing visitors from converting.
When you use this information to shape your variants, you can be more confident from the start that you’ll see noticeable increases in your conversion rate.
You can also use these kinds of tests to identify areas where your visitors are getting stuck.
Some of your visitors might be unsure of whether they want to buy from you. These visitors will need to spend some time browsing your informational pages and learning more about your company.
They might sign up for your email list, return to your site a few times to check out your blog, and engage with you on a social media platform before making a decision.
Other visitors will be ready to purchase almost immediately — and you need to make sure that they can.
To you, your site’s calls to action might be obvious. You know exactly where you’d need to click to make a purchase, and you assume that this is clear for everyone.
But after spending enough time on a site, it’s easy to become unaware of usability issues that could be preventing visitors who aren’t familiar with it from taking action.
These issues could include unclear calls to action, broken links, and even non-clickable elements that your visitors think will take them to a conversion page.
Essentially, you should treat anything that makes the process of converting even slightly more difficult like a usability issue.
And with user analysis tools, you can determine exactly what those issues are.
For example, the Crazy Egg Overlay report shows a breakdown of where all of the clicks on your site are going by element — including clicks on elements that aren’t clickable, like the headshot of Tim Ferriss shown below.
So if you notice that a large portion of your clicks are going to an element that doesn’t help your visitors convert, this is a great starting point for an A/B test.
Try moving your important calls to action to the areas that people are already clicking, or consider linking those non-clickable elements to relevant conversion pages.
When you use data to guide your variants from the start, you can focus on making effective changes that improve your site’s user experience.
Google Optimize offers features comparable to the well-regarded A/B testing tools available today, but at a much lower price: free.
This alone sets it apart as an excellent option for most site owners.
The platform makes it easy to configure and launch tests. So if you’re new to A/B testing, it’s a user-friendly choice that can help you start making data-backed improvements to your site.