Your Step-by-Step Guide to A/B Testing with Google Optimize

by Today's Eggspert

Last updated on August 16th, 2018

A/B testing can be as simple as reciting the alphabet.

You design two versions of a web page (A & B), divide the traffic between the two, then choose the one that gives you the maximum conversions.

Simple, right?

Not necessarily.

Most users struggle with their first few A/B tests. They don’t know which tools to use, how to set up a test, or how to know when it’s done.

If you fall into this category, you’re certainly not alone.

Fortunately, you’re in the right place.

In this article, I’ll show you a free A/B testing tool that’s readily available to every website owner. I’ll walk you through the process of creating your account, launching your first test, and monitoring your results.

I’ll also give you some guidance on designing effective tests, as well as other insights to help you make the most of your efforts to optimize your site for conversions.

Sound good?

Great. Let’s get started.

Why should you run A/B testing?

Before we get into how to A/B test in digital marketing, it’s important to have a clear idea of the why.

To be clear: Every site owner should be running A/B tests.

But why are they so important?

The simplest answer is that they’re an effective way to identify possible improvements that can make a major impact on your online success. Plus, they can be pretty simple to set up and learn from.

For example, a basic A/B test looks something like this:

basic ab testing example

You create two variations of a page, divide your traffic between the two, and see which is more effective in generating a target conversion.

Then, you use that insight to make the elements on the more effective variation permanent.

The general idea is that by running A/B tests, you’ll determine how to create a page that gets the best results for a specific goal. Then, you can publish this as the page all website visitors see.
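Here’s a minimal JavaScript sketch of the core mechanic that any A/B testing tool (Optimize included) handles for you: randomly assign each visitor to a variant, and keep that assignment sticky so they see the same version on every visit.

```javascript
// Minimal sketch: randomly assign each visitor to variant A or B, and
// persist the choice in a cookie so returning visitors see the same version.
function assignVariant() {
  const match = document.cookie.match(/ab_variant=([AB])/);
  if (match) return match[1]; // returning visitor: keep their variant

  const variant = Math.random() < 0.5 ? 'A' : 'B'; // 50/50 split
  document.cookie = 'ab_variant=' + variant + '; path=/; max-age=' + 60 * 60 * 24 * 30;
  return variant;
}
```

The tool then counts conversions separately for each group and compares the rates.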

Simple. But does it work?

The answer to that question depends on how you set up your tests, but it certainly has the potential to.

In one example, Kapitall increased conversions by 44% by using a Google Analytics content experiment. With the right goals and approach, A/B testing can have a huge impact on your ability to generate sales and other important conversions.

This is because when you run A/B tests, you have the ability to find out what works for your audience.

If you’ve spent any time browsing digital marketing blogs, you’ve likely seen dozens — if not hundreds — of “best practices” for increasing conversion rates.

Many of these are backed by multiple studies, with site owners who’ve seen success with them raving about their effectiveness. And in some cases, implementing the same changes on your own site could result in more conversions.

But it isn’t guaranteed.

Your audience is likely made up of an entirely different group of people than most of the audiences that those tests have been run with. So their findings may not apply at all.

For example, adding social proof to forms is generally considered best practice. The idea is that statistics showing your existing number of subscribers, your customer satisfaction rates, or a certain result your clients have seen can give visitors confidence in your brand.

So when presented with the following options for an email opt-in form, the majority of marketers expected the first variation to perform better.

opt in form ab testing

But after running an A/B test using both, version B produced a 122% increase in signups.

It’s impossible to know exactly why this was the case. Maybe users were distracted by the extra line of text. Or maybe the promise of “free updates” simply wasn’t compelling.

But that doesn’t matter.

The real takeaway is that with this specific audience, the form without social proof was more effective. This is the kind of insight you can only gain with A/B tests.

Plus, you aren’t just limited to forms and other conversion-specific elements. You can also use A/B tests to determine which design elements your visitors prefer, or which layouts encourage them to spend more time on your site.

For example, in one test, online newspaper McClatchy moved its story photos from the left of the copy to the right.

story photos ab testing

This seems like a relatively minor change. After all, none of the elements are any different in the variation — they’re simply organized differently on the page.

But the version with right-aligned images saw 20% more clicks, 10% more traffic, and a higher overall average session duration.

You can also take things further by removing certain elements on your page to see how they impact the way that users navigate your pages.

If you think that an element is distracting your visitors from your main calls to action, you can create a variant of the page without that element and see what happens.

For example, when Yuppiechef wanted to increase conversions on a landing page for their online registry service, they removed their navigation bar from the page.

yuppiechef landing page

At first, this may sound like a bad idea.

After all, a navigation bar is one of the most important elements of any site. Wouldn’t removing it confuse visitors and drive them away?

Not in this case.

In fact, removing the navigation bar resulted in a 100% increase in conversions on the page.

So as you run your tests, you may be surprised by which elements your customers prefer, and what generates the best results for your business.

But as you start to see results, you can be confident that they’re specific to your audience — which is much more helpful than any generic best practice.

Why use Google Optimize to do A/B tests?

There are many conversion testing tools on the market, but the best are usually paid and add to your marketing expenses.

Google Optimize is an exception to this rule.

It’s Google’s custom A/B testing and personalization tool, which launched in 2016 and is gradually replacing Content Experiments. If you’ve ever used Content Experiments, you’ll find that the interface and capabilities are relatively similar, so using Optimize is an easy switch.

And even if you haven’t yet used a Google tool for A/B testing, you’ll likely find the experience extremely easy.

Creating an account is a straightforward process. From there, so is running an A/B or split test.

In this article, we’ll focus specifically on A/B testing — but if you’re interested in running multivariate split tests, you can do that with Google Optimize, too.

Beyond its easy setup, one of the biggest advantages of Optimize is that it integrates easily with Google Analytics. Today, Google Analytics is considered a standard tool for any site owner — so being able to access test data directly in your account is extremely convenient.

As you run your tests, you can easily analyze visitors who see each variation in Analytics, since experiment KPIs tie right into your account.

Experiment Dimensions will show up as secondary dimension options in your reports, which lets you monitor on-site behavior of different users based on the variant they see.

On the flip side, you can also use data from Google Analytics to target the right users if you choose to use the paid version of the tool, Google Optimize 360.

Google Analytics Audience targeting lets you target your experiments to key user segments based on data like number of site visits, previous on-site actions, and location.

If you want to maximize conversions from a specific group within your audience, this is a great way to focus your tests on those users. You can create unique offers tailored to that group, then run tests to see how they respond.

Optimize 360 also includes up to 36 combinations for multivariate testing, up to 10 preconfigured experiment objectives, the capability to run over 100 simultaneous experiments, and implementation services.

optimize 360 ab testing

If you’re an enterprise-level company looking to get serious with A/B testing, this could be the right option for you.

But the free version offers plenty of features for most small to mid-sized businesses. That’s why in this post, we’ll focus solely on the test options available at no charge.

Now: Let’s get started.

Here’s how to launch your first test with Google Optimize, in four easy steps.

1. Prep for your tests

There are many things you can learn from an A/B test.

You can run one to determine whether you should focus on a single conversion goal or strive for multiple conversion goals. You could run another to identify which design elements and messaging are most persuasive for your audience.

There are tons of options, and many of them can produce helpful guidance in improving your site.

But no matter what you’re testing, remember to keep your priorities straight. The end goal of any conversion rate optimization, or CRO, process should be to increase your total revenue.

And although conversions and revenue generally correlate, sometimes they don’t.

Imagine you’ve set up an A/B test to choose the best page design for increasing your subscriber rate. You think that changing the CTA copy to emphasize a free resource will generate more subscriptions.

It works!

Your subscription rate goes through the roof — but the design somehow hurts your sales rate and results in lower revenue.

This is maddening. But do you keep your winning design?

Absolutely not.

Always choose the page that will increase your bottom line — not just your conversions. Companies run on revenue, not on conversion rates.

The same holds true for virtually any aspect of your marketing strategy.

In this A/B testing example, a company is trying to determine which keyword produces the best results for a PPC campaign.

keyword comparison ab testing

Based on those initial results, you’d likely conclude that the first keyword is a better option. After all, 50% of the users it attracts convert.

Compared to the second keyword’s 25% conversion rate, this is a significant jump — and results in half the cost per conversion.

Unfortunately, not all of those conversions from the first keyword translate into sales.

keyword comparison part deux 1 ab testing

In fact, only 10% of them do.

Visitors from the second keyword, on the other hand, have a 50% sales rate. So although they’re less likely to complete a basic conversion, they’re higher quality leads overall.
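To make the math concrete, here’s a quick sketch using the rates from this example:

```javascript
// Sales per 100 visitors, combining conversion rate and lead quality.
const keywords = [
  { name: 'Keyword 1', leadRate: 0.50, saleRate: 0.10 }, // 50% convert, 10% of those buy
  { name: 'Keyword 2', leadRate: 0.25, saleRate: 0.50 }, // 25% convert, 50% of those buy
];

for (const k of keywords) {
  const sales = 100 * k.leadRate * k.saleRate;
  console.log(k.name + ': ' + sales.toFixed(1) + ' sales per 100 visitors');
}
// Keyword 1: 5.0 sales per 100 visitors
// Keyword 2: 12.5 sales per 100 visitors
```

Despite its lower conversion rate, the second keyword produces more than twice the sales.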

Keep this in mind as you conduct your tests.

Although a huge jump in conversion rates is exciting, you should focus on earning conversions from qualified leads.

This is sometimes more difficult than it sounds.

In fact, in one study, Larry Kim found that an increase in conversion rate typically leads to a decrease in qualified leads.

conversion rate vs qualified leads ab testing

Of course, this doesn’t mean you can’t increase your sales and revenue with conversion rate optimization. If that were the case, it wouldn’t be such a popular tactic — and we wouldn’t be recommending that you run A/B tests.

It just means that you need to have a clear idea of the kinds of conversions you want to increase, and who your target audience is.

So before you start running tests, determine exactly what it is that you hope to accomplish. This will depend largely on your overall goals.

  • For bloggers, a single subscription could be considered a conversion.
  • For an eCommerce store, a conversion might be a sale, a subscription, a newsletter sign-up, an add-to-cart, or even an event click.

The conversion goals you set depend on your business model, but make sure that regardless of what they are, they’re designed to help you reach your bigger-picture goals.

Identifying the conversions you want to increase will make it easier to determine what kinds of changes you should make in your variations. Plus, it will help you decide right from the start what kinds of results you want to see.

Without this information, it can be difficult to accurately identify the winning variation after you’ve finished running a test.

The clearer you are with the goal you want to achieve, the easier it will be to reach that goal.

And if you’re struggling to pick just one, start by listing all of your ideas in a spreadsheet. Determine which would have the biggest impact on your business’s goals, and start there.

As you start running A/B tests regularly, this will be a helpful resource to have on hand. But for now, pick one starting point, then move on to step two.

2. Create an account with Google Optimize

The first step to running tests with Google Optimize is creating an account.

You can do this by logging into your Google Analytics account, then navigating to Behavior > Experiments.

behavior experiments

This is where Content Experiments used to be, but Google is phasing them out for Optimize. So even if you’re currently using Content Experiments, it’s a good idea to make the switch as soon as possible.

switch to optimize

Alternatively, you can navigate directly to Google Optimize and click “Sign up for Free.”

optimize signup ab testing

Then, click “Create Account” and give your account a name. This can either be your company’s name, or your own name.

create account ab testing

Then, you’ll need to create a container. Use your domain name to keep things simple.

create container

Once you’ve created your account and container, you’ll see the Experiments view. This is where all of your future experiments will be kept.

experiments view

To launch your first experiment, click the gray info icon in the top right. Then, you’ll see a checklist of tasks to complete.

onboarding sequence ab testing

According to this checklist, your next step is creating an experiment. If you want to start testing as quickly as possible, this makes sense.

But for the sake of having your account fully set up before you start launching tests, I recommend linking your Google Analytics account first. So for now, skip over step two, and click “Link to Google Analytics.”

Then, click “Link Property.”

link ga property

Make sure you’re logged into the same Google account you use for Google Analytics, then select the property you want to link from the drop-down list and the view you want to link.

link property menu

Then, you’ll be prompted to add an Optimize snippet to your site. This snippet is what allows Optimize to implement your variations and collect your results.

optimize snippet ab testing

This step is essential, but the way you do it depends on how you currently have your Google Analytics tracking set up.

If you manually entered your Google Analytics tracking code into your header (or another part of your site), all you need to do is copy and paste a new line of code into it.

Optimize will show the exact line you need, as well as where it should go in your Google Analytics tracking code.

deploy optimize

Copy the line that Optimize provides in step two on this screen, then paste it exactly where it’s shown in your existing code in step three.

This is the most common way to install Google Analytics, and adding the additional line is simple.
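For reference, here’s roughly what the finished tag looks like for a standard analytics.js installation. The UA-XXXXXXX-1 property ID and GTM-XXXXXXX container ID below are placeholders; copy the exact line Optimize shows you.

```html
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXXXX-1', 'auto');
ga('require', 'GTM-XXXXXXX'); // the one line Optimize asks you to add
ga('send', 'pageview');
</script>
```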

If you installed Google Analytics tracking with a WordPress plugin like MonsterInsights, though, you’ll need to enable Optimize in that plugin.

Navigate to the plugin’s settings, then find the section for Google Optimize.

ab testing metrics monster insights

From there, you’ll be prompted to enter your new tracking information.

Unfortunately, enabling Optimize is a paid feature for some tracking plugins. If this is the case for the plugin you’re currently using, you can either upgrade to the paid version, or switch to a manual installation of Google Analytics.

Finally, if you prefer to use Google Tag Manager, you can follow Google’s instructions for configuring the Optimize tag.

After you’ve deployed Optimize, you’ll be prompted to minimize page flickering.

minimize page flickering ab testing

This is an optional step, but can greatly improve user experience as you run your tests.

Page flickering is when a visitor quickly sees the original page before the variant appears. So for example, if your homepage header is currently blue, and you want to test a green variation, it might look like this when the page loads for a visitor:

ab test flicker ab testing

This can be annoying to visitors — and it’s easily preventable.

If you manually added the Optimize snippet to your site, simply copy the provided code from Optimize, and paste it just before your Google Analytics tracking code.
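At the time of writing, the page-hiding snippet Optimize provides looks roughly like this, where GTM-XXXXXX is a placeholder for your container ID and 4000 is the timeout in milliseconds. Again, copy the exact code from your own Optimize account:

```html
<style>.async-hide { opacity: 0 !important }</style>
<script>
// Hides the page (via the async-hide class) until Optimize loads,
// or until the 4000ms timeout expires, whichever comes first.
(function(a,s,y,n,c,h,i,d,e){s.className+=' '+y;h.start=1*new Date;
h.end=i=function(){s.className=s.className.replace(RegExp(' ?'+y),'')};
(a[n]=a[n]||[]).hide=h;setTimeout(function(){i();h.end=null},c);h.timeout=c;
})(window,document.documentElement,'async-hide','dataLayer',4000,
{'GTM-XXXXXX':true});
</script>
```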

And if you used a plugin to deploy Optimize, enabling this feature is typically as simple as checking a box.

This quick step may seem insignificant, but eliminating flicker provides a better user experience and helps you maintain a seamless feel throughout your site.

3. Create an experiment

Next, it’s time to create your first experiment!

Click the “Create Experiment” button, enter the URL of the page you want to test, and select the type of experiment you want to run.

name experiment ab testing

Here, you’ll have the option to run an A/B test, a multivariate test, or a redirect test.

The first option is an A/B test, which involves changing one element on each page. This makes it easy to identify exactly what causes your results.

A multivariate test, then, has the same general idea as an A/B test, but involves changing multiple sections on a page.

This can be helpful if you want to test drastically different versions of a page, but also makes it more difficult to identify which elements are responsible for how your visitors respond.

Finally, a redirect test involves testing two entirely separate pages against one another.

These are all useful options for gaining insight into your site’s performance. But in this article, I’ll be sticking with an A/B test — and if you’re new to CRO testing, I recommend you do the same.

Once you select the A/B test option, you’ll see your original page listed as getting 100% of the URL’s traffic.

Click “Create Variant” to alter that page for your first variation.

original variant

In order to do this, you’ll be prompted to install Google’s Optimize extension for Chrome.

optimize extension

Once you complete the installation process, the page you want to test will open with Google Optimize’s visual editor.

optimize editor ab testing

From here, editing your variants is fairly simple. Click on the element you want to alter, click “Edit Element,” and make the changes you want.

For example, if you wanted to test a different banner image on your homepage, you’d select that image, click “Edit Element,” then “Edit HTML.”

edit element

Then, swap out the current image’s URL for the URL of your variant image.

edit html
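Under the hood, this edit is a single attribute change. It looks something like this (the URLs here are hypothetical):

```html
<!-- Original element: -->
<img src="https://example.com/img/banner-original.jpg" alt="Homepage banner">

<!-- Variant, after editing the HTML in the visual editor: -->
<img src="https://example.com/img/banner-variant.jpg" alt="Homepage banner">
```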

You can use the same process to edit your copy, calls to action, button colors, and virtually anything else you want to test.

Once you’ve completed the variant, save your changes and return to Google Optimize. Repeat this process for any other variations you want to test.

Then, you’ll need to set your objectives for the test. Scroll down to the Configuration section and select “Add Experiment Objective.”

configuration

You can either create a custom objective, or select one from a pre-set list.

choose objective ab testing

By default, your goal options include reducing bounces, increasing pageviews, and increasing session duration. Any custom goals you’ve set in Google Analytics will also appear in this list.

If you’ve already set up goals for your most important conversions, like email signups or form submissions, these are typically the best options to use since they have a more direct impact on your success than bounces and pageviews.

And if you haven’t yet set up goals in Google Analytics, go ahead and launch your first test using one of the pre-set goal options. But make it a priority to set up custom goals as soon as possible, and you’ll get much more actionable data from your tests.
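A common way to create those custom goals is to fire an Analytics event from your site, then build an Event goal around it. Here’s a hypothetical sketch for an analytics.js setup (the selector and event names are made up for illustration):

```html
<script>
  // Fire a GA event when the newsletter form is submitted. An Event goal
  // in Google Analytics matching category "form" and action "signup" can
  // then be selected as an Optimize experiment objective.
  document.querySelector('#newsletter-form').addEventListener('submit', function () {
    ga('send', 'event', 'form', 'signup', 'newsletter');
  });
</script>
```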

In the standard version, you can set one primary objective and two secondary objectives for each test you run. For most users, this is plenty.

After you’ve finished setting up your goals, switch to the Targeting tab to determine which of your visitors you want to test your variations on. This will control how many (and which) of your visitors will see one of your test pages as opposed to your original page.

targeting

The default settings target all of your visitors, with an even split between each of your variations.

For quick results, you may want to include a high percentage of visitors in the experiment. However, if your experiment is rather drastic or risky, and you’re worried that it might negatively impact your results, it’s best to only run your test with a small portion of your traffic.

You can change these percentages by clicking the “Edit” button next to the “Weighting of visitors to target” section.

weight variant

This is all of the information that Optimize needs to start running a basic test.

But if you want more control over which of your visitors see your variants, you can also use advanced targeting rules to only run your tests with specific segments of your audience.

targeting rules

The standard options let you target visitors based on AdWords data, location, browser, on-site behavior, cookies, and JavaScript variables.
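The JavaScript variable option is especially flexible: any value your page exposes on window can become a targeting condition. A hypothetical example (the variable and cookie names are made up):

```html
<script>
  // Expose a page-level variable that an Optimize "JavaScript variable"
  // targeting rule can read, e.g. to run an experiment only for visitors
  // where userType equals "returning".
  window.userType = document.cookie.indexOf('returning_visitor=1') !== -1
    ? 'returning'
    : 'new';
</script>
```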

Optimize 360 users can also use the audience targeting feature to focus on specific audiences they’ve already created in Google Analytics.

Once you’ve finished creating your audience, select “Start Experiment” to launch your test.

start experiment

Congrats! Your first Google Optimize A/B test is up and running.

Let it do just that for a while — then come back to step four to see your results.

4. Analyze your results

Once your test has adequate time to accumulate data, navigate to the Reporting tab in Optimize to see your results. Your results will be broken down into “cards” with data about how your variants performed.

First, you’ll see the Summary Card. This is exactly what it sounds like: An overview of your test and its results, based on your primary objective.

summary ab testing

In the improvement column, you’ll see the difference in the modeled conversion rate between your winning variant and your original page for your primary objective.

It’s important to note that this number doesn’t necessarily reflect the actual conversion rates seen during the test, but a hypothesis based on the data collected.

Next, the probability to be best column shows the probability that your winning variant will consistently outperform all others. The higher this number, the more confident you can be in your changes.

Finally, the probability to beat baseline column shows the probability that a variant will get better results for a target objective than the original version. This percentage starts at 50%, to account for chance, and increases (or decreases) as both pages accumulate results.

At the bottom of your summary card, you’ll see an experiment sessions graph that shows any session where the experiment executed. This includes sessions during which the user did not see the experiment.

Subsequent sessions are also included in experiment sessions to show conversions that occur after a user is included.

The next card in the reporting tab is the Improvement overview. This report gets into more detail regarding each variant’s performance for your target objectives.

It also gives a simple, at-a-glance look at your results with color-coded metrics: Green values represent significant improvement, while red values represent significant decreases in performance.

improvement overview

From here, you can sort your results by objective to see how your variants stack up.

This is where it becomes extremely important to have a clear idea of what you want to accomplish. When your variants have varying levels of improvement for different metrics, it can be difficult to identify the best one.

But when you focus on your primary objective, it’s easy to pick a winner. Then, you can use the other information in the report to gain additional insight into your audience’s preferences and behavior.

The third card of results contains your Objective detail report, which shows each variant’s performance for a specific objective.

Select the objective you want to focus on from the drop-down menu in the top left, then select the variants you want to compare. Then, the chart at the bottom of the card will show each variant’s performance over the course of your experiment.

objective detail

The colored sections in the graph show the performance range that each variant is likely to achieve 95% of the time, and the line in the middle of each range shows its median value.

In most cases, you’ll notice that the intervals in the graph narrow over time. This is because at the start of any experiment, there’s greater uncertainty about each variant’s performance.

When only three users have seen a variant, for example, each user’s response has a much greater impact. So the more data you collect, the more accurate a picture you’ll get of each variant’s performance.

As these intervals narrow and your results become clearer, you can also use your data to make more accurate predictions of how each variant will perform in the future.
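Optimize models these ranges with Bayesian methods, but you can see the same narrowing effect with a simple frequentist sketch, assuming a plain 95% Wald interval:

```javascript
// 95% interval for an observed conversion rate: p ± 1.96 * sqrt(p(1-p)/n).
function interval95(conversions, visitors) {
  const p = conversions / visitors;
  const margin = 1.96 * Math.sqrt(p * (1 - p) / visitors);
  return [Math.max(0, p - margin), Math.min(1, p + margin)];
}

// The same observed 10% rate gets a much tighter range as traffic grows:
for (const n of [30, 300, 3000]) {
  const [lo, hi] = interval95(n * 0.1, n);
  console.log('n=' + n + ': ' + (lo * 100).toFixed(1) + '% to ' + (hi * 100).toFixed(1) + '%');
}
// n=30:   0.0% to 20.7%
// n=300:  6.6% to 13.4%
// n=3000: 8.9% to 11.1%
```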

Beyond these reports in Optimize, you can dig deeper into your results with Google Analytics. Each visit from Optimize is sent to Google Analytics with an experiment name, ID, and variant number.

ga optimize data ab testing

This gives you more insight into how users who see certain variations behave on your site beyond your initial objectives.

8 tips to help you make the most of Google Optimize

Google Optimize can be a powerful tool for analyzing user behavior and improving your site’s conversion rates.

But in order to get valuable results, you’ll need to run your tests efficiently and effectively.

Some of this simply comes through experience with the tool — but you can jump-start the process by following these eight tips.

1. Let your tests run long enough to collect significant results

First, it’s essential to let each of your experiments run for a long enough period of time to collect a sufficient amount of data.

You may be in a rush to launch a winning variation as quickly as possible. After all, this seems like the fastest way to boost your conversion rates and start seeing the results you want.

But as tempting as it is to start implementing changes as soon as the results start rolling in, it’s important to let your tests collect statistically significant results.

Once you launch a test, you can start to see data almost immediately.

gathering data

On this screen, Google recommends running experiments for at least two weeks. But this guideline should be seen as an absolute minimum, since the longer you run a test, the more accurate results you’ll see.

It’s common to see quick increases in conversion rates shortly after launching a new test.

This is because with a small sample size, even one or two conversions can look like they make a major impact.

For example, the following graph shows conversions on two versions of a landing page, where the gray line represents the original page and the blue line represents the variant.

regress mean

In those first few days, it looks like the new variant is drastically outperforming the original page.

Exciting!

But as the test continued to collect data, the conversion rates converged until they averaged out to be almost exactly the same.

This means that making changes based on those first few days’ results would be a waste of time.

Unfortunately, this phenomenon is extremely common — and many site owners make the mistake of implementing changes based on those early results.

In a similar example, ConversionXL ran an A/B test for an eCommerce client for a total of 35 days. During that time, they collected almost 3,000 transactions per variation.

Here’s what their results looked like:

ecommerce test

In the first few days, Variation 3 was winning by a lot. In terms of sales, it was generating about $16 per visitor, while the control version was only earning $12.50 per visitor.

Even after a week, this was still the case. But by week two, that variant was passed by Variant 4 — and by the end of the month, there was almost no difference between any of them.

So, again: Calling this test too early would’ve driven this site owner to make a change that didn’t ultimately improve their conversions.

Of course, this isn’t always the case — but it’s a common enough scenario that it’s worth getting into the habit of letting your tests collect a significant amount of data before you take action.

The smaller your sample size, the more power each user has to impact your results.

For example, let’s say you run a test with two variants and each one gets 50 visitors. If your traffic numbers are normally relatively low, this might seem like a decent sample size.

So you look at your results: 5 users converted on Version A, and 7 converted on Version B.

This means that Version A had a 10% conversion rate, while Version B achieved a 14% conversion rate.

So if Version A was your original page, implementing Version B would result in 40% more conversions!

Except that it probably wouldn’t.

Because this statistic was determined by just two visitors.
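You can check this yourself with a quick two-proportion z-test. It isn’t the Bayesian model Optimize uses, but it’s a reasonable back-of-the-envelope significance check:

```javascript
// Two-proportion z-test for 5/50 vs. 7/50 conversions.
// |z| needs to exceed roughly 1.96 for 95% confidence.
function zScore(convA, nA, convB, nB) {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se;
}

const z = zScore(5, 50, 7, 50);
console.log(z.toFixed(2));       // 0.62
console.log(Math.abs(z) > 1.96); // false: nowhere near significant
```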

So it’s important to remember that when you first launch a test, even one visitor who behaves irregularly can skew your data.

This can lead you to draw inaccurate conclusions about your tests.

For example, in one ConversionXL experiment, they saw the following results two days after launching the test.

conversionxl 1 ab testing

The first variation was losing to the control version of the page by 89%, with a 0% chance of beating the original.

This was based on a sample size of just over 200 people, which some site owners might consider a large sample. And if ConversionXL had been satisfied with that number, these results would have indicated that they should scrap the variant and stick with the original.

But 10 days later, with an additional thousand users in the sample, that variation was winning with 95% confidence.

conversionxl 2

Of course, not all of your tests will show such drastic changes after the first few users.

But what’s important to remember is that they could — and when it comes to conversion rate optimization, effectiveness is much more important than speed.

It’s also important to consider the role of chance in your results.

In this example, one version of a page generates a 31.3% conversion rate, while another variant generates a conversion rate of 40.7%.

significance 1 ab testing

Does that mean that those additional conversions can all be attributed to the differences on the page, like new buttons and bold copy?

No.

With a relatively small sample size of 236 users, there could be tons of other factors at play. Maybe more of the users who saw the second variant were already planning to convert. Or maybe more of them were simply in a better mood.

Either way, this sample size isn’t enough to warrant making a permanent page update.

So how do you know when to stop?

Answers to this question vary by who you ask, but ConversionXL recommends that you run your tests until:

  • They’ve had at least three weeks to collect data (but preferably four)
  • You’ve reached your pre-calculated sample size
  • You’ve achieved statistical significance of at least 95%

It’s important to note that they recommend you run your tests until all of these conditions are met — not just until whichever comes first.

And still, many experienced site testers have their own ways of determining whether a test is complete.

Ton Wesseling, for example, recommends that site owners test for “one purchase cycle”:

More traffic means you have a higher chance of recognizing your winner on the significance level you’re testing on! … Small changes can make a big impact, but big impacts don’t happen too often … so you need much data to be able to notice a significant winner.

BUT – if your test lasts and lasts, people tend to delete their cookies, … so when they return in your test, they can end up in the wrong variation. So, when the weeks pass, your samples pollute more and more… and will end up having the same conversion rates. Test for a maximum of 4 weeks.

But Peep Laja claims that he would “not believe any test that has less than 250-400 conversions per variation.”

So what should you do if you’ve run your test for four weeks, but still haven’t hit your target sample size?

In general, it’s best to collect more data rather than less — so continue running your test.

And make sure to collect sufficient data before calculating statistical significance.

If you’re unfamiliar with the term, it means that you’ve achieved results that can’t be attributed to chance.

Factors like sampling errors and probability can throw off your results. Achieving statistical significance means (in theory) that you can be confident in the fact that your data isn’t being skewed by those factors.

Determining whether you can reliably attribute improvements to specific changes can be challenging, and calculating statistical significance is meant to help testers know for sure.

In the previous version of Google’s A/B testing tool, Content Experiments, users could fix the “confidence threshold” to determine the minimum confidence level that must be achieved before a winner could be declared.

The higher the threshold, the more confident you could be that the winning page would outperform other variations in the future.

Unfortunately, this feature didn’t carry over to Optimize. This means you’ll need to do a little more of the work to determine if and when you can be confident in an experiment’s results.

You can start by deciding when to analyze a test’s results before you even launch it.

Some site owners do this by selecting a sample size or number of conversions, then stopping their test as soon as they hit that number. And while this can be a good way to make sure you don’t end your tests too soon, that target sample size is ultimately just a guess.

Instead, you can use tools like this sample size calculator to determine an effective sample size goal.

ab testing 2018

Start by entering the conversion rate of your existing page as a baseline, then set the minimum improvement you want to see. This will tell you exactly how many users you should aim to have for each variation before drawing any conclusions.

It’s important to note, though, that this calculator is based on the idea that the higher your expected uplift, the smaller sample size you need. So if you reach your target sample size but don’t see that target uplift, your results are not statistically significant.
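If you just want a rough number without a calculator, Lehr’s rule of thumb approximates the required per-variant sample size, assuming roughly 5% significance and 80% power. A sketch under those assumptions:

```javascript
// Lehr's rule: n per variant ≈ 16 * p(1-p) / delta^2,
// where delta is the absolute difference you want to detect.
function sampleSizePerVariant(baselineRate, relativeLift) {
  const delta = baselineRate * relativeLift; // e.g. 10% baseline, 20% lift -> 0.02
  const p = baselineRate + delta / 2;        // average rate across both variants
  return Math.round(16 * p * (1 - p) / (delta * delta));
}

// Detecting a 20% relative lift on a 10% baseline (i.e. 10% -> 12%):
console.log(sampleSizePerVariant(0.10, 0.20)); // 3916 visitors per variant
```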

Once you’ve hit your target sample size, you can use tools like this Kissmetrics A/B significance test to determine whether your results have statistical significance.

kissmetrics tool

Enter the number of visitors and conversions for each variation, and you’ll see a simple report with your results.

kissmetrics ab results

If you see a confidence rate of at least 95%, you can be fairly sure that your results were not due to chance — as long as you hit your target sample size before running this calculation.

To illustrate why this is so important, I’ll refer back to that original ConversionXL experiment that showed the new variant losing to the original page by 89%.

The entire point of calculating statistical significance is to arrive at definitive conclusions. So entering that data into a statistical significance calculator would’ve shown that there was a margin of error, right?

Nope.

Here’s what the results looked like when plugged into a significance calculator:

conversionxl significance

This shows an improvement of over 800%, with 100% certainty.

Of course, we already know that the “100% certainty” part is wrong — but without that knowledge, these results would look pretty convincing.

In fact, some site owners would stop the test right here. They’d implement the winning variation, and maybe even use their new “insight” to change dozens of other pages on their site.

Not only would this fail to produce a significant improvement in conversion rates, it would also be a huge waste of time.

So before you calculate statistical significance, make sure you’ve collected enough data that your results can actually be significant.

And as a caveat, even if you do reach a large sample size and statistically significant results within a few days, it’s important to run your tests for at least a full week.

In fact, it’s best to run your tests in intervals of full weeks. So, for example, if your test starts on a Friday, end it on a Thursday.

It’s extremely common to see fluctuation in traffic and user behavior over different days of the week. This is especially true when comparing weekdays to weekends, but it’s often unpredictable.

For example, in this report, Thursday’s 4.26% conversion rate is drastically higher than Saturday’s 2.43%.

dayoftheweek ab testing report

If this site owner had run a test from Sunday through Wednesday, their results would be very different than if they’d run it Tuesday through Saturday.

These variations can be attributed to any number of reasons.

For example, if you run a restaurant, a user might visit your site during the week as they’re planning their weekend. They’ll check out your menu and hours, then leave.

Then, they might return to your site the following Saturday to make online reservations.

If several of your users follow this same process, this could result in a huge difference in conversion rates — and lead you to believe that your tests aren’t producing the results you want.

So as you run your A/B tests, it’s important to be patient. Accumulate a large enough sample size, aim to achieve statistical significance, and be careful not to collect skewed data.

Letting your tests just run requires patience, but is more than worth it when you’re able to make changes that have a real impact on your business.

2. Test your navigation

Your navigation is one of the most important parts of your site. It plays a major role in how users make their way through your pages and interact with your content.

So even though you might think that your navigation is intuitive and user-friendly, it’s a good idea to run some tests to make sure that’s really the case.

Even minor issues can throw your users off and prevent them from moving through your site, which can negatively impact your conversion rates in a big way.

Within your navigation, there are several different items you can test. But one element that can make a huge difference is the copy you use in your menus. Your visitors should know exactly what to expect when they click a link.

If they don’t, they could be confused when they land on a page that isn’t what they were expecting. And even worse, they may never find the correct page for their needs, simply because the menu copy didn’t include what they were looking for.

For example, online form builder Formstack used to have an item titled “Why Use Us” in their main navigation. Then, they created a variation that linked to the same page, but with the copy “How it Works.”

test navigation

This seems like a relatively inconsequential change, right?

Wrong.

Changing these three small words increased page traffic by 50% and led to 8% more free trial signups.

Adopting the same language that your audience uses can go a long way in helping them find the content that they want.

And beyond that, you can also try adjusting your navigation to reflect the way that users look for information.

For example, Bizztravel Wintersport specializes in skiing holiday packages in the Alps. After digging into their Google Analytics data, they found that users most often used the site search feature to look for ski village names.

They also found that only 23% of their visitors arrived on their homepage, and the rest were pushed to navigate by country, then by region, then by village. This meant that it was taking an average of five clicks before a user arrived at the ski village they were looking for.

It was clear that the navigation system did not reflect the way that users preferred to find information for planning a vacation. Their original navigation and region page looked like this:

Bizztravel.control

So even if a visitor arrived on the site knowing exactly which ski village they wanted to visit, this navigation forced them to first select a region. If they didn’t know the region, they’d need to guess — then repeat the process until they guessed correctly.

To address this issue, the company restructured their navigation menu to include direct links to ski villages, as well as ski village recommendations and country flag buttons that let users browse villages by country.

They also removed links with general company information from this main navigation menu. As a result, the new menu looked like this:

bizztravel variation final ab testing

As a result, they achieved a 21.34% increase in goal completions with a test result confidence level of 97%.

The new navigation helped users find exactly what they were looking for, because it was designed with their browsing habits in mind.

So if you’re looking for an impactful way to improve your site’s performance, your navigation could be a great place to focus your efforts.

And to take things a step further, you can also consider the actions a user takes beyond their initial menu selection. In fact, you can use A/B testing to improve your entire user flow.

For example, a basic user flow typically looks like this:

sample user flow

Essentially, your user flow describes how you want visitors to move through your pages. Most sites have multiple goal flows, with each leading to a specific conversion.

In the example above, this hypothetical site owner has come up with three possible paths a user could take on their site. Although there’s some overlap between certain steps, they’re all designed to function independently of one another.

But in some cases, users need to make it through multiple flows before they make the main conversion.

stacked flow

This is called a stacked user flow, and is particularly important for companies selling high-value products.

After all, you can’t expect a visitor to spend thousands of dollars during their first interaction with your brand — so a simple, three-step process resulting in a sale is unrealistic.

But it’s not unrealistic to expect a first-time visitor to sign up for your email list. And after receiving several relevant emails, that visitor will be more likely to trust your company enough to make a purchase.

Of course, these are only two examples of possible user flows. Your company’s ideal flow depends on your industry, business model, and audience.

So in order to design an effective model, you need to know what problems your visitors want to solve, what they need in order to solve those problems, which features of your products are most important to them, and what their doubts and hesitations are about purchasing.

Then, you can design your flow to answer all of their questions and highlight the ways that your company can help them reach their goals. But after you’re satisfied with what you’ve created, it’s still mostly hypothetical.

Just because you want your visitors to follow a specific path to conversion doesn’t mean that they will.

Fortunately, once you’ve created your ideal path, you can use A/B tests to drive your visitors to follow it.

The method you use to do this is the same as what you’d do to run any other A/B test on your site. But instead of focusing on driving high-level conversions, you’ll look for ways to guide users to the next step in your user flow.

So, for example, part of your ideal flow might involve moving users from a landing page to a specific product page. To make this happen, you could run an A/B test on that landing page with the goal of increasing clicks on a link to that product.

Then, you could create similar tests for the other important pages in your user flow.

Many of the goals you optimize for in these tests may seem insignificant. For example, increasing clicks to a specific informational page won’t show the immediate revenue increase that increasing clicks on a conversion button can.

But these small changes can help you move users through your site in a way that helps them evolve from first-time visitors to qualified leads to valuable customers.

And if you’re looking to attract long-term clients for your business, this might be even better than an immediate increase in conversions.

3. Utilize secondary objectives

Though your primary objective may be the reason you’re running the A/B test in the first place, setting secondary objectives will help you gain even more actionable insight.

secondary objectives

For example, if you run an ecommerce site, your primary goal with a test might be to get users to add a specific product to their cart. In order to achieve this, you could alter the location of different buttons and calls to action on the page.

And although you’ll focus on the changes in conversion rate for that primary goal, it’s unlikely that this will be the only metric that’s affected by your changes.

Take advantage of your ability to add multiple secondary objectives in Google Optimize to keep track of these additional impacts. If you already have other custom goals set up, this is your best option.

You can use this feature to monitor smaller conversions like loyalty program sign ups or product feature views.

If you don’t yet have many conversion goals set up for your site, you can also use the default objectives like bounce rate and pageviews. There’s no reason not to collect this data when you have the ability to — and looking over it can help you get a more well-rounded understanding of the impact each change makes.

4. Test before launching a redesign

There are many reasons you might decide to completely redesign your site.

Maybe your company is rebranding. Maybe usability issues are impacting how users engage with your content. Or maybe it’s simply been a few years since you first launched your site, and your design now feels dated.

These are all valid reasons.

Unfortunately, no matter how solid your reasoning, launching a redesign can wreak havoc on your conversion rates.

As an extreme example, take a look at what happened to Digg after launching a redesigned site.

digg redesign ab testing

As one of the original social bookmarking sites, Digg was an extremely popular site for sharing and discussing online content.

But as social media sites like Facebook and Twitter gained in popularity, Digg decided to launch a redesign with a focus on social networking. The new version was built to highlight content shared by a user’s friends, instead of content that received the most upvotes from strangers.

It’s fairly safe to assume that Digg did not test this new design with their users — because in the month following the launch of their redesign, traffic fell a whopping 26%.

To be fair, this was a more drastic change than you might be planning to make with your site.

Still, even making minor changes to elements like your navigation can have a major impact.

Users who visit your site often are used to your current setup. They know exactly what to click in your navigation, where to scroll on a page to find the information they need, and how to perform key conversions.

No matter how great your new design, changing things up will throw these users off.

But if you focus on usability from the start, and test your variations before choosing a final design, you can select the version that’s most intuitive and user-friendly.

5. Link your Optimize account with AdWords

One of the biggest advantages of running A/B tests with a Google-owned tool is how easy it is to integrate data from other Google properties.

If you run PPC campaigns with AdWords, you can link your accounts to improve your targeting and run tests tailored to specific campaigns.

Though this can help you run more effective tests in a variety of ways, its most valuable use is for improving landing pages. Each time a user clicks one of your PPC ads, you spend part of your advertising budget to bring them to your site.

That means you need the landing page they arrive on to be as effective as possible at driving conversions.

The easiest way to see how effective your landing pages currently are is with the “Landing Pages” page in AdWords. This report shows performance metrics like clicks and conversions for each landing page, as well as whether each one is mobile-friendly.

Use this report to identify which landing pages on your site are generating the conversion rates you want — and which aren’t.

This alone might help you uncover potential changes. For example, if your landing pages with images outperform those without across the board, that could be an easy fix.

But in most cases, it’s not that simple.

Users who arrive on your site from PPC campaigns are typically looking for specific information, and a page that’s designed to meet their needs.

But if you’ve ever attempted to create campaign-specific landing pages, you know that figuring out exactly what users want is challenging.

That’s where the AdWords-Optimize integration becomes extremely helpful. You can create landing page variations, then test them based on keywords, ad groups, and campaigns.

For example, let’s say you run a hotel and you want to generate more conversions from your ad set targeting the keyword “family friendly hotels.”

You can use Google Optimize to create a variation of your landing page that replaces your standard hotel photo with a photo of a family at your hotel’s pool. Then, you can test that variation against your original page with visitors who arrive on your site from that campaign.

optimize adwords ab testing

If this variation generates more conversions, you could implement it permanently as the landing page for that campaign. Then, you can continue testing variations of that page with new copy and forms to find out exactly what appeals to that subset of your audience.

6. Test both micro and macro conversions

Not every page on your site is designed to generate sales.

And even on the pages that are designed to generate sales, not all of your visitors will be ready to convert during their first visit.

So if you’re only focusing on those major conversions, it may look like your tests are failing. Fortunately, that may not be the case, and you can get a more nuanced view of your visitors’ behavior by including both micro and macro conversions in your tests.

If you’re unfamiliar with the terms, macro conversions are the most important actions a visitor can take on your site. If you run an ecommerce store, for example, that would be making a purchase. If you run a non-profit, it would be making a donation.

But before most of your visitors will be ready to take those larger actions, they’ll take a series of smaller steps, like reading reviews, expanding product details, and adding items to their carts.

These are all micro conversions.

When set up correctly, the micro conversions a user can make on your site should get them closer to performing a macro conversion.

For example, a user isn’t going to purchase a high-value product without a full understanding of its features and confidence in its quality.

Amazon does a great job of presenting all of this information in a straightforward, user-friendly way.

bose amazon 1

When a user lands on a product page, they aren’t forced to immediately buy the product. They can if they want to — but they can also read reviews, watch videos, browse answers to common questions, and learn about special sales and promotions.

Each of these actions will get them closer to being ready to buy the product — or at least they should.

If you’re unsure of whether your micro conversions are effective in driving users to make macro conversions, A/B tests can be a great way to get a concrete answer.

Test variants with different versions of a micro conversion’s calls to action and buttons, and set that micro conversion as the primary objective. Then, set the macro conversions on your site as secondary objectives.

If you see a strong correlation, this is a good sign that your sales funnel is working as it should.

And if you don’t, this is a sign that you may need to re-think your micro conversions.

Work backwards from your main conversion goals, and figure out what users need to do before they take those steps. Then, look for ways to encourage those smaller actions.

Create variants, launch tests, and see which elements are most effective at generating the micro conversions that drive macro conversions for your business.

In many cases, these will be small changes — but they can quickly add up and have a positive impact in driving the results you want.

7. Consider how many conversion goals you should have

In the previous tip, I mentioned the importance of having several micro conversion goals throughout your site.

I stand by that.

But it’s also important to make sure that you don’t have too many conversion goals vying for visitors’ attention.

The more calls to action you have on a page, the more options each visitor has to weigh. And in many cases, too many choices can lead to the visitor making no choice at all.

You can eliminate this indecision by consolidating similar conversions. For example, when Whirlpool wanted to increase clickthroughs from its email campaigns, they consolidated four CTAs into one.

As a result, they improved their clickthrough rate by 42%.

Along those same lines, retailer nameOn wanted to redesign their checkout page to reduce cart abandonment.

Their original page featured several unnecessary details like an email opt-in form and buttons linking to product pages.

nameon

They simplified the page by removing all of the buttons except one that showed more information about a welcome bonus and one that let the visitor “Continue to checkout.”

nameon 2

As a result, they increased their number of completed checkouts by 11.4%.

In this case, less was definitely more — and that’s a principle that holds true across many aspects of conversion rate optimization.

In another example, SEO company TheHOTH wanted to maximize conversions on their homepage.

The original version featured a video, a signup form, customer logos, client testimonials, and everything else you’d expect from a reputable agency.

hoth original ab testing

And while they were earning a steady flow of traffic, not much of that traffic was translating into conversions or sales. So they decided to create an extremely minimal variant with nothing but the signup form.

variation page ab testing

This extremely basic design drove their conversion rate from 1.39% to 13.13%. You might need to re-read those numbers — but they’re not a typo.

In this case, eliminating all distractions pushed visitors to take the only action they could.

So if you’re looking for a way to increase conversions on a high-value page, achieving your target conversion rate could be as simple as eliminating the other elements on that page.

8. Use customer insight to design your tests

A/B testing can help you uncover which variations of a page get the best responses from your target audience.

But creating those variations in the first place shouldn’t involve guesswork.

Sure, you might base simple choices like button colors on aesthetic preferences — but beyond that, you should aim to create your variants based on what your customers actually want.

For example, when Groove wanted to boost conversions on a landing page, they began with an extensive customer survey, simply talking with their existing customers to get a better idea of the language they used when describing Groove’s services.

Then, they set up an autoresponder email to new customers asking them why they decided to sign up.

When they designed the new landing page, they incorporated their findings into the copy. So instead of explaining their product from the company’s standpoint, they used language that illustrated its value according to actual customers.

groove landing page ab testing

Instead of describing what they had to offer as “SaaS and eCommerce Customer Support,” they explained it as “Everything you need to deliver awesome, personal support to every customer.”

As a result, their conversion rate increased from 2.3% to 4.3%.

It’s important to remember that your customers’ priorities may be slightly different than your own. So even if you offer exactly what they need, it might not come across in your copy.

Spend some time talking to your customers and figuring out what really matters to them, then feature that information prominently on your conversion-focused pages.

Beyond copy, you can also take user experience into account by looking into your visitors’ on-site behavior.

For example, when Nurse.com wanted to improve their conversion rate on a landing page about continuing education, they ran a heatmap test to see where users were clicking.

nurse.com heatmap ab testing

Their main conversion buttons were receiving a decent chunk of the clicks.

Unfortunately, less-important links were getting more clicks, and some users were also clicking on non-clickable elements.

This showed that the page wasn’t as effective as it could be at driving users to make that main conversion.

To address this issue, they created a variation that eliminated the distracting links, consolidated the two main conversions into one button, and moved the most important information to an area of the page that was already getting a lot of attention.

As a result, users were directed straight to the most important conversion — and the variant generated 15.7% more sales.

ab testing example

If you want to improve a specific page on your site but aren’t sure where to start, heatmaps are an effective way to identify what’s preventing users from converting.

When you use this information to shape your variants, you can be more confident from the start that you’ll see noticeable increases in your conversion rate.
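If you’re curious what a heatmap tool is actually recording under the hood, the core idea can be sketched in a few lines of JavaScript. This is a minimal illustration rather than a substitute for a dedicated tool, and the /collect-clicks endpoint is a hypothetical stand-in for wherever you’d aggregate the data:

<script>
// Minimal sketch of heatmap-style click logging.
// "/collect-clicks" is a hypothetical endpoint; swap in your own.
document.addEventListener('click', function (event) {
  var payload = {
    page: location.pathname,
    x: Math.round(event.pageX),
    y: Math.round(event.pageY),
    viewportWidth: window.innerWidth, // positions shift with screen size
    timestamp: Date.now()
  };
  // sendBeacon is more reliable than XHR when the click triggers navigation
  navigator.sendBeacon('/collect-clicks', JSON.stringify(payload));
});
</script>

Recording the viewport width matters because the same coordinates map to different elements at different screen sizes, which is why heatmap tools typically segment desktop and mobile clicks.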

You can also use these kinds of tests to identify areas where your users are getting stuck.

Some of your visitors might be unsure of whether they want to buy from you. These users will need to spend some time browsing your informational pages and learning more about your company.

They might sign up for your email list, return to your site a few times and check out your blog, and engage with you on a social media platform before making a decision.

Other visitors will be ready to purchase almost immediately — and you need to make sure that they can.

To you, your site’s calls to action might be obvious. You know exactly where you’d need to click to make a purchase, and you assume that this is clear for other users, too.

But after spending enough time on a site, it’s easy to become unaware of usability issues that could be preventing visitors who aren’t familiar with it from taking action.

These issues could include unclear calls to action, broken links, and even non-clickable elements that your users think will take them to a conversion page.

Essentially, you should treat anything that makes the process of converting even slightly more difficult as a usability issue.

And with user analysis tools, you can determine exactly what those issues are.

For example, the Crazy Egg Overlay report shows a breakdown of where all of the clicks on your site are going by element — including clicks on elements that aren’t clickable.

screenshot overlay ab testing

Often, when users click on one of those elements, it’s because they want to take action. And every time they do, absolutely nothing happens.

So if you notice that a large portion of your clicks are going to an element that doesn’t help your visitors convert, this is a great starting point for an A/B test.

Try moving your important calls to action to the areas that users are already clicking, or consider linking those non-clickable elements to relevant conversion pages.
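If you’d like a feel for the underlying idea, here is a rough sketch of the kind of dead-click audit an overlay report automates, flagging clicks that land on elements with nothing clickable behind them. It’s illustrative only; a real report also aggregates, segments, and visualizes this data across all of your visitors:

<script>
// Rough sketch of a "dead click" audit: flag clicks that land on
// elements with no link, button, form control, or handler behind them.
document.addEventListener('click', function (event) {
  var interactive = event.target.closest(
    'a, button, input, select, textarea, label, [onclick]'
  );
  if (!interactive) {
    // In practice you would send this to an analytics endpoint
    // rather than the console.
    console.log('Dead click on:', event.target.tagName, event.target.className);
  }
});
</script>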

When you use data to guide your variants from the start, you can focus on making effective changes that improve your site’s user experience.

Conclusion

Google Optimize offers features comparable to those of the well-regarded A/B testing tools available today, but at a much lower price tag: free.

This alone sets it apart as an excellent option for most site owners.

The platform makes it easy to configure and launch tests. So if you’re new to A/B testing, it’s a user-friendly choice that can help you start making data-backed improvements to your site.
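To give a sense of how lightweight the setup is, here is a minimal sketch of the legacy analytics.js deployment Google documents for Optimize; UA-XXXXX-Y and GTM-XXXXXX are placeholders for your own Analytics property ID and Optimize container ID:

<script>
// Placeholder IDs: replace UA-XXXXX-Y and GTM-XXXXXX with your own.
window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments); };
ga.l = +new Date();
ga('create', 'UA-XXXXX-Y', 'auto');
ga('require', 'GTM-XXXXXX'); // loads the Optimize container
ga('send', 'pageview');
</script>
<script async src="https://www.google-analytics.com/analytics.js"></script>

Google also documents an optional page-hiding snippet that prevents visitors from briefly seeing the original page before the variant renders.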

And even if you’re not new to A/B testing, it’s difficult to argue that Optimize doesn’t offer a helpful set of tools for any site owner looking to improve their conversion rates.

Have you used Google Content Experiments? What was your biggest win?


Today’s Eggspert

This article was written by today's Daily Eggspert. If you would like to contribute as an Eggspert, please reach out to us here.

17 COMMENTS


  1. Kenji says:
    May 18, 2018 at 11:00 pm

    Great article about A/B testing. I noticed you wrote about Google’s Optimizer for doing this, but I’m a member of Crazyegg and I noticed you guys have an A/B testing tool on the dashboard. Do you have any articles about how to implement the testing using your own tools vs Google?

  2. amar says:
    June 22, 2016 at 5:51 am

    Hi,

    I have two landing pages in the AB Experiment under Google analytics. Under the advanced option in the settings, I have also selected the option of equally distributing the traffic to both the landing/ testing pages. Here is my question:

    Suppose, at the same time, I am running Google ads and other social media ads, and all of the ads have their final/landing URL pointing to one of the pages (the original page) in the A/B experiment. What I want to know is: will Google split the traffic generated from the different sources, like AdWords and social media, between the two landing pages that are part of the A/B test?

  3. Anonymous says:
    March 11, 2016 at 4:58 am

    Hey, Thanks for your share !!

    I have a quick question about this A/B testing. I have a WordPress blog with 500+ posts, and I want to test my homepage’s AdSense ad placement (clicks + impressions) with a new design. My question is: how do I create a testing page for my homepage with the new design, and at what URL should I publish it on my server?

    If the homepage is: http:mydomain.com
    What should the variation page URL be: ??

    Please suggest me to start a quick testing

    Thanks

  4. Chloe says:
    February 2, 2016 at 6:20 am

    Hello,

    Thanks for this post! Super clear and easy to follow. My test has been running for about 9 days now and I just went to check on it. I’m worried as there seems to be no data (graph is a horizontal line and All Sessions reads 0.00%).

    I clicked the Re-validate button to check my pages for working code, and both pages pass the tests. At the bottom, it also says “Note: Two experiment variations do not appear in the table.” Do you know what I’m doing wrong?

    Thanks!
    Chloe

  5. Jim Parris says:
    January 6, 2016 at 10:27 am

    Since I don’t think anyone mentioned this yet, it is important to keep in mind that this is a javascript-based redirect.

    It has to read the javascript, execute, confer with Google about which segment the user is in (to show experimental or original page), and then redirect the user.

    This causes a different experience.

    Also, the url will change to the experimental version (obviously).

    For beginners, and if nothing else is possible, this is a very convenient solution for quick A/B testing.

    However, for serious testing stick with logic implemented on the webserver or load balancing level that uses server-side redirect logic to avoid the bad experience of a page-half-loaded-redirect and to hide the different URL.

    • Daniel says:
      January 19, 2016 at 2:55 pm

      @JIM, is it really possible to hide the different URL? Do you use some proxy redirect? I guess you cannot use a regular temporary redirect (302).

      The site I would like to A/B test is accessed through a CDN. I guess this makes it even harder to hide the URL. As soon as the HTML is cached, every user will get exactly the same HTML. Or?

  6. Ivaylo says:
    November 18, 2015 at 3:43 am

    Nice post. Very informative. I have a quick question though: if one has several pages (variations) to test, is it better to test one page against another at a time (A vs B) for, say, 3 weeks, and then spend the next 3 weeks testing another pair (B vs C), and so on?
    Or is it better to run the test all at once, testing one page against all the other variations (A vs B, C, D, E…)?

    In other words, which is more accurate: adding multiple variation pages, or testing one variation page at a time?

    ty

    • Shane Barker says:
      November 20, 2015 at 12:23 pm

      Ivaylo,

      Google Analytics Content Experiments is based on an A/B/N model where you can check multiple pages at a time. In your case, you can start the experiment with all pages at the same time, but to find the best page, make sure to run the test for at least 2 months (more is better) so that you can analyze the data and identify the page with the highest conversion rate.

  7. Alexander says:
    October 30, 2015 at 6:06 am

    Thanks for publishing this post. I found it through BuzzSumo. As I’m working in the CRO area, I found this guide useful for beginners. And I have to say it’s quite difficult to use Google Content Experiments, as it demands participation from the IT team.
    There are a couple of tools for A/B testing, but the best known of them have quite poor GA integration.
    There is an A/B testing tool with a visual editor and full GA integration, changeagain[dot]me. It’s for marketers without coding skills and suits beginners.

  8. Jeffrey says:
    September 25, 2015 at 1:48 am

    Great post.
    Is it possible to add the code via Google Tag Manager?
    I’ve been looking for this feature, but haven’t found it…

  9. sam vanderbempt says:
    September 18, 2015 at 3:39 am

    Hi Shane,

    I was wondering if there’s a way in GA to perform an A/B test on a group of pages. For example, let’s say I have an ecommerce site where you can buy a certain product. I want to test the layout of my product detail page, to try to get more people to click the buy-button. Of course, my website has hundreds of toys, and testing it on one specific product page would not be enough.

    Is there a way to test all the pages in mywebsite.com/toys/… versus mywebsite.com/toys-test/… if all product URLs are identical besides the /toys/ or /toys-test/ directory? It’s easy to change the layout for all my product detail pages at once, but can I also test all of these pages?

    Thanks!

  10. Shalin says:
    August 23, 2015 at 9:24 pm

    Does the second page get indexed and get traffic separately as well? In that case, wouldn’t that be mixed data?

    • Shane Barker says:
      September 26, 2015 at 1:04 am

      Shalin,

      Yes, it could get indexed, but you need to place the meta noindex tag to the test page to keep the data clear and get precise results.

      • Mark says:
        December 13, 2016 at 4:03 pm

        This is incorrect; Google says not to use a noindex tag. https://webmasters.googleblog.com/2012/08/website-testing-google-search.html

        Use rel=“canonical”.
        If you’re running an A/B test with multiple URLs, you can use the rel=“canonical” link attribute on all of your alternate URLs to indicate that the original URL is the preferred version. We recommend using rel=“canonical” rather than a noindex meta tag because it more closely matches your intent in this situation. Let’s say you were testing variations of your homepage; you don’t want search engines to not index your homepage, you just want them to understand that all the test URLs are close duplicates or variations on the original URL and should be grouped as such, with the original URL as the canonical. Using noindex rather than rel=“canonical” in such a situation can sometimes have unexpected effects (e.g., if for some reason we choose one of the variant URLs as the canonical, the “original” URL might also get dropped from the index since it would get treated as a duplicate).

        • Anonymous says:
          March 6, 2018 at 3:19 am

          If I am using canonical with index then what happens?

  11. Mike Henderson says:
    June 3, 2015 at 8:30 am

    We’ve used Google Analytics Content Experiments for all our testing thus far. It’s an easy to use tool, but for advanced testing you’ll want to go with a dedicated testing platform. The nice thing about testing through GA is having all your data in one place. You can also create segments within your test results to see how your A/B test stacks up on mobile, with new visitors, etc.

    Great post Shane!

    • Shane Barker says:
      June 6, 2015 at 5:41 pm

      Indeed, we can use segmentation within A/B testing for better insights. I would surely write a further post on it 😉

      Thanks Man!!!
