How To Create An A/B Testing Research Framework For Faster Iterations & More Wins

by Tomi Mester

Last updated on September 19th, 2017

Here’s the most common problem I see when it comes to conversion rate optimization (CRO):

Not putting enough energy into conducting the proper initial research into what to test.

One round of user tests, then change the website.

One heatmap, then change the website.

One cohort analysis, then change the website.

This can lead to very bad testing habits.

When you are trying to improve your conversion rate, you should do multiple types of research. You should be using:

  • Qualitative methods (e.g., usability testing, 5-second testing, surveys, etc.)
  • Quantitative methods (e.g., heatmapping, cohort analysis, funnel analysis, segmentation, etc.)
  • And of course, A/B testing


More than that: you need to have a research strategy!

qualitative-quantitative method

In this article, I’ll show you mine. I call it the “Online Research Framework”, and I’ve been using it for years on each and every CRO project. I didn’t develop it by myself. Over time, I’ve talked to several data analysts and scientists at many online companies, and I have found that this is what most of them (most of us) are using.

Let’s see how it looks!

The Framework – a Mix of Quantitative and Qualitative Methods

Here’s the whole process in one picture:

Research cycle

You can think about it as a round of “CRO sprints.” It has six steps:

  1. Qualitative Research
  2. Quantitative Research
  3. Brainstorming
  4. Qualitative Testing
  5. A/B Testing
  6. Implementation

Let’s go through them step by step.

Step 1. Qualitative Research

If you have ever performed data analysis, you know that sometimes it’s really hard to know where to start. So much data, so much information… you need a hypothesis, or at least a hunch.

I find it very handy to get these first hunches directly from the users. I usually start my CRO projects with usability testing. It’s as simple as this: invite one or more of your users/visitors to your office, show them your website or online product, and monitor how they interact with it.

It’s really easy to spot bugs and UX issues this way. Usability testing is considered a qualitative method because you are working with small sample sizes (usually 4-5 users per round) rather than digging into thousands of rows of spreadsheet data.

However, you can get more detailed information than if you were to start out with quantitative research. By the end of your research, you will have a list of 20-30 possible issues — and this list will help you a lot in starting your data analysis.

Finally, you can (and should) try to survey your visitors/users and paying customers. Surveys tend to be easier to set up than usability tests and can provide more responses. But again, do both if you can.

A good question to ask your website visitors is:

“Are you having trouble finding what you’re looking for today?”

A question like this can clue you into possible navigational or website clutter issues.

As for paying customers, surveying them can be even more valuable. For example, you should ask the question:

“Why did you decide to make a purchase from us?”

This question will clue you into the real value propositions you provide paying customers. The responses you get can help improve your home page copy and other conversion-driving language.

If you’re looking for tools to run surveys you should check out:

  • Qualaroo – A really easy-to-use tool that allows you to ask “pop-up” survey questions to your website visitors.
  • Survey Monkey – Allows you to create various types of surveys. Its biggest benefit is that it doesn’t require a web developer to set up.
  • Google Forms – Similar to Survey Monkey, just less polished (more for the DIY crowd).

I know I’m being very website-focused in this article, but the same methodology can be applied to apps as well.

Step 2. Quantitative Research

If usability testing is there to find bugs, UX issues, or funnel issues, then data analysis is there to prove or disprove them.

Let me give you an example.

Let’s say that during your research, two users didn’t recognize your “pricing” menu at the top of your site. This might be a one-time accident, but there is a good chance it will happen to more of your users.

At this point, it’s a good idea to run a website heatmap and find out exactly how many people clicked that button.

You can add some perspective by analyzing the data in your Google Analytics account too. If you have access to your database (e.g. in SQL or plain-text format), this is the right time to do some correlation analysis too. With that, you can find out if it’s helpful for your visitors to see your pricing page before they go to register, or if it’s actually holding them back.

Regardless of the tool you use, data will be your ultimate evidence about your ongoing conversion issues. Does the data support the initial hypotheses you developed during your qualitative research phase (Step 1)? Nothing has to be conclusive yet; it’s just a good idea to start making notes about these types of associations.
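To make this concrete, here’s a minimal sketch of that kind of correlation check, assuming you can export visitor-level data with a flag for whether each visitor saw the pricing page and whether they registered. All the numbers below are made up for illustration:

```python
# Hypothetical visitor-level export: (saw_pricing_page, registered)
# These rows are invented for illustration only.
visitors = [
    (True, True), (True, False), (False, False),
    (False, True), (False, False), (True, True),
    (False, False), (False, False), (True, False),
    (False, False),
]

def conversion_rate(rows):
    """Share of visitors in `rows` who registered."""
    return sum(registered for _, registered in rows) / len(rows)

saw = [r for r in visitors if r[0]]
did_not_see = [r for r in visitors if not r[0]]

rate_saw = conversion_rate(saw)
rate_not = conversion_rate(did_not_see)
lift = (rate_saw - rate_not) / rate_not  # relative difference

print(f"Saw pricing page:   {rate_saw:.0%}")
print(f"Didn't see pricing: {rate_not:.0%}")
print(f"Relative lift:      {lift:+.0%}")
```

Even a large gap between the two rates only shows correlation, not causation; that’s what the A/B test later in the framework is for.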

 Website heatmap

Step 3. Brainstorming

“Data beats opinion!”

Or does it?

As I see it, data should not beat opinion; it should inform it.

When you have your results from the previous two steps of research, I recommend you pull your co-workers into a meeting room, present the top issues to them and run a brainstorming session. Designers, UX people, marketers, product owners, and everyone else you work with are all going to have insights you might not expect.

Let’s stick with the “users don’t find the pricing page” example. Suppose you found that only 5% of visitors click through to the pricing page, but visitors in that 5% are 200% more likely to register.

That looks like low-hanging fruit, right? Well, not necessarily. You should always keep in mind that correlation isn’t causation. That’s why A/B testing will be important!

During these workshops, different team members will come up with different ideas:

  • “Let’s emphasize the pricing button with another color”
  • “Let’s move the pricing button on the left menu bar”
  • “Let’s change the font-size”
  • “Let’s rename »pricing« to »plans«”
  • Etc…

And that’s OK. The more creative ideas you have, the better. You don’t have to decide right now: that’s why we have A/B testing.

Step 4. Qualitative Testing

But before you run an A/B test, consider that you probably won’t be able to test 10-20 versions in a short amount of time.

There are two obstacles:

  1. Unless your website gets a ton of daily traffic, you won’t have enough traffic to A/B test 10-20 versions. In my experience, an average-sized e-commerce store can usually get significant results from 3-4 variations of a webpage over about one month of A/B tests. You can calculate it for yourself here.
  2. Every variation comes with a development cost in terms of coding, implementation, and so on.
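To get a feel for the first obstacle, here’s a rough per-variation sample-size estimate using the standard two-proportion formula. The baseline conversion rate and the lift you hope to detect below are made-up assumptions; online sample-size calculators implement the same math:

```python
from statistics import NormalDist

def sample_size_per_variation(baseline, mde, alpha=0.05, power=0.8):
    """Rough number of visitors needed per variation to detect a
    relative lift of `mde` over a `baseline` conversion rate
    (two-sided two-proportion test)."""
    p1 = baseline
    p2 = baseline * (1 + mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for 95%
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Assumed example: 3% baseline conversion, hoping to detect a 20% relative lift.
n = sample_size_per_variation(baseline=0.03, mde=0.20)
print(f"~{n} visitors needed per variation")
```

With numbers like these, each extra variation costs you thousands of visitors, which is why filtering drafts with cheap qualitative tests first pays off.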

You can be smart about this if you do a few initial 5-second tests before you conduct your A/B tests. In these tests, you show each of your new designs to potential users for 5 seconds only. After the five seconds are up, you can ask: “Where was the pricing button on the page?” or “Was there a pricing button on the page?” or even “Can you recall the top menu buttons?” It’s usually pretty clear whether or not your new mock-ups are heading in the right direction.

This is a quick-and-dirty test method, but even with a small sample size you can filter out the worst drafts and put the best into an A/B test to find out their real performance.

Step 5. A/B Testing

Finally, we have reached the moment of truth.

A/B testing is the ultimate weapon to tell you if a new design (or new copy) works better than the original or not. If you are not familiar with the concept, you can find a nice summary of it here.

At the end of your A/B test you will have a clear answer: “if you change the color of the pricing button to red, it will deliver 40% more click-throughs and 27% more registrations than the original version with 99% statistical significance.”
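As a sketch of what “statistical significance” means here, this is the classic pooled two-proportion z-test applied to made-up visitor and conversion counts:

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using the pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented example: 10,000 visitors per version; the control converts
# 300 of them, the red-button variant converts 380.
p = two_proportion_p_value(conv_a=300, n_a=10_000, conv_b=380, n_b=10_000)
print(f"p-value: {p:.4f}")  # significant at the 95% level if p < 0.05
```

Most A/B testing tools run this kind of calculation (or a Bayesian equivalent) for you; the point is that the verdict comes from the math, not from eyeballing the raw percentages.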


The idea is that this framework will help you produce more wins than spinning your wheels on lots of A/B tests that produce losers. However, a losing A/B test hypothesis isn’t the end of the world. Log it carefully in your ongoing testing research: there is a lot of value in a “failed” test for future testing iterations. Be sure to review them every few months.

Step 6. Implementation

When you are done with all the previous steps and you have hard evidence that your changes will work, then (and only then) you can implement the changes! It was a long process, but now you can be confident that it will pay off.

Some additional thoughts – CRO vs. user research

Wrapping up the above, here is the Online Research Framework again:

Research cycle

When I look at this flow, I always have two principles in mind:

  1. When it comes to conversion rate optimization, I like to think in iterations. This is a never-ending project: you will always find something that you could do better. If you don’t, you are not trying hard enough.
  2. The aim of a “conversion rate optimization” project is never actually “to have a better conversion rate.” Sounds odd, I know. But your first intention should always be to understand your users/visitors! A secondary outcome of that might be a better conversion rate too, but your users should be your focus.


You don’t have to use the same framework that I do. And you don’t have to use all the above-mentioned methods either. But I definitely encourage you to do multiple types of research and to be more strategic when you are doing CRO projects.

Hope this article helped you!


About the Author:

Tomi Mester is a data analyst and researcher. He’s the author of the Data36 blog, where he gives a sneak peek into online data analysts’ best practices. He writes posts and tutorials on a weekly basis about data science, A/B testing, online research, and data coding. Find him on Twitter.

