A term used to describe test methods or algorithms that continuously shift traffic in reaction to the real-time performance of the test.
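The behavior described here matches adaptive, "bandit-style" allocation. A minimal epsilon-greedy sketch of the idea, where the variation names, running totals, and 10% exploration rate are illustrative assumptions rather than anything prescribed by the text:

```python
import random

def epsilon_greedy(stats, epsilon=0.1):
    """Pick a variation: usually the current best, sometimes at random.

    `stats` maps variation name -> (conversions, visitors).
    With probability `epsilon` we explore a random variation; otherwise
    we exploit the one with the highest observed conversion rate, so
    traffic continuously shifts toward the better performer.
    """
    if random.random() < epsilon:
        return random.choice(list(stats))  # explore
    # exploit: highest observed conversion rate so far
    return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))

# Hypothetical running totals: B is converting better, so it gets
# roughly 90% of the traffic under a 10% exploration rate.
stats = {"A": (30, 1000), "B": (48, 1000)}
choice = epsilon_greedy(stats)
```

With `epsilon=0` the function becomes purely greedy and always returns the current leader; raising `epsilon` trades conversion during the test for faster learning about the weaker variations.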
We lied to you. For years, we, as providers of an A/B testing tool, told you it was easy. We built a visual editor and pretty graphs, and we gave you wins on engagement or a lower bounce rate, but we did not really contribute to your bottom line.
Most A/B test and conversion optimization ideas have their beginnings in web analytics reports, and countless types of reports can provide inspiration for meaningful A/B testing. Even so, it is extremely hard to come up with a successful test hypothesis using quantitative data alone.
If you’re reading this post, you already know how CRO (“conversion rate optimization”) can help you increase revenues and create better customer experiences. The problem now is: how do you decide what to test?
Confidence Interval: A range of values, calculated from sample data, constructed so that there is a known probability that the true value of the parameter lies within it.
Confidence Level: The percentage of time that a statistical result would be correct if you took numerous random samples.
Margin of Error: An expression for the maximum expected difference between the true population parameter and a sample estimate of that parameter.
Sample Size: The number (n) of observations taken from a population, from which statistical inferences about the whole population are made.
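The four definitions above can be tied together in one short calculation. The sketch below, using only the standard library, computes a normal-approximation confidence interval (and its margin of error) for an observed conversion rate, plus a simplified required sample size; the 5% baseline, 1% detectable difference, and 95% confidence level are illustrative assumptions, and the sample-size formula ignores statistical power for brevity:

```python
from statistics import NormalDist

def conversion_ci(conversions, visitors, confidence=0.95):
    """Normal-approximation confidence interval for a conversion rate."""
    p = conversions / visitors
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # critical value, ~1.96 at 95%
    margin = z * (p * (1 - p) / visitors) ** 0.5    # margin of error
    return p - margin, p + margin

def required_sample_size(baseline, mde, confidence=0.95):
    """Visitors needed so the margin of error shrinks below `mde`,
    an absolute difference from the `baseline` rate (ignores power)."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    return int((z ** 2 * baseline * (1 - baseline)) / mde ** 2) + 1

# 120 conversions from 2,400 visitors: a 5% observed rate.
low, high = conversion_ci(120, 2400)
n = required_sample_size(0.05, 0.01)
```

Note the relationships: the confidence level sets the critical value z, z and the sample size determine the margin of error, and the interval is just the observed rate plus or minus that margin.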
The debating, campaigning and speculating will soon be over. In just a few short weeks, our new President will be elected. Somehow, as this long and arduous campaign ground on, the parallels between website A/B testing and the election became apparent to me.