
12 Conversion Optimization Trends That Are Completely False

by Neil Patel

Conversion optimizers are great people. But sometimes, they begin following misleading trends, spreading myths, and believing lies.

Some popular study or a well-known blogger comes out with "new advice" or a "shocking result," and suddenly people are doing things that could hurt their conversion rates, skew their test results, or distract them from the conversion work that actually matters.

Here are twelve such trends that could be hurting your conversion optimization.

1. Following someone else’s positive test results will give you positive results. FALSE

It pains me when I see optimizers reading case studies, and then making changes on their website because someone else had a successful split test.

It happens like this. An optimizer reads an article…

[Screenshot: a case-study article reporting a conversion lift after removing a security badge]

The optimizer thinks, “OMG! I’m going to remove the security badge from my site, too!”

He makes the change without testing, and his conversion rates crash and burn.

Following someone else’s test result is misguided. The only test results that you should act on are the test results that you get from your own site, and even those should be carefully implemented.

2. Button color changes are important. FALSE

I'm using "button color" as an example. It could be any number of trivial things: testing the click-through rates of superfluous little buttons, making tiny changes to kerning, or anything else that barely registers.

We all want to see big results from little changes, but you shouldn’t focus on the little things first. Focus on the big things instead.

Focus on making major changes that move your macro conversions, not minor changes that only nudge micro conversions.

3. More testing is always better. FALSE

I’m a believer in quality over quantity. Obviously, the more you test, the more data you’ll get. But just testing for testing’s sake is silly.

More tests are not always better. Better tests are better.

Make sure you’re testing the right things — things that impact conversion rates, truly affect users, and will improve your website.

4. Leaving conversion optimization to the designer. FALSE

Designers design websites. Conversion optimizers improve the conversion rates of websites.

As good as a designer is, he or she should not have to bear the burden of optimizing the site’s conversion rates. Depending on the scope, such a job requires the skills of graphic designers, developers, UX designers, information architects, and engineers.

Conversion optimizers should lead the charge for higher conversion rates, and not leave it in the hands of any one person.

5. Conversion metrics are the only thing that matter. FALSE

Conversion optimizers can easily become obsessed with a single metric: the conversion rate itself.

Conversion rates are very important. Please don’t underestimate their power and significance. But conversion rates are one part of the business numbers game.

There are a lot of factors to keep in mind, each of which can be considered business-critical KPIs. Keep conversion rates as a top-priority metric, but don’t allow them to cloud your vision of other metrics that matter.

What other metrics? It depends on your business, but here are some that I always watch closely (a quick sketch of how two of them fit together follows the list):

  • Revenue
  • Cost-per-acquisition or Customer Acquisition Cost (CAC)
  • Lifetime Value (LTV)
  • Attrition Rate
  • Traffic
  • Ranking
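
To make two of these concrete, here is a minimal Python sketch of how Customer Acquisition Cost and Lifetime Value are commonly calculated and compared. All the figures are hypothetical, purely for illustration:

    # Hypothetical figures for illustration only; plug in your own numbers.
    marketing_spend = 150_000.0  # total acquisition spend for the period ($)
    new_customers = 1_000        # customers acquired in the same period
    arpu_monthly = 40.0          # average revenue per user per month ($)
    gross_margin = 0.70          # fraction of revenue kept after direct costs
    monthly_churn = 0.05         # fraction of customers lost each month

    # Customer Acquisition Cost: what you pay, on average, for one customer.
    cac = marketing_spend / new_customers

    # A common simple LTV approximation: margin-adjusted monthly revenue
    # spread over the average customer lifetime (1 / churn, in months).
    ltv = (arpu_monthly * gross_margin) / monthly_churn

    print(f"CAC: ${cac:,.2f}")          # CAC: $150.00
    print(f"LTV: ${ltv:,.2f}")          # LTV: $560.00
    print(f"LTV/CAC: {ltv / cac:.1f}")  # LTV/CAC: 3.7

A variation that lifts conversion rates but attracts customers who churn quickly can push LTV down and quietly turn a "winning" test into a losing business decision.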

6. 3-Clicks only. FALSE

Conversion rate “best practices” dictate the “3-click rule.” According to the rule, a user will only complete a conversion action if it takes three clicks or fewer. As a general practice, UX designers try to make pages accessible in 3 clicks.

Multiple studies have disproved the validity of the 3-click rule.

[Chart: UIE study of clicks to task completion]

In the test results graphed above, UIE found that some users kept going for as many as 25 clicks before completing their task!

There’s no such thing as a “three-click rule,” especially for conversion rate optimization. Some tests demonstrate that more clicks actually improve conversions.

[Screenshot: control and variation from a test in which the variation with more clicks converted better]

What should you do? Run your own tests.

7. Believing that a test is always right. FALSE

Most of the time, tests provide reliable, actionable results and a clear sense of direction.

Sometimes, however, tests are skewed. Here are some of the mistakes that people make:

  1. Not running A/A tests
  2. Running a before and after test instead of a split test
  3. Running multiple tests at a time
  4. Not spotting false positives in a test
  5. Not segmenting their tests
  6. Testing more than one variable
  7. Testing a page that has no traffic or conversion activity
  8. Testing micro conversions instead of macro conversions
  9. Ending a test too soon
  10. Not integrating analytics (and filtering out their own traffic)
  11. Not coming up with a test hypothesis
  12. Testing at the wrong time

There are plenty of things that can jeopardize your split tests. If you're not testing correctly, you're not getting actionable results. Test your testing process itself, and then you can act on the results with more confidence.
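
To see why mistakes #1 and #4 matter, here is a minimal sketch in Python (standard library only, simulated traffic) that runs many A/A tests, where the two "variants" are identical by construction. Even so, a 95% significance threshold will declare a "winner" in roughly 5% of runs:

    import math
    import random

    def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
        # Two-sided p-value for the difference between two conversion rates.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        if se == 0:
            return 1.0
        z = abs(p_a - p_b) / se
        return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

    random.seed(42)
    TRUE_RATE = 0.05   # both variants share the same real conversion rate
    VISITORS = 2_000   # visitors per variant in each simulated test
    RUNS = 500

    false_positives = 0
    for _ in range(RUNS):
        conv_a = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
        conv_b = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
        if two_proportion_p_value(conv_a, VISITORS, conv_b, VISITORS) < 0.05:
            false_positives += 1  # "found" a difference that does not exist

    print(f"Significant A/A results: {false_positives}/{RUNS} (about 5% expected)")

If A/A tests on your own setup come back "significant" much more often than that, something in your tooling, traffic split, or stopping rule is off.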

8. Statistical significance proves it’s right. FALSE

"Statistical significance" expresses how confident you can be that a test result isn't just random noise. A test run at, say, a 99% significance level has only a 1% chance of flagging a difference that doesn't really exist.

What could be wrong with measuring the statistical validity of a test?

What if your statistical validity threshold is too low? That's the problem with much of the popular testing software: the threshold at which it declares a test "valid" is low enough that you may think a test is conclusive before it actually is.

Instead of relying on statistical validity alone, run a test for at least two weeks and until you reach 100 or more conversions.
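
A sturdier habit is to estimate the required sample size before the test starts and run until you reach it. Here is a minimal Python sketch using the standard sample-size formula for comparing two proportions; the baseline rate and the smallest lift you care about detecting are assumptions you have to supply yourself:

    import math

    def visitors_per_variant(baseline, relative_lift, z_alpha=1.96, z_beta=0.84):
        # z_alpha = 1.96 gives 95% significance (two-sided);
        # z_beta = 0.84 gives 80% power.
        p1 = baseline
        p2 = baseline * (1 + relative_lift)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

    # Example: a 3% baseline conversion rate, detecting a 20% relative lift.
    print(visitors_per_variant(0.03, 0.20))  # about 13,900 visitors per variant

At a 3% conversion rate, that sample works out to roughly 400 conversions per variant, so treat any fixed conversion count as a floor, not a finish line.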

9. Small changes mean big results. FALSE

We all love to read the stories of the “tiny” change that had “explosive results.”

These make for nice stories, but they rarely happen. The problem with such stories is that they lead us to believe we'll get all our conversion lifts from minuscule alterations.

Small changes might give you big results. But be realistic. More typically, it’s the big changes that will get big results.

Test everything, but test the most important things first.

10. CRO is mostly about subtle manipulation tactics. FALSE

Conversion rate optimization isn’t about manipulation. It’s about user experience.

Search engine optimization and conversion rate optimization have both reached the point where success is less about gaming the system and more about providing the best possible experience for the user.

If you can create a better user experience, then your conversion rates will improve. It’s just that simple.

11. The shorter the better. FALSE

Conversion optimizers love brevity:

  • “Short landing pages are the best!”
  • “Short forms are better!”
  • “Short click paths convert higher!”
  • “Short shopping carts are better!”

But what if longer is sometimes better?

Take Salesforce for example. You’d think that a multibillion-dollar company would run a few split tests, or at least realize that long forms kill conversion rates, right? Then why do all their lead forms have seven fields? Isn’t that too many?

[Screenshot: Salesforce lead forms with seven fields]

Maybe they realize that shorter isn’t always better, specifically for their particular audience and product. In their situation, perhaps the longer forms generate more qualified leads and reliable conversions.

Shorter isn’t always better.

The same thing holds true for landing pages. In many cases, long-form pages outperform their short-form controls.

[Screenshot: a long-form treatment outperforming its short-form control]

The bottom line is that you have to test your own site in order to get actionable information.

12. Most conversion practices are just common sense. FALSE

If you think that conversion optimization is just common sense, then you’ll need to rethink this assumption.

Conversion optimization isn’t a result of following “best practices” or one’s natural intuition. It comes from relentlessly testing every facet of your site.

Here's why: every target audience is different. Your customer base is composed of disparate users with a wide range of backgrounds, expectations, and experience levels. You can't simply impose your "best guess" as a conversion practice.

Instead, you need to look at aggregate data that assesses both demographic and psychographic features of your audience.
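
Here is a minimal Python sketch (with made-up visit records) of the kind of segment-level breakdown this implies. Aggregate numbers often hide segments that behave very differently:

    from collections import defaultdict

    # Made-up visit records for illustration: (segment, converted).
    visits = [
        ("mobile / new", True), ("mobile / new", False), ("mobile / new", False),
        ("mobile / new", False), ("desktop / returning", True),
        ("desktop / returning", True), ("desktop / returning", False),
        ("desktop / new", False), ("desktop / new", True),
    ]

    stats = defaultdict(lambda: [0, 0])  # segment -> [conversions, visits]
    for segment, converted in visits:
        stats[segment][0] += int(converted)
        stats[segment][1] += 1

    for segment, (conversions, n) in sorted(stats.items()):
        print(f"{segment:20} {conversions}/{n} = {conversions / n:.0%}")

The overall rate here is 4/9 (about 44%), but mobile newcomers convert at 25%; exactly the kind of gap that stays invisible until you segment.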

When you come up with test results, you’ll be surprised at the things that will upset your hypotheses and go against common sense.

Conclusion

Conversion optimization can be risky business, but only if you believe the myths, follow the trends, and get sloppy.

Done right, conversion optimization is the single most effective way to get more value from the traffic you already have. Watch your step, and optimize carefully. But by all means, optimize!

What conversion optimization trends do you view with skepticism?



Neil Patel

Neil Patel is the co-founder of Crazy Egg and Hello Bar. He helps companies like Amazon, NBC, GM, HP and Viacom grow their revenue.

3 COMMENTS


  1. Benjamin says:
    September 20, 2015 at 6:52 am

    Yes Neil,

    Test, test, test. You can see what works for other people, but what works for them may not work for you! You have to evaluate what's best for your website's conversions by testing for yourself. A lot of people think one size fits all, but as this article shows, that's not the case.

    Thanks for this; I'll keep testing to see what works best for my site. 🙂 Take care.

  2. Ashik says:
    September 18, 2015 at 11:34 pm

    Hey, Neil

    Amazing piece of work. This post reminded me of the old mistakes I used to make on my blog.
    I didn't know about these CRO misconceptions until I learned them the hard way. I blindly implemented the big brands' CRO strategies and watched for any increase in conversions. There was no big surge in the graph; if anything, it decreased.

    My recommendation:
    Never blindly copy conversion strategies from the big brands. It won't work unless you're lucky!

  3. Peep says:
    September 18, 2015 at 8:32 pm

    “Instead of relying on statistical validity alone, run a test for at least two weeks and until you reach 100 or more conversions.”

    Sorry Neil, but this is horseshit. There are no magic numbers like 100 or 200 or whatever. It's math, not magic. In most cases, 100 conversions is going to be WAY too little, and it's never a fixed number.

    1) Calculate the needed sample size ahead of time, and run the test until you reach it
    2) Test for 2-4 business cycles
    3) Now look at the statistical significance and power level

    When to actually stop an AB test:
    http://conversionxl.com/stopping-ab-tests-how-many-conversions-do-i-need/
