Sadly, a lot of conversion optimizers are wasting their time.
They spend valuable resources on conversion optimization strategies that don’t get results.
In the worst cases, their efforts aren't just fruitless; they're actively producing negative results!
This is not just another myths-and-mistakes article. What you're going to find out are three simple yet detrimental fallacies that conversion optimizers fall prey to.
If you’ve been dabbling in conversion optimization for any time at all, you may have experienced at least one of them already.
Read and heed!
Dumb Strategy 1: Major site redesign.
A lot of optimizers I know like to start from the ground up.
They want to wipe the slate clean, install a new theme, make massive changes, and create a site that is completely primed for torrents of conversions.
This sounds good, but usually I advise against it.
Bad things could happen. What kinds of bad things?
Bad thing number 1: You lose traffic.
What good is your conversion optimization strategy if you don't have any traffic? Some sites experience massive traffic drops after a redesign.
Also, you can't resume conversion rate testing until you have enough traffic to reach statistically significant results.
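To make "statistically significant" concrete, here's a minimal sketch of a two-proportion z-test in Python. The visitor counts and conversion numbers are hypothetical; the point is that the same conversion rates that are significant at 10,000 visitors per version tell you nothing at 200.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test.
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from normal tail
    return z, p_value

# 2.0% vs 2.5% with 10,000 visitors per arm: significant (p < 0.05)
z, p = ab_test_z(200, 10_000, 250, 10_000)

# The same rates with only 200 visitors per arm: not significant
z_small, p_small = ab_test_z(4, 200, 5, 200)
```

A redesign resets this clock: until traffic recovers, you can't collect enough samples for the test to say anything.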
Bad thing number 2: You lose any ground you gained with split testing before.
If you were testing your site before, any data you had from those split tests is now rendered useless.
Most A/B tests look at single variations between pages. Take the page below as an example. The treatment example has a different headline. That’s it! The result? A quantifiably higher click-through rate.
If you redesign your site, all that data is now useless. You basically have to throw it in the trash.
Take this example from Optimizely. They tested a website for two years.
All those variants were implemented iteratively over a long time.
As they kept running tests, the conversion team could see exactly which variants were driving the most conversions.
What if they decided to change?
“You know, guys. Let’s just scrap this. I have an idea for a new site design!”
Bam. They would lose all that information to the whim of a redesign.
That’s a bad idea.
Bad thing number 3: You have to start from the ground up with split testing.
Here’s the good thing about split testing.
It increases in value over time.
Think about it. When you run one test, that's great. You may gain a little ground and a few more conversions. Once you implement the results of the test, that ground is gained permanently. And you know for a fact that variant B outperforms control A.
So you run another test. Again, you get results, implement changes, and your conversion rates improve.
This keeps happening, again and again. Eventually, you create a high performing site that is built on massive amounts of testing data.
Look at what WiderFunnel discovered when they measured the cumulative impact of split testing.
The 36.8% comparison in the right column is a test that WiderFunnel ran on the example website. They compared the very first version of the site (before conversion optimization) with the very latest version of the site (after several split tests).
What they discovered was that the cumulative impact of the tests (32.8% increase in conversions) roughly matched the overall difference between original control A and latest variant B (36.8%).
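This is why the cumulative number lands close to the end-to-end comparison: sequential lifts compound, because each winning variant becomes the new baseline for the next test. A quick sketch with hypothetical per-test lifts (WiderFunnel's individual test figures aren't broken out here):

```python
# Hypothetical lifts from three sequential winning tests.
lifts = [0.10, 0.08, 0.12]

cumulative = 1.0
for lift in lifts:
    cumulative *= 1 + lift  # each winner becomes the new baseline

# Three modest wins compound to roughly a 33% overall lift,
# not just the 30% you'd get by adding them up.
print(f"Cumulative lift: {cumulative - 1:.1%}")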
What did they do? They kept testing for six more months.
Guess what happened. They kept gaining conversions!
So, they did it again for another year.
If you decide to redesign your site, you might lose valuable data like this and be forced to start over.
You have no data on the new site’s performance except for the fact that it might look nicer. But does this guarantee higher conversions? Absolutely not.
Bad thing number 4: You don’t have data to show that the original design is superior.
Most of the time, businesses decide to redesign a website for all the wrong reasons:
- We need a fresh look!
- I found a new template!
- The stakeholders didn’t like it!
- The CEO got back from vacation and got inspired to redesign it.
- Our competitor is doing X.
- It looks outdated.
- I like pink!
The idea that you need to completely redesign your website now and then is a myth.
Incremental changes based on split test data, however, are smart.
There are times when you probably do need to redesign your site. For example, if your site is not mobile friendly, you need to make it mobile friendly. If your site is stuck in a 1980s-style template with agonizing UX, then you should probably redesign it.
What if a redesign is required? Is there a solution?
Solution 1: Evolutionary Site Redesign (ESR).
Make continuous improvements to the site over a long time, rather than wholesale changes at once.
Let me just warn you. This process could take a really long time.
Besides, you might not be able to pull off an instant pivot when you need one. For example, if your entire site needs to adopt responsive design, you'll need to make a significant change, not an evolutionary one.
Those exceptions notwithstanding, conventional wisdom holds that a site should be redesigned once every five years.
WiderFunnel compares the ESR model to the traditional model, i.e., designing a new site every five years.
As they point out, the problem with this approach is that your site could suffer underperformance for a period of time following the redesign.
That’s why the sequential, evolutionary, iterative approach to design improvements is superior. You risk less and are likely to gain more.
Solution 2: Create a new version, split the traffic, and test improvements.
The solution here is to build a new version of the website and run it concurrently with the original. Instead of switching over completely, test the new site's performance by siphoning some of your traffic to the new version.
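In practice, the siphoning is usually done by deterministically bucketing visitors, so each person always sees the same version. Here's a minimal sketch of hash-based bucketing; the visitor IDs and the 20% traffic share are assumptions for illustration:

```python
import hashlib

def assign_version(visitor_id: str, new_site_share: float = 0.2) -> str:
    """Deterministically bucket a visitor so they always see the same version.
    `new_site_share` is the fraction of traffic sent to the redesign."""
    digest = hashlib.sha256(visitor_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "redesign" if bucket < new_site_share else "original"

# The same visitor always lands in the same bucket:
assert assign_version("visitor-42") == assign_version("visitor-42")
```

Hashing (rather than random assignment per page view) matters: a visitor who bounces between versions would contaminate the comparison.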
What you discover might please you.
The following landing page redesign was radical, but they got the conversion lift that they were looking for.
Maybe your conversions will go up. Maybe they’ll go down. But at least you’ll know.
Are you still itching to redesign your site? Okay.
Let me close off this section with one final example:
Google is the most popular website on the Internet.
They launched in 1998.
How much has their homepage changed since the very beginning?
Not a whole lot.
You don’t need massive changes to make major improvements. Sometimes, small and constant changes are best.
In other words, treating a full redesign as your default optimization move is poor conversion strategy.
Dumb Strategy 2: Not heatmap testing.
A lot of conversion optimizers are not using the right kind of data to analyze and improve conversions.
I strongly encourage every marketer, designer, UX specialist, information architect, and conversion optimizer to run the following types of tests:
Test 1. Heatmap or click visualization
This test shows you exactly where people are clicking on your website.
Test 2. Scroll analysis
A scroll map shows how far down the page your visitors are scrolling.
There is an additional type of heatmap testing that measures users’ eye movement. You’ve probably seen this kind of image before. It shows where users spend the most time looking on a page.
Eye tracking studies have to be performed with special equipment and volunteer users. They can be difficult, expensive, and time-consuming to run, and the costs often outweigh the benefits.
Instead of running your own eye tracking studies, read up on what kind of improvements could make a difference.
That being said, any site that is not implementing heatmap (click, scroll) testing as part of their regular process is making a mistake.
Let me explain why.
If you look at the chart below, you'll see that when one website ran a click test on their homepage, they discovered that 12% of the clicks on their homepage were on a single image.
So what, right?
That image had no link.
It was a dead image. Nothing happened. It didn’t guide the user down the funnel; it didn’t help them engage with the site. Instead, the image with no link may have frustrated the user.
How much revenue did those thousands of wasted clicks cost the company?
I don’t know. What I do know, however, is that failing to conduct heatmap tests is a major mistake.
The same thing could be happening on your site: people scrolling past or clicking in the wrong places, which equates to missed opportunity and possibly lost revenue.
There’s only one way to find out: test it.
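If your heatmap tool exports raw click data, spotting dead elements like that image is straightforward. Here's a minimal sketch; the click log, element names, and clickability flags are all hypothetical:

```python
from collections import Counter

# Hypothetical click log exported from a heatmap tool:
# (element_id, is_clickable) pairs, one per recorded click.
click_log = [
    ("hero-image", False), ("signup-btn", True), ("hero-image", False),
    ("nav-pricing", True), ("hero-image", False), ("signup-btn", True),
]

clicks = Counter(element for element, _ in click_log)
clickable = {element: ok for element, ok in click_log}

# Flag "dead" elements: clicked by visitors, but not linked to anything.
dead = {el: n for el, n in clicks.items() if not clickable[el]}
print(dead)  # the hero image attracts clicks but goes nowhere
```

Each entry in `dead` is a candidate fix: either make the element clickable or redesign it so it stops looking like a link.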
Dumb Strategy 3: Exclusively testing micro conversions.
There are two main types of conversions: micro conversions and macro conversions.
What’s the difference?
- Micro: little goals
- Macro: big goals
Micro conversions are little conversion goals. For example, if a user clicks on the orange button for my webinar, that’s a micro conversion.
The hope is that a micro conversion now will lead them to an eventual macro conversion later.
Macro conversions give you leads or, if you run an ecommerce site, revenue from sales.
If you've ever shopped on Amazon, then you've probably completed their macro conversion, i.e., placing an order.
Along the way to your macro conversion, you may have completed some micro conversions — looking at images, reading reviews, checking out special offers, etc.
Search Laboratory breaks down three main differences between micro and macro conversions:
Now, before I proceed, I want to make a very important point:
You should test both types of conversions.
Why? Because both matter.
Micro conversions matter, because they lead to larger goals.
Macro conversions matter, because they are the larger goals.
However, here’s where we have a problem.
It’s quick, easy, and fun to test micro conversions.
Making a quick change to a button color, an email capture, or social sharing boxes is not a huge deal. You can test it, get results, and move on.
Yes, but what overall impact did those micro conversions make?
Maybe a better question is this: Did you improve your revenue?
A lot of optimizers are spending their days mucking around with micro conversions. What do they have to show for it?
If your conversion optimization methods are not contributing to the overall revenue of the business, then you are wasting your time.
Harsh, but true.
The solution here is simple: test micro conversions and macro conversions.
Avinash Kaushik has an old but helpful article on testing both types of conversions. He dives into the issue from a Google Analytics perspective, which is his jam.
Here’s what he writes:
Focus on measuring your macro (overall) conversions, but for optimal awesomeness identify and measure your micro conversions as well.
It’s helpful to visualize the conversion process as a path.
Macro conversions come at the end of the path. That’s why you should test them.
Micro conversions pave the way to macro conversions. That’s why you should test them both.
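Measuring both is mostly a matter of tracking each step on that path. Here's a minimal sketch with hypothetical event counts; the step names and numbers are invented for illustration:

```python
# Hypothetical event counts along one conversion path.
funnel = {
    "visited_product_page": 10_000,  # entry point
    "viewed_images": 6_500,          # micro conversion
    "added_to_cart": 1_200,          # micro conversion
    "placed_order": 400,             # macro conversion
}

# Step-to-step rates show where the path leaks.
steps = list(funnel.items())
for (prev, prev_n), (step, n) in zip(steps, steps[1:]):
    print(f"{prev} -> {step}: {n / prev_n:.1%}")

# The macro rate is what ties your testing back to revenue.
macro_rate = funnel["placed_order"] / funnel["visited_product_page"]
print(f"Macro conversion rate: {macro_rate:.1%}")
```

If a micro-conversion test moves a step rate but the macro rate doesn't budge, that's your signal you've been optimizing something that doesn't matter.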
Conversion optimization can be one of the best things that ever happened to your business.
Believe me. I’ve tried and won.
In a word, conversion optimization is awesome.
But just because a marketing technique is awesome doesn’t mean that it’s foolproof. Conversion optimization will give you more revenue only if you’re doing it right.
To recap, here are the three dumb strategies that you should stop doing right away:
- Major site redesign
- Not heatmap testing
- Testing only micro conversions
Have any others to add to the list? Let me know the strategic mistakes you’ve seen while conversion optimizing.