What Conversion Experts Wished You Knew About Optimization

Conversion optimization, while growing in popularity, is still largely misunderstood.

As I see it, there is a small number of people who really understand conversion rate optimization (CRO). Then there’s another small group who really don’t understand it, but are still publishing blog posts about it.

This creates confusion for the vast majority of people – those who are interested in increasing their revenue through CRO, but have no idea how to do it because of misinformation and conflicting advice.

Being at ConversionXL, I get to meet and chat with the most experienced optimizers in the world. Through our agency, I also get to see where the gap is between what we (the optimization specialists) know about CRO, and what we wish business owners and executives knew.

Bridging this communication gap can only be a positive thing – creating a more efficient client/agency relationship, educating in-house optimization teams, and, as a result, creating more wealth for companies willing to invest in a CRO process.

In addition to my own experience at ConversionXL, I talked to some of the most knowledgeable optimizers I know for this article.

Here are 6 things optimizers wish business owners knew about conversion optimization.

1. CRO Case Studies Are Lying to You

Look, if an A/B testing case study opens the door to CRO for you, then I appreciate that value. But just know, almost all conversion optimization tips and case studies are BS.

Andrew Anderson, Head of Optimization at Recovery Brands, put it even more bluntly, saying “case studies are almost all frauds.” As he explains:

“As is the case with most disciplines, the case studies you see thrown about are just noise and empty marketing.”

The same can be said about the loudest industry "experts" as well. While we appreciate that you get interested in optimization from them, following them blindly is the fastest way to fail.

Why, you may ask, do you have to be skeptical of case studies?

Most don’t supply full numbers, so there’s no way of knowing the statistical rigor of the test. I’d bet a great deal of them are littered with low sample sizes and false positives. It’s deceptive, and it may not even be intentional; the author just doesn’t know what they don’t know.

If the results seem implausible, many times it’s because they are (source)

 

And of those that are legit: how many losing tests did they run before they got that result? Often it’s the insights gained from the losing tests that lead to a strong winner. But because these case studies only report the wins, you’re looking at a pure example of survivorship bias.

Image via MrMoneyMoustache

 

Conversion optimization is also highly dependent on context.

Even if the test was based on good statistics, you’re ignoring context. Different traffic sources, different branding, and different customer segments behave differently and respond to different strategies. One thing that works for a case study might tank your conversion rates, simply because you’re working toward the goal without understanding the process that leads to it.

The best-case scenario for CRO case studies is that you:

  • Remain skeptical about the validity of the results.
  • Use them only as inspiration, not as assumptions to carry into your own tests.

Another point (and something we’re working on at ConversionXL Institute) is that the process behind the A/B test (the resources required, the methodology, the research involved, and the iterations attempted) is more valuable than the end result.

2. You Probably Won’t Get Many Massive Wins (Unless Your Site Is Really Awful)

At conferences, when someone gets on stage and talks about their last 400% win (usually from something silly like changing a button color), those in the audience who know better just roll their eyes.

It’s really a misconception that conversion optimization is that sexy. Sure, it is a process that systematically improves the health of your business (a sexy idea to me), but it’s no silver bullet, and you almost certainly aren’t going to triple your revenue with one A/B test.

These success stories are ridiculously self-serving, and mostly serve as public relations for the startup or agency that is publishing them.

Experiment Engine data shows that, instead, you should “get used to ties,” because they’re much more common than the outlandish uplifts.

Image via ExperimentEngine

 

“This backs the anecdotal feedback that most tests are flat,” says the company. “Which means getting one of those can’t believe it, incredible stories of increasing your conversion rate by 300% is a much less common occurrence than you’d be led to believe.”

When your site is terrible, the wins are larger and easier to attain. When you get to a certain point, the optimization becomes much more incremental – which isn’t a bad thing. Those incremental gains add up, and the net effect is similar to compound interest.
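To make the compound-interest analogy concrete, here is a tiny sketch. The 2% monthly relative uplift and 3% baseline conversion rate are hypothetical figures chosen for illustration, not numbers from any study cited here:

```python
# Illustrative only: how small, incremental CRO wins compound over a year.
baseline_rate = 0.030          # starting conversion rate: 3.0% (hypothetical)
monthly_uplift = 0.02          # assumed 2% relative improvement per month

rate = baseline_rate
for month in range(12):
    rate *= 1 + monthly_uplift  # each month's gain builds on the last

print(f"After 12 months: {rate:.4f} ({rate / baseline_rate - 1:.1%} cumulative lift)")
```

Twelve "boring" 2% wins stack into a lift north of 25% – the kind of result a single silver-bullet test almost never delivers.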

As Andrew Anderson put it,

“[optimization] is not sexy – doing optimization right means going away from what you think is important or what looks good...

...It’s about the grind of finding what matters most, giving it to users, and then continuing to go down the right paths. It means doing a thousand small things and not some big project or just some simple tweaks.”

Which leads me to my next point: conversion optimization is iterative.

3. Optimization Is Iterative

Iterative testing is the name of the game.

As Marie Polli, Senior Conversion Strategist at ConversionXL, says, “It’s highly unlikely that the first variation tested is the best option out there. For every hypothesis, there are a ton of different executions. They shouldn’t give up just because the first variation lost to the original.”

To illustrate, let’s say you create an experiment. Through some combination of heuristic analysis, analytics, and user feedback, it seems customers are concerned about the quality of the product and the lack of social proof. So you add some testimonials and run a test.

The result: no difference. So – does that mean your hypothesis was wrong?

There’s a chance, but the real question you should ask yourself is this: how many different ways are there to add social proof to a page? How many different ways could we design a treatment addressing social proof?

The answer: infinite.

No one has a crystal ball, so it takes iteration to reach results.

On the same note, those new to optimization often get super disappointed with inconclusive test results. However, it’s necessary to push through and realize there’s no silver bullet.

Image via FindingBetterAgencies

 

Brian Massey, co-founder of Conversion Sciences, says one of the biggest myths in optimization is that “If thine testing worketh not in the first months, thy site is not worthy to be optimized.”

In addition, he argues:

“Most nascent website optimization projects die a quick death in the first months when tests fail to deliver results. This is unfortunate for the companies that end up walking away, and very good for their competition.
The conclusion of nervous executives is, ‘Optimization doesn’t work for us,’ or ‘Our site is already awesome.’ My answers are, ‘It will and it’s not,’ in that order.”

A large part of optimization is the discovery of what actually matters. When you think of it that way, inconclusive tests are much more valuable than they first seem.

Andrew Anderson’s Discipline Based Testing Methodology similarly emphasizes testing for discovery. His approach really takes the pressure off of a first-test victory and places the value in discovery, efficiency, and different executions:

“What does “inconclusive” really mean? Is it just that you didn’t get the answer you were hoping for? Or does it mean that the thing you are testing has little to no influence?
Knowing something has little influence is incredibly valuable, so that is far from inconclusive...
...Does copy matter on this page? Well, if I have tested out a large beta range of 10 options, and they all fail to move the needle, then I can be pretty sure copy doesn’t matter. Likewise, if eight of them fail to move the needle but two do, that tells me it is the execution.”

Of course, if your test hypothesis is based on a guess or “let’s just try it,” then move on to testing something else. But if you’ve done the research, it might just take more patience and iteration.

Further reading: What Do You Do With Inconclusive A/B Test Results?

4. All Wins Are Perishable

A win today might regress to the mean tomorrow. Why? The world changes. As much as we’d like to control for all factors, you can’t predict how the weather, pop culture, social media, or public relations will change your conversion rates.

I love how Matt Gershoff, CEO of Conductrics, put it in the Digital Analytics Power Hour podcast: All Ideas Are Perishable.

That essentially means that optimization never ends (or as Brian Balfour put it, “growth is never done.”)

“The need for CRO is ongoing,” says Marie Polli. “As competitors change, the marketplace evolves and so do your customers’ needs. It’s OK to take a break every now and then, but the mindset should be set to make CRO part of the ongoing workflow in the company.”

You might also know this as Tactic Fatigue or the Law of Shitty Click-Throughs. Essentially, over time, growth tactics fade in effectiveness, so you always need to be looking for that edge. You always need to be experimenting.

Image via Brian Balfour

 

“[Optimization] is not something you just do once,” according to Andrew Anderson. “It takes an ongoing attitude and a very strong discipline to do the right thing to get results. Do it right and it will be the leading driver of your company’s growth. Do it as just a test or two, and it will barely be a blip on your radar.”

Ton Wesseling once told me that one of the biggest mistakes a company can make is treating optimization as more optional than other marketing spends. The perception is that pausing it merely freezes your ROI, whereas pausing other acquisition channels (PPC, SEO, etc.) actively drops your traffic.

The thing is, optimization is a program – a discipline – not a tactic. If you invest in it for the long term, and build a strong culture of experimentation, you’ll reap the rewards.

5. Keep Calm and Test On

According to Marie Polli, “business owners are too hasty, which is understandable, but can result in a situation where tests get closed too early and false positives get implemented.”

Or as Judah Phillips put it, “like fine wine, conversion optimization takes some time to mature.”

We know that you want to move quickly. Optimizers want to move quickly, too. But math doesn’t bend itself to the demands of a marketing manager or a CEO (if only).

If you run a test poorly – stopping it early, for instance – you’re taking the very real risk that your data is inaccurate. And why are you running controlled experiments, anyway, if you don’t care about the integrity of the data?

A/B tests are notoriously fickle at the beginning. Results change rapidly early on, so it’s not a good idea to call it quits before the numbers stabilize.
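To see why "peeking" and stopping early is so dangerous, here is a rough simulation (my own sketch, not from any study cited here). Both variants convert at an identical 5%, so by construction every "significant" result is a false positive – yet checking for significance after every 100 visitors declares a winner far more often than the nominal 5% error rate:

```python
# Simulation: stopping an A/B test at the first "significant" peek
# inflates false positives. Both arms convert at the same 5% rate.
import math
import random
from statistics import NormalDist

random.seed(42)

def z_test_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

SIMS, N, PEEK_EVERY, ALPHA = 500, 2000, 100, 0.05
stopped_early, significant_at_end = 0, 0

for _ in range(SIMS):
    a = [random.random() < 0.05 for _ in range(N)]  # variant A conversions
    b = [random.random() < 0.05 for _ in range(N)]  # variant B (identical!)
    peeked = False
    for n in range(PEEK_EVERY, N + 1, PEEK_EVERY):
        if z_test_p(sum(a[:n]), n, sum(b[:n]), n) < ALPHA:
            peeked = True   # "winner!" declared at the first significant peek
            break
    stopped_early += peeked
    significant_at_end += z_test_p(sum(a), N, sum(b), N) < ALPHA

print(f"False positives with peeking:  {stopped_early / SIMS:.0%}")
print(f"False positives at fixed end:  {significant_at_end / SIMS:.0%}")
```

Waiting for the pre-planned sample keeps the error rate near the promised 5%; stopping at the first significant peek multiplies it several times over.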

When the results are wild at the beginning, do as Dr. Julia Freese of KonversionKraft said and “keep calm for now and continue testing.”

In addition, you can’t just stop a test when you hit statistical significance (which is a common misconception).

Without going too deep into the details, the big problem with looking only at significance is that you’re not accounting for a representative sample.

Let me explain.

If you start a test on Monday and end it (with significance) on Friday, you didn’t account for weekend buyers and weekend buying habits, which might be much different than any given day of the week.

The correct approach is to pre-calculate your minimum required sample size (there’s never a penalty for a bigger sample). Then, test for one (better still, two) full business cycles – so the sample includes all weekdays, weekends, various sources of traffic, your blog publishing schedule, newsletters, phases of the moon, what Donald Trump is tweeting about, and everything else that might influence the outcome.
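As a concrete sketch of that pre-calculation step, here is the standard normal-approximation power formula for a two-proportion test. The 5% baseline rate and 10% relative lift are hypothetical inputs, not figures from the article:

```python
# A minimal sketch of pre-calculating the per-variant sample size for an
# A/B test, using the standard normal-approximation power formula.
import math
from statistics import NormalDist

def min_sample_size(p_base, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect the given relative lift."""
    p_var = p_base * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_base + p_var) / 2                    # pooled rate under H0
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_var * (1 - p_var))) ** 2
    return math.ceil(numerator / (p_base - p_var) ** 2)

# e.g. a 5% baseline conversion rate, hoping to detect a 10% relative lift
print(min_sample_size(0.05, 0.10), "visitors per variant")
```

Note how quickly the requirement grows as the detectable lift shrinks: small effects on low-traffic pages can take months to test properly, which is exactly why patience matters.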

Or if you want to read more in-depth on how long to run a test, read Ronny Kohavi’s brilliant answer on this Quora thread.

6. Distance Yourself from Opinion and Ego

When it comes to optimization (and much of life), ego is the enemy.

According to Marie Polli, one of the biggest impediments to client work is that, “they treat their website like it’s their baby.”

Likewise, “if they really want to give optimization a chance, they should take a step back and allow for numbers to determine which variation performs better.”

Image via EnricDurany

 

When you distance yourself from the idea of a specific outcome (even though I know you want variation B to win because your expertise tells you it will), you open up the possibilities for a discovery-based program that leads you places you might not have imagined.

As Ton Wesseling, founder of Testing.Agency, says, “they are called experiments!”

He argues that clients should know the value of the long-term insights from experiments and use this as a great driver for product innovation and larger changes within the current business (models).

“Don't be rigid when experiments don't follow guidelines or design guides,” says Wesseling. “If we want to learn if a certain change creates impact, we should be able to change it out-of-the-box – or even disruptively.”

Conversion optimization is a powerful tool for your entire business if you let it be. As Matt Gershoff once told me, optimization is about “gathering information to inform decisions,” and in that way, it’s really more about decision optimization.

According to Stephen Pavlovich from Conversion.com, optimization represents a much larger scope if you can think outside of simple cosmetic changes:

“Conversion optimization isn't about putting lipstick on the pig – we're not tweaking layout and text. Testing is a hugely powerful tool that can answer fundamental questions to your business. At Conversion.com, we work with our clients to test everything from brand messaging to pricing, from product features to market potential.”

Paul Rouke, founder of PRWD, agrees, saying “Forget testing button colours – there is a full spectrum of testing possible which can impact your overall strategy and value proposition – iterative, innovative and strategic.”

“Never underestimate the opportunity that conversion optimization has to impact your whole business,” he says.

This goes back to one of Andrew Anderson’s central tenets of testing for discovery: anything that limits the way you think about testing (namely opinions and ego), essentially limits the potential, effectiveness, and efficiency of the program at large.

In my experience, this is the hardest part of optimization. Sometimes the data tells us things we don’t want to hear. What separates mediocre optimization programs from great ones is that the great programs value experimentation, while the mediocre ones are mired in politics and ego-protection.

Recommended reading: 6 Clever Nudges To Build a Culture of Experimentation

Conclusion

Conversion optimization is one of the most effective ways to grow your business.

As Paul Rouke said, “If you want to truly become a ‘customer-centric’ business with huge growth potential, then intelligent, customer-insight-driven conversion optimisation is the only way you will effectively achieve this.”

While it’s still misunderstood by many people, if we bridge the gap in understanding, we all benefit:

  • Agencies and consultants are less frustrated.
  • Clients are less disappointed.
  • Optimization becomes more effective, which increases revenue.

We’re trending in the right direction, and have seen much more mainstream adoption of optimization in the last few years.

So if you’re planning on doing conversion optimization (or hiring conversion optimization services) in the future, keep some of the above things in mind to get more out of the program.


About The Author

Alex Birkett is a Growth Marketer at ConversionXL. He moved to Austin, Texas after graduating from the University of Wisconsin. Follow him on Twitter.