It’s Matt, the A/B testing guy from Thrive Themes.
I’m teaching you why your A/B tests never seem to bear tasty optimization fruit.
Here is a great email about A/B testing I received today!
I have no promotional links to this company.
I really liked the video too.
“Real knowledge is to know the extent of one’s ignorance.”
– Confucius (551–479 BC)
So here’s the deal.
When people run A/B tests:
They either test inconsequential design elements with low optimization potential (like button color)…
– OR –
…they freak out and end their tests too early if variations start to underperform.
Maybe you’ve had the experience of starting a test, only to see your variation under-performing like crazy just a few days later… and I’m talking scary numbers like 30% fewer conversions!
You feel your palms get clammy as you start imagining losing 30% of your new subscribers! “What have I done!!!” you scream…
Keep calm and take a breath, buckaroo. It’s not as bad as you think.
Your irrational human emotions are kicking in and pushing you to do something foolish by ending your conversion optimization experiment much too soon.
Do yourself a favor and burn the following realization into your brain:
Unless your test achieves high statistical significance, there’s no guarantee the results are real.
Just hang with me for another second as I break this down:
Pretend you’re running an A/B test on a sales page.
You quickly modify the value proposition between your control and test variation to get your split test up and running fast.
RAPID IMPLEMENTATION ALERT…well done! 😉
Your control starts off strong, but then its conversion rates crash the day after launch! Your test variation not only takes the lead, but holds it over the next few days.
Well, no need to keep losing so many sales by letting this stupid test continue, right?
Because you’re a savvy solopreneur, you decide to collect more data before throwing in the towel. You make the right call and let this A/B test ride.
And guess what? Something crazy happens just a few days later.
The conversion rates between the 2 test variations flip-flop!
That’s right, the control actually takes back the lead and absolutely crushes the variation every day until you stop the test.
I know this example looks like an anomaly, but it’s not.
The start of many A/B tests are cursed by random data signals. Your only recourse to avoid being fooled by such random chance is to let your tests run until they achieve statistical significance.
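To see just how noisy those early signals are, here’s a minimal simulation (the traffic and conversion numbers are hypothetical, and this is my own illustration, not a Thrive Themes tool): two pages with the *identical* true conversion rate will still show a conversion-rate gap in the first days of a test, purely from random chance.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

TRUE_RATE = 0.10        # BOTH variants convert at this same true rate
VISITORS_PER_DAY = 50   # hypothetical daily traffic per variant

control = {"visitors": 0, "conversions": 0}
variation = {"visitors": 0, "conversions": 0}

def observed_rate(bucket):
    return bucket["conversions"] / bucket["visitors"] if bucket["visitors"] else 0.0

daily_gap = []  # absolute gap in observed conversion rate at the end of each day
for day in range(1, 15):
    for bucket in (control, variation):
        for _ in range(VISITORS_PER_DAY):
            bucket["visitors"] += 1
            if random.random() < TRUE_RATE:  # visitor converts by chance
                bucket["conversions"] += 1
    daily_gap.append(abs(observed_rate(control) - observed_rate(variation)))

print(f"Observed gap after day 2:  {daily_gap[1]:.1%}")
print(f"Observed gap after day 14: {daily_gap[13]:.1%}")
```

Even though neither page is truly better, the observed gap bounces around early on and typically settles down as more visitors accumulate, which is exactly why you wait for statistical significance instead of trusting the first few days.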
An A/B test’s “Chance to beat original” percentage (usually referred to as its statistical significance) basically asks:
“How sure can we be that the difference in conversion rates we’re seeing is not random chance?”
Over here at Thrive Themes HQ, we recommend waiting for a minimum “Chance to beat original” of 95% before declaring your variation the winner.
On the flip side, you can declare your control the winner if the variation’s “Chance to beat original” (a.k.a. statistical significance) hits 5% or less.
I know this sounds a bit complex, but think of it this way…
95% statistical significance basically means there’s only a 5% chance that a conversion-rate difference this big would show up by pure random luck.
The closer this “Chance to beat original” is to 100% (or 0% if the control wins), the more confident you can be that your test result is real.
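If you’re curious what’s under the hood, here’s one common way to approximate a “Chance to beat original” score: a one-sided two-proportion z-test. Different tools compute this differently (some use Bayesian methods), so this is a generic sketch with hypothetical numbers, not necessarily what Thrive Optimize does.

```python
import math

def chance_to_beat_original(conv_a, n_a, conv_b, n_b):
    """Approximate chance that variation B truly beats control A,
    via a one-sided two-proportion z-test (a common approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled standard error of the difference in proportions
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.5  # no information either way
    z = (p_b - p_a) / se
    # standard normal CDF, computed from the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical test: control converts 50/1000, variation converts 70/1000
score = chance_to_beat_original(50, 1000, 70, 1000)
print(f"Chance to beat original: {score:.1%}")
```

With identical observed rates the score sits at exactly 50% (a coin flip), and it only creeps toward 95%+ as the gap gets big relative to the amount of traffic collected.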
If you don’t keep your itchy A/B testing trigger finger calm, you can actually do massive harm to your long-term conversions by picking losers prematurely.
DON’T DO THAT!
Want to know how the A/B testing pros sidestep this conversion optimization pitfall?
They completely eliminate their irrational emotions from the decision-making process.
Split testing pros only use testing tools that have set-and-forget automatic winner settings.
This feature short-circuits irrational emotions during crazy A/B testing data swings by letting you start a test, walk away, and have the tool pick your winner once statistical significance emerges.
(Shameless plug: Automatic winner settings come standard on all the Thrive Themes A/B testing tools – including Thrive Optimize)
The point is:
Don’t end your tests too soon or you may never experience the powerful conversion optimization benefits of A/B testing.
Just use the following quick checklist to make sure you avoid picking random losers in all your future tests:
Before ending any A/B test, answer YES to the following 3 questions:
#1) Is the “Chance to beat original” (a.k.a. statistical significance) at least 95% (variation wins) OR no more than 5% (control wins)?
#2) Did the test achieve at least 100 conversions?
#3) Did you let the test run for at least 2 weeks?
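Those three questions boil down to a simple decision rule. Here it is sketched as a tiny function (the thresholds are the ones recommended above; the function name and return labels are my own):

```python
def safe_to_end_test(chance_to_beat, total_conversions, days_running):
    """Apply the 3-question checklist: only declare a winner once
    significance, conversion volume, AND test duration all check out."""
    # Questions #2 and #3: enough conversions and enough time?
    if total_conversions < 100 or days_running < 14:
        return "keep testing"
    # Question #1: has statistical significance been reached?
    if chance_to_beat >= 0.95:
        return "variation wins"
    if chance_to_beat <= 0.05:
        return "control wins"
    return "keep testing"

print(safe_to_end_test(0.97, 150, 16))  # all three checks pass
print(safe_to_end_test(0.97, 80, 16))   # significant, but too few conversions
```

Notice that a sky-high significance score alone isn’t enough: until the conversion count and the two-week clock are both satisfied, the only right answer is to keep the test running.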
By the way, watch WP Engine’s founder Jason Cohen (a fellow A/B testing geek) show you the importance of statistical significance in this awesome presentation he gave back in 2012.
Always be testing my friend,
Matt the A/B Testing Tortoise
(Split testing is a marathon, not a sprint!)