A/B testing is a proven way to increase conversion rates and gain insight into user behavior. And yet, many website owners and marketers are still on the hunt for A/B testing best practices and the next test that will bring maximum results.
Consider this blog post your own GPS in the world of A/B testing. Ready? Set? Read!
1. A/B Testing Best Practices Start with a Hypothesis
And where does this hypothesis come from? Data collection is a big part of it.
Aside from Google Analytics, heatmaps and user session recordings can help – all of these strengthen your hypotheses. After all, data combined with usability testing leads to more solid conclusions.
Brainstorming and pitching ideas is another way to come up with a hypothesis, whether data-driven or not. The ideas can come from anyone in your company, not just your site performance managers or analysts.
What kind of ideas? Anything goes. From small color tweaks to large-scale funnel changes.
Need inspiration? Use these sites to find interesting ideas and data analysis techniques:
- Online Behavior – pretty much speaks for itself; great if you want to dig deeper into users’ behavior.
- Marketing Experiments – insights from the first Web-based research lab for marketing and sales.
- Behave – everything you need to know about conversion; has tons of conversion resources and exciting reads.
We believe that the synergy between data analysis and creative brainstorming, which includes putting yourself in the users’ shoes, is the secret sauce for improving site conversions and reaching business goals.
Once you’re all set with the above, it’s time to decide on the KPIs that will confirm or refute your hypothesis. When you do, it’s best to remember the following:
- Set realistic KPIs – not every change will generate a 100% uplift
- There can be more than one KPI per test. Your KPIs can be, for example, number of leads, number of qualified sales, number of abandoned shopping carts, and so on – all depends on the objectives of your business.
You can even assign different values to them, setting major and minor KPIs.
So far so good, but what if your site does not drive enough traffic?
It is a pickle! It will be hard for you to run A/B tests, although not completely impossible. You’re probably wondering what the ‘right amount of traffic’ is.
Think of it in terms of conversions: some recommend a minimum of 400 conversions per variation, or you can use an A/B test calculator. If you go with the latter, our experts recommend Evan Miller’s calculator. Either way, it will give you the sample size estimate your test needs.
Since there’s no magic number for the amount of traffic that qualifies a site for A/B testing, it’s always better to test on as many relevant users as you can – and for that, you need to learn to adjust.
How? In the case of small-scale websites, we suggest running tests on aggregated pages with the same layout and purpose, as well as considering the scope of the effect a change will have on the site.
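As a rough sketch of what a sample size calculator like Evan Miller’s does under the hood, here is a hedged example using the standard two-proportion normal approximation at 95% confidence and 80% power (the 5% baseline rate and 20% relative uplift are made-up inputs, not recommendations):

```python
import math

def sample_size_per_variation(baseline_rate, relative_uplift):
    """Approximate visitors needed per variation for a two-sided
    two-proportion test at 95% confidence and 80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    z_alpha = 1.959964  # two-sided z for alpha = 0.05
    z_beta = 0.841621   # z for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Example: 5% baseline conversion rate, detecting a 20% relative lift
print(sample_size_per_variation(0.05, 0.20))  # roughly 8,000+ visitors per variation
```

Note how quickly the required sample grows as the uplift you want to detect shrinks – which is exactly why low-traffic sites benefit from aggregating similar pages into one test.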
2. You’ve Gathered All Your Data & Ideas. It’s Time to Prioritize
You’ll probably have an extensive list of A/B testing ideas on your table.
Find a prioritization methodology that’s going to work best for you. We use the PIE model:
- Potential – what’s the possible improvement for the pages? If you’ve reached this stage, every page you’re considering testing has potential, but since you can’t A/B test left and right, assess the potential to identify the most effective growth paths.
- Importance/Impact – What’s going to be the impact if your A/B test works?
- Ease – the logistics of your A/B testing process: how difficult or easy is it to get the test running? Who is involved? What resources do you need?
It’s best to score each of your A/B testing ideas on each of the above elements, say from 1 to 10: 1 for the idea with the most potential/importance or the easiest to run; 10 for the opposite. That makes it easier to decide on the A/B testing order.
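A minimal sketch of this scoring step, following the post’s convention that lower scores are better (the idea names and scores below are hypothetical):

```python
# Each idea gets a 1-10 score per PIE element; per the post's convention,
# 1 is best (most potential/importance, easiest to run).
ideas = {
    "Shorten checkout form": {"potential": 2, "importance": 1, "ease": 3},
    "New hero headline":     {"potential": 4, "importance": 5, "ease": 1},
    "Rework pricing page":   {"potential": 1, "importance": 2, "ease": 7},
}

def pie_score(scores):
    # Lower total = higher priority under this convention.
    return scores["potential"] + scores["importance"] + scores["ease"]

# Print ideas in the order you should test them
for name, scores in sorted(ideas.items(), key=lambda kv: pie_score(kv[1])):
    print(f"{pie_score(scores):2d}  {name}")
```

You could also weight the three elements differently (e.g. double the weight of Potential) if your team values one dimension over the others – the point is just to make the ranking explicit.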
Before running a test, make sure you pick an A/B testing platform that’s the best fit and the easiest for you to operate.
3. When Running a Test
- Don’t stop the test before the results are conclusive, and try not to make assumptions about which direction it will go.
- Don’t make drastic changes to the page you’re testing mid-test, as that will skew the user experience and users’ decision making.
- Keep relevant stakeholders in the loop – a tactic we like to follow at Webpals® Group. Doing this leaves less room for uncertainty and keeps more people following the process.
- Don’t forget about statistical significance. For each test performed at Webpals® Group, we aim for a small margin of error, so we set the statistical significance threshold at 95%, no matter the size of the site.
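To make the 95% threshold concrete, here is a hedged sketch of a two-sided two-proportion z-test, the standard way to compare conversion rates between two variations (the visitor and conversion counts are made up; testing platforms compute this for you):

```python
import math

def ab_test_significant(visitors_a, conversions_a,
                        visitors_b, conversions_b, alpha=0.05):
    """Pooled two-sided two-proportion z-test.
    Returns (p_value, significant_at_given_alpha)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_value, p_value < alpha

# Control: 1,000 visitors, 100 conversions; variation: 1,000 visitors, 130 conversions
p, significant = ab_test_significant(1000, 100, 1000, 130)
print(f"p = {p:.4f}, significant at 95%: {significant}")
```

With `alpha=0.05`, a result counts as significant only when the p-value falls below 0.05 – the code equivalent of the 95% confidence bar described above.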
4. So, the Test is Completed, Now What?
It’s time to harvest the fruits of your labor and thoroughly analyze your results.
If your A/B test proves successful, update the website with the winning variation and go live! And keep your stakeholders in the loop to ensure even more collaboration when you run future tests.
If we were to put the entire A/B testing cycle into an infographic, it would look something like this:
You want your A/B tests to be successful. So, what’s a good test?
In the words of one of our marketing analysts: “A good test is the one that inspires you to run more tests, emphasizes your value proposition, improves your understanding of users’ behavior and achieves your company’s objectives.”
That’s what we do with A/B testing – improve and get ahead. And given that you’re reading this, we’re on the same page :)
Now that you’ve reached the end of this blog post, it’s time to put these methods to the test! If you’re up for running tests at Webpals® Group, we have lots of job opportunities. Good luck!