For the past two years, we’ve been working with Cameron Corda and his team to optimize our MuckFest® MS event series website. In this post, he’ll give us an introduction to the A/B testing process and insights into how we’ve applied it.
A process for improvement
How many times have you been in a meeting where the process goes something like this?
● There are lots of ideas in the room, commonly beginning with “I think this might work…”
● The decision makers select their favorites, they get implemented, and everyone hopes they work.
Maybe you have some data to support whether things are getting better or worse, but it’s often difficult to separate general trends from the specific impact of your changes.
By having a process rooted in A/B testing, we know specifically when things improve, and by how much. Culturally, we find that people are more apt to suggest ideas, and as a team we’re more apt to try things out. Everyone is on equal footing, and the results are an agreed-upon standard that trumps opinion.
What is A/B testing?
A/B testing is a process where you experiment with ideas you think will produce a positive impact. Typically you test your original baseline against a new variation, referred to as “A” and “B.” You randomly send the variations to users, monitor outcomes, and compare the results to see if your new version outperforms the original.
You’re not limited to two variations, and you can even test multiple variables at once in what is known as a multivariate test. The more variations you test, or the more specific your goal, the more traffic you need to reach a meaningful result. If you’re Google, you can test 41 shades of blue. For the rest of us, tests need to be more dramatic to actually get results.
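Tools like Optimizely handle the mechanics for you, but the two core pieces — randomly splitting visitors between variations and checking whether the difference in conversion rates is real — can be sketched in a few lines. This is a simplified illustration, not Optimizely’s implementation; the function names and traffic numbers are made up, and the significance check shown here is a standard two-proportion z-test:

```python
import random
from math import sqrt, erf

def assign_variation(user_id, variations=("A", "B")):
    # Deterministic bucketing: seed on the user ID so a returning
    # visitor always sees the same variation.
    random.seed(user_id)
    return random.choice(variations)

def prob_b_beats_a(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test: how confident can we be that B's
    # observed lift over A is not just random noise?
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided normal CDF of the z-score
    return 0.5 * (1 + erf(z / sqrt(2)))

# Illustrative numbers: 2,000 visitors per variation,
# 5.0% vs 6.5% conversion.
confidence = prob_b_beats_a(100, 2000, 130, 2000)
```

With these made-up numbers the test comes out around 98% confident that B is better — and you can see why small goals need big traffic: halve the sample size and the same lift is no longer conclusive.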
To run the tests, we use software from Optimizely.
A specific example from MuckFest MS
Let’s take a look at a successful test, and the process we followed to get there. We start with the hypothesis:
MuckFest MS’ fight against Multiple Sclerosis is a key differentiator from other mud races. By emphasizing the cause through a testimonial and language, we’ll increase the likelihood that new visitors to the site continue at all steps of the funnel.
Here are screenshots of the original (A) and our variation:

[Screenshot: Cause Emphasis (B)]
To measure success, we document specific goals for each test:
● Primary goal: Visits a specific city page
● Secondary goals: Visits the registration page or Completes registration
Why is the primary goal to visit a specific city page? Our registration process has a clear funnel that starts with visiting a specific city page and continues through the various steps of registration. The advantage of testing higher up the funnel is that you get more conversions, so you can more quickly determine whether your test is an improvement.
We then launched the test, let it run, and kept an eye on the numbers. Here are the results for each conversion rate, along with the “chance to beat baseline,” a measure of statistical significance.
● Views of a city page: +6.6% (99.5% chance to beat baseline)
● Visits checkout page: +10.4% (83.5% chance to beat baseline)
● Revenue: +22% (81.0% chance to beat baseline)
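A “chance to beat baseline” figure is usually computed with Bayesian statistics rather than a classical p-value: each variation’s conversion rate is modeled as a Beta distribution, and you estimate how often a random draw for B exceeds one for A. The sketch below is one common way to approximate such a number — it is not Optimizely’s exact method, and the visitor counts are illustrative:

```python
import random

def chance_to_beat_baseline(conv_a, n_a, conv_b, n_b, draws=100_000):
    # Model each variation's true conversion rate as a Beta
    # posterior (uniform prior), then count how often a sampled
    # rate for B beats a sampled rate for A.
    random.seed(0)  # fixed seed just to make the example reproducible
    wins = 0
    for _ in range(draws):
        rate_a = random.betavariate(conv_a + 1, n_a - conv_a + 1)
        rate_b = random.betavariate(conv_b + 1, n_b - conv_b + 1)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# Illustrative: 100/2000 conversions on A vs. 130/2000 on B
print(chance_to_beat_baseline(100, 2000, 130, 2000))
```

Note how the looser goals in the list above (checkout visits, revenue) have lower confidence than city-page views: fewer conversions means wider Beta distributions and a fuzzier comparison.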
When a test is over, win or lose, we document it for the team. It’s important that each report include screenshots, so that future team members who never saw how the site used to look can still get value from the learnings.
What we’ve learned
Oftentimes, tests confirm our ideas were good ones, but we’ve also been humbled to find out that ideas can sometimes lead to a decline. We’ve also been surprised that ideas we otherwise would have dismissed in fact had positive results.
As we wind down the 2014 MuckFest MS season and look towards 2015, we have a growing list of learnings to build on, and a constant pool of new experiments to try.
Cameron (@ccorda) is the Founder of Juking the Stats, a web development studio focused on helping non-profits build and optimize web applications and sites. Previously Cameron has worked on the development of Whitehouse.gov, President Obama’s re-election website, and the It Gets Better Project.