A/B testing is a popular technique used by digital marketers to convert customers. A slight tweak in copy or imagery on a landing page can make a significant impact on the actions your customers take. But product managers and entrepreneurs also leverage this technique to validate or invalidate experiments on their products and services. It’s a powerful method for making data-backed decisions when building your value proposition, but it’s also incredibly easy to get lost in a rabbit hole of A/B tests if you focus on the wrong conditions or techniques. In this post, I’ll share a basic primer on A/B testing: how it works, and some examples of how these tests can be performed.
Eric Ries, author of The Lean Startup, describes A/B testing as ‘an experiment in which different versions of a product are offered to customers at the same time.’ Online news sites like The Huffington Post use A/B testing to observe which headlines perform better when they publish new content. The headline that generates the most traffic within a specific period of time gets chosen as the permanent title for that post. At Strategyzer, this technique has helped us navigate through product features and build things customers wanted instead of relying on our own guesses.
The goal is not to build two different products when performing an A/B test. What you’re really doing is creating near-identical versions of your MVP (Minimum Viable Product) that differ by only one or two variables, and measuring which alternative performs best. See the subtle difference in each version of my “product” below?
You can A/B test features, content, packaging, pricing, and other aspects of your value proposition and business model. But it’s important to remember that A/B testing is not a one-time implementation. Your first test may not deliver the results or insights to make a sound decision. You’ll most likely have to run multiple tests to validate or invalidate your hypothesis, and then use those learnings to improve the next experiment.
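Under the hood, the mechanics of “offering different versions at the same time” come down to splitting traffic between variants. Here is a minimal sketch in Python of one common approach: hashing each visitor’s ID so the same person always lands in the same bucket. The function name and the 50/50 split are my own illustration, not a specific tool’s API.

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant by hashing their ID.

    Hashing (rather than random choice per visit) keeps the experience
    consistent: a returning visitor always sees the same version.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Over many users, the split comes out roughly even:
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"user-{i}")] += 1
print(counts)
```

Because assignment is a pure function of the user ID, you don’t need to store who saw what; any server can compute the same answer.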
Follow these steps to conduct your A/B test
1. Test your most critical hypotheses first. Let’s say you’re testing customer interest in a new diet program. You should start by finding out if customers even want to diet before testing features like free delivery. Consider the underlying hypotheses that you must verify for your business idea to work. Which hypothesis are you testing and what are you trying to learn from this experiment? What metrics will you set to verify your hypothesis, and how reliable is the data that you’ll get? You can organize your tests with our Test Card, a simple tool to record your hypothesis, metrics, and potential calls-to-action. Once you have designed your experiment, you can build your MVP, or use Optimizely, Visual Website Optimizer, or other online A/B testing tools to set up a landing page MVP.
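If it helps to see the Test Card idea in a concrete form, the structure can be sketched as a tiny data type. The field names below are my own paraphrase of the card’s prompts, and the diet-program values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TestCard:
    """A hypothetical data shape mirroring a Test Card's prompts."""
    hypothesis: str  # We believe that ...
    test: str        # To verify that, we will ...
    metric: str      # And measure ...
    criterion: str   # We are right if ...

card = TestCard(
    hypothesis="Customers want help sticking to a diet",
    test="Run a landing-page MVP with a sign-up form",
    metric="Sign-up conversion rate over two weeks",
    criterion="At least 5% of visitors sign up",
)
print(card.criterion)
```

Writing the success criterion down before the test starts is the point: it keeps you from rationalizing a weak result after the fact.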
2. Pay attention to all the variables around your hypothesis. Your diet program might generate substantial interest right after a food-filled holiday like Thanksgiving, but this doesn’t mean that interest will be sustained throughout the rest of the year. The purpose of an A/B test is to simulate a real-life scenario and measure which product alternative performs best in that environment. Try to emulate all the conditions that you’d find in a real-life situation to get accurate data. If you are testing multiple variables in your A/B tests (e.g., different pricing and different names), make sure they don’t interfere with one another and that you can obtain an accurate learning from your experiment.
3. Develop an MVP with a prominent call-to-action. The effectiveness of each MVP is measured against a call-to-action: e.g., the number of people who signed up to get more information on the product, downloaded a file, clicked a ‘Purchase’ button, or completed a certain task. How customers interact with your call-to-action will determine whether your hypothesis was validated or invalidated. For example, your MVP could be a simple landing page that describes your diet program’s value proposition. The call-to-action could be a simple form that collects sign-ups from your customers for future communications around your diet service. Choose calls-to-action that are relevant to the hypothesis you’re testing: a high number of sign-ups (call-to-action) doesn’t reflect customers’ willingness to pay (hypothesis).
4. Record what you’ve observed against your initial hypothesis. What data did you retrieve and what can you learn from it? How did the other alternatives perform? Is the data you extracted reliable? Can you make data-backed decisions as a result of this experiment, or should you run a second round of tests to confirm your findings? Imagine that option A presents a $10 vegetable meal and option B a $20 protein meal. If customers choose option A, is it because the meal is cheaper or because they have a preference for vegetables? You’ll need to conduct more experiments to figure this out. We developed the Learning Card tool as a simple way to gather the interactions that occurred during our A/B tests. Feel free to use ours for your observations, too:
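Part of judging whether your data is reliable is checking that the difference between variants isn’t just noise. A standard way to do this for conversion rates is a two-proportion z-test. Below is a minimal, standard-library-only Python sketch; the function name and the example visitor counts are hypothetical:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion rates.

    conv_a/n_a: conversions and visitors for variant A (likewise for B).
    Returns (z statistic, p-value). A small p-value (commonly < 0.05)
    suggests the observed difference is unlikely to be chance alone.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example: A converted 150 of 2,000 visitors; B converted 100 of 2,000.
z, p = two_proportion_z_test(150, 2000, 100, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value is large, the honest conclusion is “we haven’t learned anything yet”: run the test longer or with more traffic rather than picking a winner.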
A/B tests are a great technique for encouraging a culture of frequent experimentation, inexpensive and manageable failure, and learning. Continually running these types of tests will allow your team to understand which version of a product, service, or feature creates the most value for your customers.
It's your turn to A/B test now!
What are some interesting A/B tests you’ve implemented at your company?