A/B Testing: Fundamentals and Tips
What is A/B Testing?
It is essentially a method of validated learning: you compare two versions of something to find out which one performs better. A/B Testing was traditionally associated with website and app design, but it is now also used extensively in the digital marketing space to compare messages and campaigns and figure out which are most suitable for a particular market or target group.
How does A/B Testing Work?
You start A/B testing by deciding what you want to test. Suppose you want to determine the best size for the "Subscribe" button on a business website. Then you need to decide how you will evaluate performance, that is, the KPI you will use to judge the results. For example, your KPI or metric could be the number of visitors who click the subscribe button.
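As a minimal sketch of what such a KPI looks like in practice, the metric here is just a click-through rate: subscribe clicks divided by visitors who saw the page. The counts below are made-up illustration data, not real results:

```python
# Minimal KPI sketch: click-through rate on the subscribe button.
# The counts are invented purely for illustration.
visitors_shown = 1_000     # visitors who saw this page variant
subscribe_clicks = 47      # of those, visitors who clicked "Subscribe"

click_through_rate = subscribe_clicks / visitors_shown
print(f"KPI (click-through rate): {click_through_rate:.1%}")  # 4.7%
```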
Now, to run the test, you create two identical versions of the website where the only difference is the size of the subscribe button. You show these versions to visitors at random. Finally, you compare the results of the test using your metric, i.e., on which version of the website the button was clicked most.
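Here is a small, hypothetical Python simulation of such a split test: each visitor is randomly assigned a variant, and clicks are tallied per variant. The underlying click probabilities are assumed values, chosen only to make the example run:

```python
import random

# Hypothetical split test: variant A is the current button size,
# variant B the larger one. TRUE_RATE holds invented click
# probabilities used only to simulate visitor behavior.
TRUE_RATE = {"A": 0.040, "B": 0.052}

random.seed(42)
shown = {"A": 0, "B": 0}
clicks = {"A": 0, "B": 0}

for _ in range(10_000):                       # 10,000 simulated visitors
    variant = random.choice(["A", "B"])       # random 50/50 assignment
    shown[variant] += 1
    if random.random() < TRUE_RATE[variant]:  # did this visitor click?
        clicks[variant] += 1

for v in ("A", "B"):
    print(f"Variant {v}: {clicks[v]}/{shown[v]} = {clicks[v]/shown[v]:.2%}")
```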
The Use and Rise of A/B Testing
The popularity of A/B Testing has risen dramatically as companies have realized how well suited the online environment is for running such experiments. It helps marketers answer questions like:
- "What will make my post viral?"
- "Who will buy our product after watching the ad?"
- "Who will subscribe to our service?"
A/B Testing is now used to evaluate many things, including website design, the design of online offers, headlines of product descriptions, etc. Most A/B experiments are run without the subjects even knowing it, which gives more accurate and unbiased results. A/B Testing is also used extensively in the digital marketing space, where marketing emails and ad campaigns are tested. For example, you might send two versions of an email to a randomly split customer list and find out which one attracts more conversions. You might also run two different ad campaigns for the same product and compare the response in the markets where each campaign ran.
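To compare two email variants like this, one common approach (not specific to the source above) is a two-proportion z-test on the conversion counts. The sketch below assumes invented counts and a standard normal approximation:

```python
from math import sqrt, erf

# Hedged sketch: compare conversions from two email variants with a
# two-proportion z-test. All counts below are illustrative, not real data.
def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: rate_a == rate_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# Email A: 120 conversions out of 5,000 sends; Email B: 155 out of 5,000.
z, p = two_proportion_z_test(120, 5_000, 155, 5_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests a real difference
```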
Common Mistakes and Pitfalls in A/B Testing
Here are some common mistakes companies make while doing A/B Testing:
- Not letting tests run their course (i.e., stopping too early): Most A/B software lets you watch results in real time, which is tempting, but for the results to be statistically reliable you need to let the test run until you have collected a sufficiently large sample (see the sketch after this list for a rough sample-size calculation).
- Looking at too many metrics at the same time: This invites spurious patterns and interactions between metrics, making the results hard to interpret accurately.
- Not doing enough retesting: Testing once is sometimes not enough. If the result is business-critical, it is advisable to run the test again and check whether the results hold; they may have been driven by an unknown, temporary factor.
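On the first pitfall, a rough sense of "how long is long enough" can come from a standard two-proportion sample-size formula. The sketch below uses the usual normal approximation with 5% significance and 80% power; the baseline rate and the lift to detect are assumed values:

```python
from math import sqrt, ceil

# Rough sample-size sketch: visitors needed per variant before stopping.
# Standard two-proportion formula (normal approximation). The baseline
# rate and the minimum lift to detect are assumptions for illustration.
def sample_size_per_variant(base_rate, lift, z_alpha=1.96, z_power=0.84):
    """n per variant at 5% significance (two-sided) and 80% power."""
    p1 = base_rate
    p2 = base_rate + lift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / lift ** 2)

# Example: 4% baseline click rate, detecting a 1-point lift to 5%.
print(sample_size_per_variant(0.04, 0.01))  # ~6,700 visitors per variant
```

The takeaway: even a modest lift on a low baseline rate needs thousands of visitors per variant, which is why peeking at early results is so misleading.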
⇒ If you have something else to share on A/B testing, please react or comment so we can add more aspects and tips. Thanks!
Sources: "HBR Guide to Data Analytics Basics for Managers: Fundamentals of A/B Testing", pp. 59-70