Simply put, the ability to perform A/B testing in an automated fashion is one of the most strategic differentiators that DevOps organizations hold over their competitors. It is also one of the least understood concepts among organizations considering a move to DevOps.
What is A/B Testing?
The basic concept of A/B testing is simple: it is a way to compare two versions of something to figure out which performs better. Although A/B testing is most often associated with websites and mobile apps, it has been around for almost 100 years. It was introduced in the 1920s by Ronald Fisher, a statistician and biologist credited with discovering the key principles behind A/B testing and randomized controlled experiments. Others ran such experiments before Fisher, but he was the one who worked out the basic principles and mathematics that made them a science.
How does A/B Testing Work?
To use A/B testing, you first have to determine what you wish to test. An example would be whether users prefer clicking a button labeled Send or a link labeled Send. You then use one of the DevOps release strategies to put the experiment into production, or perhaps into an internal users' environment, where you can evaluate the performance of each version. In this example, we want to see which option visitors to a website click most often. You run the test by randomly splitting visitors into two groups and showing one group the button and the other the link. From the analytical data, you determine which option users preferred. It is important to remember that other factors could influence the results; by randomizing which users are in which group, you minimize the chance that those factors will skew your results.
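As a minimal sketch of the randomization step, a deterministic per-user assignment could look like this (the function name, variant labels, and seeding scheme are all illustrative assumptions, not taken from any particular A/B testing tool):

```python
import random

def assign_variant(user_id, seed=42):
    # Seed the generator with the user ID so the same visitor always
    # sees the same variant across visits (a hypothetical scheme).
    rng = random.Random(f"{seed}:{user_id}")
    return "button" if rng.random() < 0.5 else "link"

# Assign 1,000 simulated visitors; randomization should split the
# groups roughly in half.
assignments = {uid: assign_variant(uid) for uid in range(1000)}
button_group = [u for u, v in assignments.items() if v == "button"]
```

Keying the assignment to the user, rather than rolling the dice on every page view, keeps each visitor's experience consistent for the duration of the experiment.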
What Does A/B Testing Report?
In most organizations, some form of software automatically handles the calculations, and often a statistician interprets the results. An A/B test generally reports two conversion rates: one for users who saw the control version and one for users who saw the test version. Typical conversion rates measure clicks and other user actions.
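The conversion-rate arithmetic itself is simple. Here is a sketch, using a pooled two-proportion z-test as one common way a statistician might judge whether the difference between the two rates is real (the sample numbers are made up for illustration):

```python
import math

def conversion_rate(conversions, visitors):
    # Conversion rate = users who took the action / users who saw the variant.
    return conversions / visitors if visitors else 0.0

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # Pooled two-proportion z-statistic; |z| > 1.96 is the usual
    # threshold for significance at the 5% level.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

control_rate = conversion_rate(120, 1000)  # control: the Send button
test_rate = conversion_rate(150, 1000)     # test: the Send link
z = two_proportion_z(150, 1000, 120, 1000)
```

In practice the A/B testing software performs this comparison (often with more sophisticated methods), but the two conversion rates it reports are exactly the `p_a` and `p_b` above.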
Mistakes Made in A/B Testing
The main error made in A/B testing is being too eager and not letting tests run for a sufficient length of time. Many A/B testing products let organizations watch results in real time, which feeds the urge to make decisions too quickly. This over-eagerness has been encouraged by vendors offering a type of A/B testing called real-time optimization, in which algorithms make adjustments as the results come in. The problem is that, because of the randomization, it is highly possible that if you let the test run its full length, you would get a different result.
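The danger of peeking can be illustrated with a small simulation (my own illustration, not any vendor's algorithm). The sketch below runs repeated A/A tests, where both variants are identical, and counts how often the difference looks significant at any interim check versus only at the final check:

```python
import math
import random

def peeking_simulation(n_trials=200, n_users=2000, checks=20, seed=0):
    # A/A test: both variants share the same true conversion rate (10%),
    # so every "significant" result is a false positive.
    rng = random.Random(seed)
    early, final = 0, 0
    for _ in range(n_trials):
        a = b = 0
        significant_any = significant_final = False
        for i in range(1, n_users + 1):
            a += rng.random() < 0.10  # conversion in group A
            b += rng.random() < 0.10  # conversion in group B
            if i % (n_users // checks) == 0:
                # Pooled two-proportion z-test at this interim check.
                pooled = (a + b) / (2 * i)
                se = math.sqrt(pooled * (1 - pooled) * (2 / i)) if 0 < pooled < 1 else 0.0
                significant = se > 0 and abs(a - b) / i / se > 1.96
                significant_any |= significant
                if i == n_users:
                    significant_final = significant
        early += significant_any
        final += significant_final
    return early, final

early, final = peeking_simulation()
```

With repeated interim checks, the chance that a no-difference test looks significant at some point is far higher than the nominal 5 percent at the final check, which is exactly the trap real-time peeking sets.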