A/B testing, also known as split testing or bucket testing, is a way to compare two versions of a webpage or an app before launching one of them. Developers want to understand the end users' reactions to both versions so they can refine the design and launch the better-performing one. Metrics such as page views, conversion rate, and other statistics are the variables used in A/B testing. This sounds like a sure path to success, but mistakes can still creep in. Let's look at some of the common ones and how to avoid them.
1. Testing too early in the process
Deciding whether A is better than B rests on statistical analysis. If you test the two versions too early in the process, the results you get will be unreliable. You might be tempted to decide in favor of the version that shows 75% positive results, but experience shows that alone is not reason enough: on a small sample, such a lead can easily be noise, and there have been cases where the early leader turned out to be the loser once more data came in. The only way an A/B test gives an accurate result is to wait until the difference between the versions is statistically significant.
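To make the point concrete, here is a minimal sketch of the kind of significance check this implies, using a standard two-proportion z-test built from the standard library (the function name and the example counts are illustrative, not from the article):

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    Returns a p-value; a common convention is to declare a winner
    only when p < 0.05.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled conversion rate under the "no difference" hypothesis
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

With only 4 visitors per variant, a 75% vs. 25% split gives a p-value of roughly 0.16, far from significant; the same 75%/25% split over 1000 visitors per variant is overwhelmingly significant. Same percentages, opposite conclusions.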
2. Not running the test for its full duration
Any test has to run for its full term. Even on a high-traffic site, you cannot decide between A and B based on the first few days of results. Even if your expected goals are reached, do not stop a test mid-week: run it for at least a full week, because visitors who arrive after you stop the test could change the results completely. In any case, conversion rates depend on the day of the week, so running the test for a full week accounts for this 'seasonality' factor.
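One simple way to honor the full-week rule is to round the planned duration up to whole weeks, so every weekday is sampled the same number of times. A small sketch (the helper name and parameters are assumptions for illustration):

```python
import math

def weeks_to_run(required_per_variant, daily_visitors, variants=2):
    """Round the test duration up to whole weeks.

    required_per_variant: sample size needed for each version
    daily_visitors: average visitors per day, split across variants

    Running whole weeks means Mondays and Saturdays are sampled
    equally, which controls for day-of-week 'seasonality'.
    """
    days = math.ceil(required_per_variant * variants / daily_visitors)
    return math.ceil(days / 7)
```

For example, needing 1,000 visitors per variant at 200 visitors a day works out to 10 days of traffic, which this rounds up to 2 full weeks.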
3. Testing with inadequate traffic
If you run tests when there is not enough traffic on your site, the results cannot be trusted. For instance, if version A makes 2-3 sales in a month and version B gets 4, you would have to run the test for a very long time to reach any statistical guarantee. With inadequate traffic, the test becomes time-consuming and the delay ultimately shows in the revenue. In that situation it might be better simply to switch to B in the first place.
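You can estimate up front whether your traffic is adequate. A widely used rule of thumb for the required sample size per variant (roughly 80% power at a 0.05 significance level) is n ≈ 16·p·(1−p)/δ², where p is the baseline conversion rate and δ the absolute lift you want to detect. A sketch under that assumption:

```python
import math

def sample_size_per_variant(baseline_rate, min_detectable_lift):
    """Rule-of-thumb sample size per variant (~80% power, alpha = 0.05).

    baseline_rate: current conversion rate, e.g. 0.02 for 2%
    min_detectable_lift: smallest absolute change worth detecting,
    e.g. 0.01 for one percentage point
    """
    p = baseline_rate
    delta = min_detectable_lift
    return math.ceil(16 * p * (1 - p) / delta ** 2)
```

At a 2% baseline, detecting a one-point absolute lift needs on the order of 3,000 visitors per variant. A site making 2-3 sales a month is nowhere near that, which is exactly why such a test would drag on indefinitely.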
4. Tests designed for anything, or nothing
Testing should not be done just for the sake of it. If the test was not designed to produce results that can be quantified and acted on, it will not help the business. A random test, where you have no idea how the outcome will benefit the company, is wasted effort.
5. Testing the obvious
There are some aspects of your product or site that are universally accepted facts. They have been proved time and again, so there is no need to waste time and money proving them for your company. Build these into the design from the start, and reserve testing for aspects that are critical to your business, or that you have customized, in order to understand the customer response.
6. Running A and B at different times
The results of an A/B test are not valid if the two versions are run at different times, because too many variables in a test change over time. The audience and its requirements, for instance, shift, so you will not be able to decide between A and B. It is advisable to split the audience in two and run both versions at the same time.
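Splitting the audience is usually done by deterministic bucketing: hash a stable user identifier so each visitor consistently lands in the same variant, and both variants run simultaneously on random halves of the traffic. A minimal sketch (the function name is an assumption; any stable hash works):

```python
import hashlib

def assign_variant(user_id, variants=("A", "B")):
    """Deterministically bucket a user into one variant.

    Hashing the user id (md5 here, for bucketing only, not security)
    gives a stable, roughly uniform split, so the same visitor always
    sees the same version while A and B run at the same time.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because the assignment depends only on the id, a returning visitor never flips between versions mid-test, and over many visitors the split is close to 50/50.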
7. Not segregating the variables
Failing to distinguish the behavioral patterns behind the basic variables can lead to disaster. For instance, any test will include some first-time users and some returning users, and the feedback you get from the former is different from what you get from the latter. Both need to be taken into account, but the data from each plays a different role in the outcome of the product, so segregate the segments and collect their data separately.
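In practice this segregation just means aggregating results per segment instead of in one pool. A minimal sketch, assuming each logged event records whether the visitor was returning and whether they converted (the event shape and function name are illustrative):

```python
def conversion_by_segment(events):
    """Compute conversion rate separately for new vs. returning users.

    events: iterable of (is_returning: bool, converted: bool) pairs.
    Returns a dict mapping segment name to its conversion rate, so the
    two audiences can be analysed on their own terms.
    """
    stats = {"new": [0, 0], "returning": [0, 0]}  # [visits, conversions]
    for is_returning, converted in events:
        key = "returning" if is_returning else "new"
        stats[key][0] += 1
        stats[key][1] += int(converted)
    return {k: (c / n if n else 0.0) for k, (n, c) in stats.items()}
```

A pooled rate can hide the fact that a change delights new visitors while alienating loyal ones; per-segment rates surface that immediately.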