
For any business to run effectively, marketing analysis and research are essential, yet many entrepreneurs are prone to skip them. Selling without examining what is working and what is not stunts a site's growth: it sidesteps the deeper need for intensive market research, and in the long run you lose customers. One of the best ways to conduct this kind of customer research is split testing.

Why split test?

Split testing, or A/B testing, is a method of running controlled yet randomized experiments with the aim of improving a website's performance. Put simply, it is a comparison of two versions of the same page to determine which is more effective. Ideally, there will be just one or two differences between the two versions so the effect can be pinpointed accurately.
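For readers who want to see the mechanics, here is a minimal sketch of how a 50/50 split is commonly implemented: hash the visitor's ID so assignment is random across visitors but stable for each returning visitor. The function and identifiers below are illustrative, not taken from any particular testing tool.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str) -> str:
    """Deterministically bucket a visitor into variant A or B.

    Hashing (experiment + visitor_id) makes the split effectively
    random across visitors yet stable per visitor, so a returning
    user always sees the same version of the page.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100           # a number from 0 to 99
    return "A" if bucket < 50 else "B"       # 50/50 split

# The same visitor always lands in the same bucket:
print(assign_variant("visitor-123", "cta-copy-test"))
```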

Visitor actions such as clicks, content engagement, form completions, and purchases are analyzed to see which variant performs better against a pre-defined marketing objective. Common test subjects are signup forms, registration pages, call-to-action (CTA) buttons, and redirects to different pages. Even a change as small as updating one word in a CTA has been reported to increase conversions by a whopping 77%, which underlines how much split testing can improve a campaign's efficacy.
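To make the arithmetic behind a figure like that 77% concrete, here is how relative lift is computed from raw counts; the numbers below are invented purely for illustration.

```python
def relative_lift(conv_a: int, visits_a: int, conv_b: int, visits_b: int) -> float:
    """Relative improvement of variant B's conversion rate over variant A's."""
    rate_a = conv_a / visits_a
    rate_b = conv_b / visits_b
    return (rate_b - rate_a) / rate_a

# Hypothetical CTA test: conversion climbs from 2.6% to 4.6%.
lift = relative_lift(conv_a=130, visits_a=5000, conv_b=230, visits_b=5000)
print(f"Relative lift: {lift:.0%}")  # Relative lift: 77%
```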

As enticing as it might sound, marketers often complain about getting false results from their analytics or failing to generate adequate data at all. If you are fazed by split testing, chances are you are conducting it the wrong way. Here is a list of the 10 most common mistakes testers make, along with their possible fixes.

Mistakes and fixes worth knowing about

There is a multitude of errors a tester can make when running and interpreting a split test. The most common ones are listed below.

1. Arbitrary Testing

The biggest mistake most testers make is conducting a split test without a reason. If, for instance, you suspect the size of the call-to-action button is hurting conversions, you can design the variants with that specific focus. If you are conducting a split test just for the sake of it, refrain: you are setting yourself up for failure.

The Fix: Use heat map software to discover the areas of the page that aren't getting much attention or traffic. Form a hypothesis first, then conduct the split test, making sure to run it for a proper period of time. Compare the new heat map data with the old, analyze, and keep iterating until you get satisfactory results.

2. Calling off the test early

This is a classic rookie mistake. Let's assume your site receives high traffic, so within 3 days of starting the split test you reach 98% confidence and around 250 conversions per variation, and you call off the test. This is where you get a false positive: you have not accounted for seasonality, and even the day of the week on which you run the test can cause significant variation in the results.

The Fix: Another important parameter in any statistical analysis is sample size. To get reliable results, make sure your sample is large enough, on the order of several hundred to a thousand conversions per variation. Too small a sample can lead to the wrong conclusions.
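Rather than picking a round number, you can estimate the required sample size up front with a standard power analysis for two proportions. Below is a sketch using statsmodels; the baseline rate and the lift you hope to detect are assumptions to replace with your own figures.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.03   # assumed current conversion rate: 3%
target = 0.039    # smallest lift worth detecting: a 30% relative gain

# Cohen's effect size for the two proportions, then the per-variant
# sample size for 5% significance and 80% power.
effect = proportion_effectsize(baseline, target)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Visitors needed per variant: {n_per_variant:.0f}")
```

Run the test until each variation has at least that many visitors before drawing conclusions.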

3. Multi-element testing

Website heat mapping analysis might suggest that more than one area needs attention; however, running multiple tests at the same time gets you nowhere. In multivariate testing, suppose you are testing four web page versions that differ in two or three parameters: when the data comes in, you cannot isolate the actual deciding factor, and you have to compare the data from all the pages and untangle the correlations between parameters.

The Fix: Opt for a split test instead of a multivariate one. When you compare two pages that differ in only one parameter, it is easy to judge which page works better.
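To see why multivariate testing balloons, count the page versions: they multiply with every element you vary. The element names below are purely hypothetical.

```python
from itertools import product

# Three page elements, two options each (illustrative values).
headlines = ["Save time", "Save money"]
cta_colors = ["green", "orange"]
layouts = ["single-column", "two-column"]

variants = list(product(headlines, cta_colors, layouts))
print(len(variants))  # 8 page versions, each needing its own share of traffic
```

Eight cells means each version gets only an eighth of your traffic, which is why isolating the winning factor takes far longer than a one-variable split test.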

4. Focusing only on traffic conversions

When you are testing one or two parameters, think long-term instead of short-term. If certain changes bring more traffic to your site, don't get complacent: if that extra traffic consists of low-quality visitors, it may eventually hurt your business.

The Fix: Whenever you run a split test, don't stop at the conversion metric. Correlate that traffic with actual leads and check how many of them turn into potential customers.
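One hedged way to express this in code is to weight the raw signup rate by lead quality; the helper and the numbers below are illustrative only.

```python
def qualified_conversion_rate(visits: int, signups: int, qualified_leads: int) -> float:
    """Share of visits that become genuinely promising prospects."""
    signup_rate = signups / visits
    lead_quality = qualified_leads / signups if signups else 0.0
    return signup_rate * lead_quality

# Hypothetical: variant B pulls more signups but fewer qualified leads.
rate_a = qualified_conversion_rate(visits=10_000, signups=300, qualified_leads=150)
rate_b = qualified_conversion_rate(visits=10_000, signups=450, qualified_leads=120)
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}")  # A: 1.50%  B: 1.20%
```

By this measure the "losing" variant A is actually the better business outcome, which raw traffic numbers would hide.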

5. Opting for a random hypothesis or just blindly following split test practices

A statistical test has no significance without a proper hypothesis. So before spending time on a test, make sure you have one. If you are unsure about the credibility of your hypothesis, do some market research: analyze past results for your chosen variable, check your competitors' strategies, and get to know your target customers.

The Fix: While reviewing competitors' strategies, don't follow them blindly. What worked for someone else won't necessarily work for you. Study your competitors, but know your own USPs and strategize accordingly.

6. Overlooking confounding variables

Confounding variables are elements outside your hypothesis parameters that can still skew your test results. Examples include new product introductions, marketing campaign launches, and website redesigns.

This also happens when you change test parameters in the middle of a test to force a more significant variation, which can pull in traffic from outside your target pool.

The Fix: When performing a split test, control for such confounding variables and keep every other factor constant throughout the test.

7. Testing only incremental changes

There is a significant difference between how large websites operate and what smaller businesses need. For large websites, minute incremental changes can generate big ROI. But for startups and smaller companies, the same activity may not yield the expected outcome. For example, testing every shade of a CTA button's color is rarely worthwhile, as it adds very little to the overall site.

The Fix: Split tests can deliver minute improvements, but those won't produce a significant turnover for small businesses. So don't focus only on incremental changes; aim for large performance lifts. That means going for a radical change at the overall level, which is more intensive than a narrow A/B test and may entail a major page redesign requiring substantial effort. Note, however, that with multiple elements redesigned at once, it can be difficult to tell which particular element caused a spike in traffic once the new page is live.

8. Running split tests without enough traffic

If you have been running your business for only a couple of months, build up traffic before you start split testing. For startups and new ventures, running split tests on a handful of beta users is ineffective. Hypothesis testing is a game of statistical significance achieved with an adequate sample; without one, the underlying objective cannot be fulfilled.

The Fix: Go for split testing only when you have met three distinct conditions:

1- You have an adequate representative sample

The test needs to run for 3-4 weeks to cover diverse sales periods. Calling off the test before then will not capture a representative picture; it will reflect a selective, cyclical outcome instead.

2- You have a sufficient sample size

A website with more than 1,000 transactions a month (leads, signups, or subscriptions) can run one A/B test per month to improve traffic. Factors like conversion rate, total visits, and overall transactions determine the adequate sample size.

3- You achieve a significant p-value

Once points #1 and #2 are met, look at the p-value (contrary to a common misconception, the p-value is not the probability of B being better than A). A typical significance level is 5%: only 1 out of 20 times would the sample show a result this extreme if the null hypothesis were true. A sketch of the computation follows below.
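As a concrete example, the p-value for a finished test can be computed with a two-proportion z-test; the counts below are invented for illustration.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical final tallies after a full-length test.
conversions = [310, 370]      # variant A, variant B
visitors = [12_000, 12_000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"p-value: {p_value:.4f}")
# Declare a winner only if p_value < 0.05: a difference this large would
# arise by chance fewer than 1 in 20 times if the null hypothesis were true.
```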

9. Tests not run for full weeks

For split tests to yield proper results, seasonal, weekly, and even daily patterns matter, so the test duration should be chosen carefully. Failing to run the test in full weeks will skew the results, which then may not reflect the true picture.

The Fix: For instance, if you run an eCommerce site, your target audience may be more inclined to shop on weekends, whereas on Mondays or during weekday rush hours your site might not draw representative traffic. So make sure you end the test on the same day of the week it began. This ensures you test whole weeks in each iteration and aligns with the earlier recommendation of running tests for 3-4 weeks.
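A small scheduling helper makes the whole-weeks rule mechanical; the start date is arbitrary and the helper name is illustrative.

```python
from datetime import date, timedelta

def full_week_end(start: date, weeks: int = 3) -> date:
    """End the test on the same weekday it began, after whole weeks only."""
    return start + timedelta(weeks=weeks)

start = date(2023, 5, 1)              # a Monday, for illustration
end = full_week_end(start, weeks=3)
print(start.strftime("%A"), "->", end.strftime("%A"))  # Monday -> Monday
```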

10. Test data not sent to Google Analytics

Conversion metrics generally show averages, and in the world of marketing, averages lie. You can never get the full picture from percentage data alone, because time, seasonality, daily patterns, and many other factors shape the real customer graph.

The Fix: Once you have gathered a significant amount of data, send it to Google Analytics. Run advanced segments and custom reports; the results will point you toward more advanced tests and suggest where to take your site testing next. Newer GA features let marketers analyze up to 20 concurrent A/B tests; make sure to use a distinct Custom Dimension (or Custom Variable in GA classic mode) for each active experiment. Tools like Optimizely Classic can help here.
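As one hedged example of sending experiment data to Google Analytics programmatically, the sketch below uses GA4's Measurement Protocol rather than the classic GA mentioned above; the measurement ID, API secret, and event name are placeholders to swap for your own.

```python
import requests

# GA4 Measurement Protocol endpoint; credentials come from your GA property.
GA_URL = "https://www.google-analytics.com/mp/collect"
params = {"measurement_id": "G-XXXXXXX", "api_secret": "YOUR_API_SECRET"}

payload = {
    "client_id": "visitor-123",           # same id used to bucket the visitor
    "events": [{
        "name": "experiment_impression",  # hypothetical custom event name
        "params": {
            "experiment_id": "cta-copy-test",
            "variant": "B",
        },
    }],
}

requests.post(GA_URL, params=params, json=payload, timeout=5)
```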

To sum it up

Marketing and business analytics are complex in their own ways. So instead of copying what others are doing or blindly following the most common testing strategies, go the smart route: adopt cost-effective A/B testing to boost the ROI of your marketing.