Five Telltale Signs of a Broken A/B Test

Not every test is a winner, but how can you tell the difference between a solid experience that simply isn’t supporting your initial hypothesis and a broken experience that may be producing unreliable results? Here are a few situations where you may want to pause the test and check under the hood.

You notice a sudden change in the data

If your analyst monitors daily trends and notices a sudden drop in the primary metric being tracked, take a quick look at the live test experience to make sure nothing has changed. A modification to the default site may have impacted the test visually or functionally. If your testing tool allows it, consider setting up an alert that notifies you when a major change in trends occurs so you can make fixes as soon as an issue arises.
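
A simple version of such an alert can also live outside the testing tool. The sketch below assumes you can export daily visitor and conversion counts for a variant; the four-day minimum history and the 30% drop threshold are illustrative choices, not a prescription.

```python
# Minimal sketch of a daily trend alert, assuming you can export daily
# visitor and conversion counts for a variant from your testing tool.
# The thresholds and data below are illustrative only.

from statistics import mean


def daily_rates(daily_counts):
    """daily_counts: list of (visitors, conversions) tuples, oldest first."""
    return [conv / vis for vis, conv in daily_counts if vis > 0]


def sudden_drop(daily_counts, max_relative_drop=0.30):
    """Flag the latest day if its conversion rate falls more than
    max_relative_drop below the trailing average of the previous days."""
    rates = daily_rates(daily_counts)
    if len(rates) < 4:           # not enough history to judge a trend
        return False
    baseline = mean(rates[:-1])  # trailing average, excluding today
    today = rates[-1]
    return baseline > 0 and (baseline - today) / baseline > max_relative_drop


# Example: a steady week followed by a day where the rate collapses.
history = [(1000, 52), (980, 49), (1020, 55), (990, 51), (1010, 12)]
if sudden_drop(history):
    print("ALERT: primary metric dropped sharply - check the live experience")
```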

Inconsistent data coming from a particular audience

Is the test doing well for some but not all users? Dig into the data to see whether there’s a particular browser or device where the metric isn’t trending well. You may need to have your developer put in a quick fix or exclude that audience from the test altogether. To prevent issues like this in future tests, QA the test in the most common browsers and devices used by your customers. If you don’t have a wide variety of devices at your disposal, a service like crossbrowsertesting.com can help you cover your bases.
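
To make the segment comparison concrete, here is a minimal sketch that breaks the metric out by browser and device. It assumes a session-level export with illustrative column names (variant, browser, device, converted); your testing tool’s export will look different.

```python
# Minimal sketch of a per-segment breakdown, assuming a hypothetical
# session-level export with columns: variant, browser, device, and a
# converted flag (0/1).

import pandas as pd

sessions = pd.read_csv("ab_test_sessions.csv")  # hypothetical export

# Conversion rate and sample size for every browser/device combination,
# split by variant, so an underperforming segment stands out.
breakdown = (
    sessions
    .groupby(["variant", "browser", "device"])["converted"]
    .agg(sessions="count", conversion_rate="mean")
    .reset_index()
    .sort_values("conversion_rate")
)

print(breakdown.head(10))  # worst-performing segments first
```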

Very low engagement 

Seeing a surprisingly low number of click-throughs? That’s a good time to click through the tracked elements yourself and monitor the analytics calls. You may find that the metrics aren’t firing in all the right scenarios, or may be firing inconsistently.
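
One rough way to spot missing fires is to compare the same interaction from two sources, for example raw click logs versus the analytics tool’s event counts. The element names, counts, and 20% tolerance below are purely illustrative assumptions.

```python
# Minimal sketch of a tracking cross-check, assuming you can pull the
# same interactions from two sources: raw click logs and the analytics
# tool's event counts. All names and numbers here are illustrative.

raw_clicks = {"add_to_cart": 1840, "hero_cta": 960, "promo_banner": 410}
analytics_events = {"add_to_cart": 1798, "hero_cta": 310, "promo_banner": 402}

TOLERANCE = 0.20  # allow some normal loss from ad blockers, bounces, etc.

for element, clicks in raw_clicks.items():
    events = analytics_events.get(element, 0)
    if clicks and (clicks - events) / clicks > TOLERANCE:
        print(f"{element}: only {events} of {clicks} clicks tracked - "
              f"check whether the event fires in every scenario")
```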

Very high engagement 

If the results look too good to be true, they might not be true! You might find that the elements are being tracked incorrectly, creating misleading results. Be sure to have your Quality Assurance Specialist test all engagement tracking metrics during pre- and post-launch QA to catch these issues well in advance.
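
As an illustration, a quick check for double-firing tags might look for the same element being tracked twice within a second in the same session. The file name, column names, and one-second window below are assumptions, not part of any particular tool.

```python
# Minimal sketch of a double-firing check, assuming a hypothetical
# event-level export with columns: session_id, element, timestamp.
# Repeats within one second for the same element in the same session
# are treated as suspicious duplicates.

import pandas as pd

events = pd.read_csv("engagement_events.csv", parse_dates=["timestamp"])

events = events.sort_values(["session_id", "element", "timestamp"])
gap = events.groupby(["session_id", "element"])["timestamp"].diff()

duplicates = events[gap <= pd.Timedelta(seconds=1)]
rate = len(duplicates) / len(events) if len(events) else 0.0

print(f"{len(duplicates)} of {len(events)} events "
      f"({rate:.1%}) look like duplicate fires")
```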

Feedback from customer service calls

This may be the least desirable way of finding issues with a test, but direct customer feedback can help you track down very specific issues that may not reveal themselves in the test data. We recommend having an experienced Quality Assurance Specialist thoroughly check the test before and shortly after launch to reduce the chance that customers are the ones who find these trickier-to-catch issues.

If your company wants to take it a step further and monitor the customer experience in real-time, products like FullStory, ContentSquare, and Quantum Metric offer session replays that allow you to see exactly how customers are interacting with your experiments.

Even with the most rigorous QA process, you may occasionally see issues arise during launch or even midway through a test. As long as your company has the right tools and monitors the tests consistently, you’ll be able to develop and maintain high-quality experiments.