Making Sense of Unexpectedly Flat Test Results


When a test variation represents a big, bold change, launching it is exciting. Win or lose, such a test seems certain to produce a dramatic impact—unless, of course, it doesn't. Odd as it may seem, sometimes even big tests produce a flat result.


But not every flat result is the same. To find out why a test failed to produce a clear change, you have to dive in and look for other variables not initially accounted for. Here are some suggestions for places to start.

Search for Errors

One thing worth double-checking when a test turns out flat—especially when a dramatic change comes up flat—is whether any errors are influencing results. These errors may exist in the test code or campaign setup, which is why a thorough QA before launching a test is so important. But there could also be problems with, or un-optimized elements of, the site itself. Fixing these should be a priority before launching another test.

Dig Into Data

Sometimes a test is flat—but only for the success metric. Knowing whether a variation had an impact in some other way—perhaps on a secondary KPI—is important and can help inform future test ideas. Perhaps adding pricing was meant to increase conversions but had no impact. Could this be because visitors were digging into other parts of the site for more feature information? Perhaps exit rates increased because visitors left to price-shop elsewhere. Or maybe the variation affected a different variable, like average order value.

Identifying this unexpected result shows that the test was only flat in one narrow sense—and reveals some valuable information about the behavior of visitors, too.
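As a rough sketch of this kind of secondary check, the snippet below compares a primary metric (conversion rate) against two secondary ones (exit rate and average order value) for a control and a variation. All figures and field names are invented for illustration; in practice these numbers would come from your testing platform or analytics export.

```python
# Hypothetical per-variation totals; all numbers are illustrative only.
control = {"visitors": 10_000, "conversions": 520, "exits": 3_100, "revenue": 41_600.0}
variant = {"visitors": 10_000, "conversions": 524, "exits": 3_600, "revenue": 47_160.0}

def rate(d, key):
    """Metric occurrences per visitor."""
    return d[key] / d["visitors"]

def aov(d):
    """Average order value: revenue per conversion."""
    return d["revenue"] / d["conversions"]

# The primary metric looks flat...
print(f"conversion: {rate(control, 'conversions'):.1%} -> {rate(variant, 'conversions'):.1%}")
# ...but the secondary metrics moved.
print(f"exit rate:  {rate(control, 'exits'):.1%} -> {rate(variant, 'exits'):.1%}")
print(f"AOV:        ${aov(control):.2f} -> ${aov(variant):.2f}")
```

Here conversion rate barely budges, while exit rate and average order value both shift—exactly the kind of "flat in one narrow sense" result the paragraph above describes.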

Explore Segments

Evaluating results by segment can also uncover hidden insights from a flat test. A variation could, for example, have a bigger impact on returning visitors than on new visitors. If new-visitor traffic is significantly larger, it can obscure the test's effect unless results are segmented. This is why segmenting results is a smart approach to secondary analysis when a test comes up flat.
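This dilution effect can be sketched with invented numbers: a variation that helps returning visitors while slightly hurting new visitors can net out to a perfectly flat blended rate. The segment names and figures below are hypothetical.

```python
# Hypothetical results split by visitor type (A = control, B = variation).
segments = {
    "new":       {"visitors_a": 8_000, "conv_a": 400, "visitors_b": 8_000, "conv_b": 384},
    "returning": {"visitors_a": 2_000, "conv_a": 120, "visitors_b": 2_000, "conv_b": 136},
}

def cr(visitors, conversions):
    """Conversion rate for a traffic bucket."""
    return conversions / visitors

# Blended view: pool all traffic before comparing.
totals = {k: sum(s[k] for s in segments.values())
          for k in ("visitors_a", "conv_a", "visitors_b", "conv_b")}
blended_a = cr(totals["visitors_a"], totals["conv_a"])
blended_b = cr(totals["visitors_b"], totals["conv_b"])
print(f"blended: {blended_a:.1%} -> {blended_b:.1%}")  # identical -- looks flat

# Segmented view: opposite effects that cancel out in the blend.
for name, s in segments.items():
    lift = cr(s["visitors_b"], s["conv_b"]) / cr(s["visitors_a"], s["conv_a"]) - 1
    print(f"{name}: {lift:+.1%}")
```

The blended rates are identical, yet returning visitors convert meaningfully better under the variation while new visitors convert slightly worse—an effect only visible once results are segmented.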

Flat results are frustrating, especially when dramatic variations are tested. But a flat result is not worthless. Diving into the data and rethinking the approach can still yield useful insights. It might not flip a test into a big winner—or reveal an obvious change worth implementing—but it may inspire future test ideas.