You’ve implemented a testing tool. You’ve launched a few tests. Results have come in. Improvements have been made. So, the program is a success, right? Maybe. But just as you wouldn’t launch a new site feature without testing it—or even measuring it—you shouldn’t let your testing program run without monitoring.
How can we calculate the success of a testing program? The most obvious metric is win rate. Simply put, it is a measure of the number of tests that produce an increase in conversions, page views, clicks or some other KPI defined in the test objective. Not every test will win, but increasing this number shows that you are learning about your audience and customers—and developing smart solutions to better serve their needs.
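As a quick sketch (using made-up results), win rate is simply the share of concluded tests that beat their control on the test’s KPI:

```python
# Hypothetical results: True means the variant beat its control
# on the KPI defined in the test objective.
results = [True, False, True, True, False]

# Win rate: winning tests as a share of all concluded tests.
win_rate = sum(results) / len(results)
print(f"Win rate: {win_rate:.0%}")  # prints "Win rate: 60%"
```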
Win rate, then, is an essential indicator of overall program success. But taken at a more granular level, win rate is more a representation of test ideation than it is program process or efficiency—and these are two elements that must be measured to be effectively managed.
To monitor efficiency, it’s important to keep an eye on test velocity. This metric is a measure of the number of tests you launch, calculated either by week or month as appropriate. Increasing velocity is especially important for new testing programs—which may struggle to get tests from the whiteboard brainstorm to a full, successful launch.
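Velocity is just a count of launches per period. A minimal sketch, assuming you have a list of launch dates (the dates below are invented):

```python
from collections import Counter
from datetime import date

# Hypothetical launch dates for tests this quarter.
launches = [date(2023, 1, 3), date(2023, 1, 9), date(2023, 1, 24),
            date(2023, 2, 6), date(2023, 2, 20), date(2023, 3, 14)]

# Count launches per month; for weekly velocity, group by
# d.isocalendar()[:2] (ISO year, ISO week) instead.
monthly = Counter((d.year, d.month) for d in launches)
print(dict(monthly))  # e.g. {(2023, 1): 3, (2023, 2): 2, (2023, 3): 1}
```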
The Forgotten KPI?
To place an emphasis on successful launches, a third KPI is needed. The best metric for monitoring program quality is the bust rate. This is a measure of the number of tests that broke, failed to properly collect data, interfered with an existing site element, or in some other way created a problem that prevented the test from continuing and had to be addressed immediately. A high number of “busts” is an indication that there are gaps in the quality assurance process—and the testing process in general.
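Bust rate follows the same pattern as win rate: flag each test that had to be pulled, then divide by total launches. A sketch with an invented test log:

```python
# Hypothetical test log: "busted" marks a test that broke, failed to
# collect data, or otherwise had to be stopped and fixed immediately.
tests = [
    {"name": "hero-copy", "busted": False},
    {"name": "checkout-cta", "busted": True},
    {"name": "nav-redesign", "busted": False},
    {"name": "pricing-page", "busted": False},
]

busts = sum(t["busted"] for t in tests)
bust_rate = busts / len(tests)
print(f"Bust rate: {bust_rate:.0%}")  # prints "Bust rate: 25%"
```

A rising bust rate is the signal to audit your QA process before chasing more velocity.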
Whether your testing program is big or small, new or mature, this is a critical number to watch. Even here, at Brooks Bell, we monitor busted tests weekly—celebrating the zeros and analyzing the occasional mistake, looking for opportunities for improvement and lessons to be learned.
Increasing the win rate is clearly a top priority, but focusing on this number exclusively may lead to confounding performance plateaus. A robust testing program with consistent growth needs a more holistic plan for monitoring success. These three metrics are a great place to start—and working to improve all three will lead to faster, more consistent, and more successful tests.