
Running A/A Tests is a Waste of Time and Money


Lately, I’ve heard a lot of chatter about the best practice of running an A/A test before jumping into your A/B tests. The theory behind this practice is that running an A/A test first lets you confirm that your testing tool is splitting traffic and reporting on actions accurately. When running an A/A test, you hope to see roughly the same results in each variation, confirming the tool is working properly. That sounds like a safe, sensible practice, right? From my perspective, it’s a complete waste of time and resources. Here’s why.


Running A/A tests is a waste of time

You will be tempted to run an A/A test for just a few days and then jump into your testing program. That temptation is easy to fall prey to: you see a few days of data trending similarly, so why not pull the test and start running your next one? The problem is that, like any test, you need a minimum number of conversions and enough time for the data to normalize before declaring a completed test. In some cases that means at least 100 conversions for each variation and probably a week and a half to two weeks for your data to normalize. That time could be better spent generating insights and lifting conversions, right?
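To make that time cost concrete, here is a minimal back-of-the-envelope sketch in Python. The traffic and conversion-rate figures are hypothetical placeholders, not numbers from any real program; the point is simply that even a reasonably busy site ties up a week or two of traffic before an A/A test is trustworthy.

```python
# Rough estimate of how long an A/A test occupies your traffic before you can
# trust it. All inputs below are hypothetical -- plug in your own numbers.

daily_visitors = 20_000            # hypothetical visitors entering the test per day
conversion_rate = 0.02             # hypothetical baseline conversion rate (2%)
variations = 2                     # A and A
min_conversions_per_variation = 100
min_days_to_normalize = 10         # roughly a week and a half for weekly cycles to wash out

visitors_per_variation_per_day = daily_visitors / variations
conversions_per_variation_per_day = visitors_per_variation_per_day * conversion_rate

days_for_conversions = min_conversions_per_variation / conversions_per_variation_per_day
total_days = max(days_for_conversions, min_days_to_normalize)

print(f"~{days_for_conversions:.1f} days to reach {min_conversions_per_variation} "
      f"conversions per variation; run the test at least {total_days:.0f} days.")
```

With these made-up inputs the conversion threshold is hit in under a day, but the normalization window still forces a ten-day run, which is exactly the time you could have spent on a real A/B test.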


Running A/A tests is a waste of money

Of course, I could lead with the old adage that time is money, but there is an even deeper cost issue when it comes to running A/A tests. Your testing tool isn’t free, and your goal is to show an ROI on your testing program, where the tool itself is a big part of that “investment.” Do you really want to pay for Adobe Test&Target mBox calls simply to verify that the tool is working? And with flat-fee testing tools like Monetate, is it worth spending two weeks of your contract just to confirm everything is set up properly?

The Solution

If A/A tests are such a bad idea, why do so many people suggest them? From a pure experimental standpoint, running an A/A test makes perfect sense. But very few organizations have the luxury of a pure testing environment free of budget, ROI, and sales considerations. For example, I would love to take my client’s three million unique monthly visitors and message each one of them personally; just because it makes sense doesn’t mean it’s cost-effective or manageable. Online retailers also operate within promotional windows, so spending two weeks on an A/A test could prevent the actual A/B test from launching in time.

At Brooks Bell, we understand the need for clean data and accurate testing protocol. To deliver both while also getting conversion-lifting tests out faster, we built a multi-step QA process. It includes multiple people (to eliminate a single point of failure) acting on the conversion point in a live test environment and confirming that the conversion is tracked in a live data environment. The best part of this QA process is that we can verify the validity of the test setup in a matter of hours, not weeks. Since incorporating these steps into our process, we have dramatically increased our success rate, which means our clients get real tests out the door faster.

A Case for A/A/B Tests

Let’s say you still can’t let go of the idea of an A/A test. Again, it’s a sound practice, just not a practical one. If you have the budget and the traffic, a great alternative is to run an A/A/B test: you get validation, in a live testing environment, that everything is working as it should, and you run your real test at the same time.
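If you go this route, the sanity check on the two identical arms is simple arithmetic. The sketch below is one way to do it, using a standard two-proportion z-test rather than anything specific to a particular testing platform; the visitor and conversion counts are made up and would come from your own tool’s reporting.

```python
# Minimal sketch: check that the two identical "A" arms of an A/A/B test agree
# while the B arm keeps running. Counts below are hypothetical examples.
from statistics import NormalDist

def two_proportion_z_test(conv_a1, n_a1, conv_a2, n_a2):
    """Two-sided p-value for the difference between two observed conversion rates."""
    p1, p2 = conv_a1 / n_a1, conv_a2 / n_a2
    p_pool = (conv_a1 + conv_a2) / (n_a1 + n_a2)
    se = (p_pool * (1 - p_pool) * (1 / n_a1 + 1 / n_a2)) ** 0.5
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts pulled from the testing tool for the two control arms.
p_value = two_proportion_z_test(conv_a1=210, n_a1=10_000, conv_a2=198, n_a2=10_000)

if p_value < 0.05:
    print(f"p = {p_value:.3f}: the identical arms disagree -- investigate traffic split or tracking.")
else:
    print(f"p = {p_value:.3f}: no evidence the split or conversion tracking is broken.")
```

A large difference between the two A arms points at a setup problem (skewed traffic allocation or mis-firing conversion tracking), while agreement lets you read the B arm’s results with more confidence.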

Chime In

Your turn: How does your organization approach this issue? Do you run A/A tests a lot? Not at all? Do you run A/A/B tests? Let us know in the comments.