There’s No Such Thing as a Failed Test

What you’ll get from this post: an understanding of why you should switch your testing focus from “let’s identify the winner” to “let’s learn and iterate,” and why that shift matters.

Estimated reading time: 4 minutes, 50 seconds; approximately 890 words.


This guest post was written by Marina Rakhlin, a product manager at Monetate, a leader in cloud-based testing, email optimization and personalization software.

When it comes to testing, there’s pressure to find a winner. It’s what your boss is looking for, it’s what vendors are promising, and it’s what generates praise from the growth-hacker community. But all of this pressure has led to an unintended consequence: that a test not deemed a winner is instead deemed a failure.

There should be no such thing.

Even tests that signal a negative impact on conversion rate, or that seem not to have moved the needle on your site, are valuable and should be treated as learning experiences.

While everyone is looking for a marketing solution that will make their life easier, “easier” should apply to the ease of making business decisions based on the higher-quality information you get from your testing efforts. You’ll still have to work hard to get those golden nuggets of marketing wisdom.

Read more: Are You Sabotaging Your Website Optimization Program?

If you have an ambitious vision and a lot of theories you would like to test about the audience that comes to your site, your testing solution should empower you to gather data that reveals a result, not just hand you an answer.

The marketing automation space is crowded with testing solutions. And with all these tools at your disposal, it is easy to lose sight of what makes A/B testing such a valuable strategy for your business. You don’t have to be trained as a data scientist to learn a few guiding principles that will help you make the most of your testing strategy. Once your approach to testing has changed from “let’s identify the winner” to “let’s learn and iterate,” you will start seeing patterns that have the potential to be highly impactful for your business: patterns that previously would have been discarded as “negative” test results.

Here are the three pillars that will help you get out of the “failed test” mentality:

  1. Define a hypothesis
  2. Have a plan
  3. Iterate

Define a Hypothesis

This is the destination you define when you embark on your testing journey. How does the idea of a test get initiated in your organization? Testing should not be done on a whim when executives cannot agree on a site design and someone timidly throws out the “let’s just test it!” suggestion. Every marketing initiative should include a testing step. Whether you are testing content, design or audience, you should first define a hypothesis. It can be as simple as “we expect to see a change in behavior if a customer is sent through the new checkout flow” or as complex as “we assume having a video on product detail pages will result in a 0.7% lift in new visitor conversion.” Keep in mind that a hypothesis should always be formed as a statement, not a question.
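One way to keep hypotheses in statement form is to force them into a fixed structure before the test is built. Here is a minimal Python sketch; the field names are illustrative, not a feature of any particular testing tool:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A testable statement, not a question: what changes,
    for whom, and the single metric that will decide it."""
    change: str           # what the variation does
    audience: str         # who the test targets
    metric: str           # the primary metric
    expected_lift: float  # the minimum effect worth acting on

video_test = Hypothesis(
    change="add a video to product detail pages",
    audience="new visitors",
    metric="conversion rate",
    expected_lift=0.007,  # the 0.7% lift from the example above
)
```

If you cannot fill in all four fields, the idea is not yet a hypothesis and is not ready to test.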

Have a Plan

It is not enough to have a goal in mind; you also need a map that defines which route you will take to reach your destination. Identify from the get-go what will define the end of your test. Will it be a number of sessions reached? Time elapsed? Or hitting a certain number of visitors exposed to your campaign? If you stop your test as soon as a metric jumps out as significant, you can fall into the trap of the high variance that is typical of tests still ramping up traffic, which can produce false positive results early on.
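One common way to set that end point in advance is to compute the sample size your test needs before it starts. Here is a minimal sketch using the standard two-proportion z-test formula; the baseline rate and lift are illustrative:

```python
import math
from statistics import NormalDist

def visitors_per_variation(baseline_rate, min_detectable_lift,
                           alpha=0.05, power=0.8):
    """Visitors needed in each variation to detect an absolute lift
    in conversion rate with a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance
                     / (p2 - p1) ** 2)

# e.g. a 3% baseline and the 0.7% lift from the hypothesis example
print(visitors_per_variation(0.03, 0.007))  # 10369 visitors per variation
```

Committing to that sample size up front, and not stopping earlier, is what protects you from the early false positives described above.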

Download: Capture Test Learnings With This Outcome Matrix

Iterate

When you are ready to review the outcome of your test, use your hypothesis as a lens for analyzing your results. If you follow the first two steps, you will never have an “inconclusive test”: you will know whether your hypothesis has been confirmed or refuted. But don’t stop there. Did anything else catch your eye? Maybe you did not see the conversion lift you were going after, but you can see that the test resonated with one audience better than with another. We always suggest not being constrained by your Key Performance Indicators, but digging deeper by looking at how different audience segments performed in your campaign. Did this campaign resonate with customers from the Midwest? Did families convert at a higher rate than single households? Did you see a negative lift among people coming through your social media channels? Treat every test as a learning opportunity, then capitalize on those learnings with a follow-up campaign.
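As a sketch of what that digging can look like, here is one way to break results down by segment. The segment labels mirror the examples above, and the input format is an assumption about what your analytics export provides:

```python
from collections import defaultdict

def conversion_by_segment(visits):
    """Conversion rate per audience segment.
    `visits` is an iterable of (segment, converted) pairs."""
    tallies = defaultdict(lambda: [0, 0])  # segment -> [conversions, visitors]
    for segment, converted in visits:
        tallies[segment][1] += 1
        tallies[segment][0] += int(converted)
    return {seg: conv / total for seg, (conv, total) in tallies.items()}

visits = [("Midwest", True), ("Midwest", False), ("Midwest", True),
          ("families", True), ("families", True),
          ("social", False), ("social", False)]
print(conversion_by_segment(visits))
# {'Midwest': 0.67, 'families': 1.0, 'social': 0.0} (rounded)
```

Keep in mind that every extra segment you slice is an extra statistical comparison, so treat any single segment’s lift with caution (see the comment below on the multiple comparison problem).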

Remembering that your testing initiative is a journey will help you keep your eyes open for unexpected results and stay focused on a scientific approach to testing. You do not have to be a data scientist to get into the good habit of setting up your hypothesis, coming up with a test plan and treating every outcome as an opportunity to iterate.

Recently, at a business training session, a facilitator used the phrase “the learning is in the struggle.” Nowhere is that more appropriate than in testing. If you embrace the struggle, you’ll learn more about what’s working. The payoff is in the ease with which you’ll be able to make key business decisions.

Marina Rakhlin is a Product Manager at Monetate, where she focuses on data and on developing new capabilities for analytics and reporting. Marina holds an MBA from the Wharton School and was Director of Product at Stylitics, a fashion analytics startup, before joining Monetate.

Comments

brianlang:

“… but digging deeper by looking at how different audience segments performed in your campaign…”

If you do this, you need to ensure that you have accounted for the multiple comparison problem. If you don’t, you exacerbate the “false positive” issue.

Brooks Bell Inc (http://www.brooksbell.com/) replied:

Thanks for the comment, Brian. Great point.
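Brian’s caveat can be made concrete with a Bonferroni correction, one of the simplest adjustments for multiple comparisons. A minimal sketch, assuming each segment comparison produces a p-value; the segment names and numbers are illustrative:

```python
def significant_after_bonferroni(p_values, alpha=0.05):
    """Keep only the comparisons whose p-value clears alpha
    divided by the total number of comparisons made."""
    threshold = alpha / len(p_values)
    return {name: p < threshold for name, p in p_values.items()}

# p-values from three hypothetical per-segment tests
segment_p = {"Midwest": 0.03, "families": 0.004, "social": 0.20}
print(significant_after_bonferroni(segment_p))
# threshold is 0.05 / 3 ≈ 0.017, so only 'families' survives
```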