A/B Testing Analysis: Top Questions to Ask Your Analysts

Sometimes, when I’m walking our clients through test results, I get the impression that they’re hesitating before asking me any questions. Maybe they think their questions are too obvious or will make them look like they don’t know what they’re talking about.

Regardless of the reason, here’s a little secret: good analysts like it when you ask questions, because questions show that you’re listening, that you’re internalizing what we’re reporting to you, and that you care about the numbers. And this—quite frankly—makes us happy.

Additionally, whether you’re directly involved in testing or simply an executive stakeholder, I believe it’s your right to truly understand the results put in front of you before you accept them as truth.

And the thing is, you really should care about the numbers. If no one cared about analytics, you would just make changes to your website under the assumption that all your customers would respond to those changes in the way you want them to.

You’ll also never know if the hours your team spent strategizing, brainstorming, concepting, and having those…um, “constructive” debates around button size, imagery, and headline copy were worth anything unless you’re also measuring the impact on your customers and on your bottom line.

It’s also important to align on what questioning is not. While I encourage questions, I, of course, discourage challenging or belittling. Rather, questioning should really be about taking the time to understand the test reports, presentations, and charts, and how the tested change impacts the overall experience.

If something looks askew, ask about it. If you’re working with good analysts, they’ll take the time to walk you through how the key data points were assembled.

Here are 10 questions to ask your analysts that will help you better understand your test results:

  1. Did the test reach the predefined required sample size?
  2. What was the Minimum Detectable Lift that we set in advance of launching the test? (A quick calculation sketch follows this list.)
  3. What do you think drove the change in the Primary KPI?
  4. Were daily performance trends consistent?
  5. Did a specific customer segment drive the positive (or negative) performance?
  6. How might seasonality factor in? Do you think the time period during which the test ran would influence expected performance if the change is made sitewide?
  7. Did Conversion Rate or Average Order Value (AOV) have more of an impact on the change in revenue?
  8. Did Average Unit Retail (AUR) or Units Per Transaction (UPT) have more of an impact on Average Order Value? (A quick decomposition sketch also follows this list.)
  9. How did progression through each incremental funnel step differ between the control and the challenger?
  10. Why do you think that is, within the context of the test? (Note: this question could immediately follow any of questions 3–9.)
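
To make questions 1 and 2 a little more concrete, here’s a minimal sketch of how a required sample size is typically estimated before a test launches, written in Python with statsmodels. The baseline conversion rate and Minimum Detectable Lift below are hypothetical placeholders, not numbers from any real test.

```python
# Hypothetical inputs -- swap in your own baseline rate and Minimum Detectable Lift.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_cr = 0.030          # assumed baseline conversion rate (3.0%)
min_detectable_lift = 0.10   # assumed relative Minimum Detectable Lift (10%)
target_cr = baseline_cr * (1 + min_detectable_lift)

# Convert the two conversion rates into a standardized effect size,
# then solve for visitors per variant at 80% power and alpha = 0.05.
effect_size = proportion_effectsize(target_cr, baseline_cr)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, power=0.80, alpha=0.05, ratio=1.0
)
print(f"Visitors needed per variant: {n_per_variant:,.0f}")
```

The smaller the lift you want to be able to detect, the more traffic the test needs—which is exactly why question 1 matters: a test that never reached its predefined sample size can’t reliably detect the lift it was designed for.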
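
Similarly, questions 7 and 8 come down to a simple decomposition: revenue per visitor is Conversion Rate × Average Order Value, and Average Order Value is Average Unit Retail × Units Per Transaction. The sketch below walks through that math with made-up control and challenger figures, purely for illustration.

```python
# Made-up control and challenger figures, purely for illustration.
control = {"conversion_rate": 0.030, "aur": 40.00, "upt": 2.0}
challenger = {"conversion_rate": 0.032, "aur": 40.00, "upt": 2.1}

def summarize(variant):
    aov = variant["aur"] * variant["upt"]                 # AOV = AUR x UPT
    revenue_per_visitor = variant["conversion_rate"] * aov  # revenue/visitor = CR x AOV
    return aov, revenue_per_visitor

for name, variant in (("control", control), ("challenger", challenger)):
    aov, rpv = summarize(variant)
    print(f"{name}: AOV = ${aov:.2f}, revenue per visitor = ${rpv:.4f}")
```

Laying the layers out side by side makes it easier to see whether a revenue change came from more orders, more units per order, or higher-priced units.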

In the end, your analysts will appreciate the dialogue you open up by asking these questions, and the alignment that follows. When everyone can speak the same language around testing methodology and results, we all win.