Proper testing culture is the foundation of successful online optimization. Over the past several years, I have identified four building blocks that are essential before the first test is launched.
The ability to identify the problem, even when things are going well, is a proactive way to approach testing. What leads up to a sale? What are people doing after my initial communication? How can we focus further on client activity? What is our true goal?
These are all great questions to answer, but sometimes you may be identifying the wrong problems. Testing is often approached with statements like “we need more traffic” or “we need higher click-through rates,” when what the process really needs is a better quantitative measure of the goal. Implementing proper tracking will provide this.
In many cases, the problem is simply that proper post-click tracking is missing from the landing page, or from the conversion flow that follows an email, newsletter, or banner ad click. A whole new world is revealed when you learn more about the “problem” through proper tracking procedures. Tracking gives you a better sense of A) which communications are turning into conversions, and, most importantly, B) whether you are focusing on the right problem, the one that will result in higher revenue.
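As a small illustration of the kind of post-click tracking meant here, one common approach is tagging campaign links so your analytics tool can tie conversions back to the email, newsletter, or banner that drove them. This sketch uses the widely adopted UTM parameter convention; the URL and campaign names are hypothetical.

```python
from urllib.parse import urlencode

def tag_link(base_url, source, medium, campaign):
    """Append standard UTM parameters so post-click activity
    can be attributed back to the originating communication."""
    params = urlencode({
        "utm_source": source,      # e.g. which newsletter or ad network
        "utm_medium": medium,      # e.g. email, banner
        "utm_campaign": campaign,  # the specific campaign or test cell
    })
    return f"{base_url}?{params}"

# Hypothetical example: a landing-page link inside a spring newsletter.
link = tag_link("https://example.com/landing", "newsletter", "email", "spring_promo")
print(link)
```

With tags like these in place, a conversion on the landing page can be traced to the specific communication that produced it, which is exactly the A)/B) visibility described above.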
Make sure those on-click and post-click tracking capabilities are in place; otherwise, defining the problem becomes extremely subjective.
The question for many clients is how to justify test iterations and understand how those basic iterations will affect business goals. The answer is that these iterations will slowly but surely deliver (sometimes small) performance lifts. When tests are run properly, a lift can be as small as 0.0001%, or sometimes less; the point is that there is some progress. The key is to remember that this takes time, and the time required depends on traffic: the more traffic you have, the less time it will take to get results. Many testing situations have low or limited traffic and will need patience, patience in waiting for test results that reach statistical significance. Statistical significance usually depends on around 100 successes, and this can take a while in some cases. Longer campaigns take 2–6 months to achieve statistical significance. This is where patience comes in.
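To make the traffic-versus-time relationship above concrete, here is a back-of-the-envelope sketch of how long a test must run before each variant collects roughly 100 successes, the ballpark the passage cites for significance. It assumes an even traffic split across variants, and the traffic and conversion figures are purely illustrative.

```python
def days_to_significance(daily_visitors, conversion_rate,
                         variants=2, successes_needed=100):
    """Rough days until each variant has ~successes_needed conversions,
    assuming traffic is split evenly across the test cells."""
    conversions_per_variant_per_day = (daily_visitors / variants) * conversion_rate
    return successes_needed / conversions_per_variant_per_day

# A low-traffic campaign: 500 visitors/day split across 2 cells, 2% conversion.
print(round(days_to_significance(500, 0.02)))   # prints 20

# Ten times the traffic cuts the wait to a tenth.
print(round(days_to_significance(5000, 0.02)))  # prints 2
```

Even this crude arithmetic shows why low-traffic campaigns can stretch to the 2–6 month range: halve the conversion rate or the traffic and the wait doubles.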
You see a lift, you identify a winning test cell, and then you reconstruct a hypothesis and apply it to the next test. This process needs to be repeated, and repeated again. A persistent approach to testing is the road to success. Build a case study of the test iterations, timelines, and wins, documenting both the successes and failures of new approaches. When you roll up the campaign at the end of the year, you will not only have seen those incremental lifts, you will also have established best practices for future efforts.
Without capable and available human resources, you are bound to miss the mark before testing even starts. If you don’t have the time or the knowledge to integrate tests or analyze results, you may need to consider outside resources.
Having someone who can truly decipher good data, and confirm that the test will perform and provide the insight you are looking for, is key. It is also important to have someone who can advise you on key learnings from test results, and on how to interpret those learnings for application in future testing efforts.
Hiring this type of “analytics lead” is extremely important for success. Otherwise, you will limit your actual learnings and actual results. Many times when we review case studies, the result of the testing turns out to be inconclusive, or simply wrong. Someone who can QA the test tracking and run a sample test before deployment will also save a lot of time and confusion.
Bottom line: make sure that you have a competent driver when you get behind the testing wheel.