Testing: 6 Ways You Could Be Doing It Wrong

Companies can no longer afford to make marketing decisions based on even the most respected opinions. Gone are the days when intuition and research alone were enough. If you're a marketer and you aren't already testing, you should be. But it's not wise to jump in without a plan; there are several best practices that make testing more successful.

Here are 6 common testing and optimization pitfalls marketers should avoid.

1. You test without a goal in mind. What are your business goals, and what do you want your online marketing to accomplish? More conversions? A higher average order value? More newsletter registrations? It doesn't matter what the goal is, as long as you have one and test toward it. Let's say your CEO has tasked you with increasing the average order value (AOV) of transactions on your website. You implement a free shipping test, and by the end of the test your conversions show a significant lift. Take a deeper look at the data. Did the free shipping test only increase the number of conversions? Or did it also increase the AOV of your sales that month, which was your goal? You can have a good test, but whether it was successful depends on the goals you are testing toward.
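The point above is easy to check in practice: compute each goal metric separately instead of stopping at the conversion count. A minimal sketch with made-up order data (these numbers are illustrative, not from a real test):

```python
# Hypothetical order data; each tuple is (period, order_value).
orders = [
    ("before", 80.0), ("before", 120.0), ("before", 95.0),
    ("after", 70.0), ("after", 75.0), ("after", 85.0), ("after", 90.0),
]

def summarize(period):
    """Return (conversion count, average order value) for one period."""
    values = [v for p, v in orders if p == period]
    return len(values), sum(values) / len(values)

before_count, before_aov = summarize("before")  # 3 orders, AOV ~$98.33
after_count, after_aov = summarize("after")     # 4 orders, AOV $80.00
# More conversions after the test, but a *lower* AOV: the test "won"
# on conversions while moving the CEO's actual goal metric backward.
```

If AOV is the goal your CEO set, the second number is the one your recommendation should hinge on.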

2. You fall victim to congruence bias. Don't test to prove yourself right; test to find the best answer. The reason you are testing is that you want to know what your users actually think and how they behave. It is easy to look at the data one way, or manipulate it, to prove an idea right or wrong. Make sure you, or a member of your team, is analyzing the data and looking below the surface. What you find may surprise you. For example, one of our conversion teams at Brooks Bell was running a test for an ecommerce client. We had a successfully optimized landing page running against a set of branded PPC terms, and we wondered whether that same page would perform as well on the non-branded terms, so we tested it. Looking at the average order value (AOV) alone, we saw a 26% lift, and initially it appeared our hypothesis was right. Then the team examined revenue per visitor (RPV) and sales: the 26% lift in AOV was not enough to offset a 23% loss in RPV. A test can increase one metric, but did you see the same lift in other metrics? Which one matters more to your company? Analyzing the data from different angles gives you a more complete picture and lets you recommend better business decisions.
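The AOV/RPV tradeoff above comes down to simple arithmetic: RPV equals conversion rate times AOV, so AOV can rise while RPV falls whenever the conversion rate drops by more than the AOV gain. A sketch with illustrative numbers (not the actual client data):

```python
# Illustrative numbers only -- not the Brooks Bell client's real data.
def rpv(conversion_rate, aov):
    """Revenue per visitor = conversion rate x average order value."""
    return conversion_rate * aov

control = rpv(conversion_rate=0.0400, aov=100.00)     # $4.00 per visitor
# Challenger: AOV up 26%, but a lower conversion rate drags RPV down.
challenger = rpv(conversion_rate=0.0245, aov=126.00)  # ~$3.09 per visitor

aov_lift = (126.00 - 100.00) / 100.00          # +26% AOV
rpv_change = (challenger - control) / control  # roughly -23% RPV
```

This is why no single metric tells the whole story: the same test is a winner on AOV and a loser on RPV.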

3. You don't have a testing strategy. Testing is an exciting process. Watching new ideas, designs, and concepts increase conversions on your website or landing page is rewarding, and it is easy to get carried away, especially after a big win. Make sure you and your testing team stay methodical: don't just chase the big wins and hop around your website. Test with a strategy and goals in mind.

4. You leave the IT team out of the process. As a marketer, one of the most important things your team can do is join forces with IT. Even the most tech-savvy marketers don't know where all the hiccups lie in the development of a test. Your tech team will execute your testing and optimization ideas, help analyze the data, and help you navigate those land mines ahead of time. Set up regular joint meetings between the IT and marketing teams, keep the communication lines open, and make sure the tech team knows their knowledge and ideas are appreciated.

5. You ignore small tests. As marketers, we are often creative, visual people. We love clean design, great photography, and powerful copy, and we tend to assume that adding more pictures, more detailed copy, and so on will lead to a better customer experience. Don't lose sight of smaller tests, or of tests that actually reduce the information you give the customer. During a recent brainstorming session about a client's product page, we realized many users were clicking the additional-information links on the page and not converting. One link alone received 58% of all clicks on the page. Since there was clear interest in this extra information but it wasn't leading to conversions, we hypothesized that all of these links were overwhelming users and causing decision paralysis. We built a challenger cell that reduced the number of additional-information links, and it delivered an 18% lift in conversions. Just one example of a test that proved less is more!

6. You don't allow a test to stabilize. Imagine this: you launch a test, and on day one the lift in conversions is amazing; you've already hit a 99% confidence level. You get excited, do the happy dance, and start sharing your big win around the company. Then, several days later, your test starts to tank. Conversions bottom out, and suddenly you're 99% confident that your test cell is losing. What happened? Why did your test go from 'hero to zero'? Marketers need to understand that tests need time, typically at least 4-5 days, to normalize: early results rest on small samples and haven't yet absorbed day-of-week and traffic-mix swings. Whatever tool you use, whether Adobe Test&Target, Monetate, Optimizely, Visual Website Optimizer, or Google Analytics, give the results time to stabilize before you call a winner.
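The 'hero to zero' pattern above is partly a statistical artifact of checking significance every day. A hypothetical simulation makes the point (this is a generic peeking demo, not how any of these tools actually compute significance): run A/A tests where both cells convert at the same rate, peek daily with a standard two-proportion z-test, and count how often we'd wrongly declare a winner.

```python
import math
import random

def z_test_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from a two-proportion z-test (pooled variance)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(42)
RATE = 0.05           # both cells convert at 5%: an A/A test, no real winner
DAILY_VISITORS = 500  # per cell, per day (made-up traffic level)
RUNS = 200
false_calls = 0
for _ in range(RUNS):
    ca = na = cb = nb = 0
    for day in range(14):
        na += DAILY_VISITORS
        nb += DAILY_VISITORS
        ca += sum(random.random() < RATE for _ in range(DAILY_VISITORS))
        cb += sum(random.random() < RATE for _ in range(DAILY_VISITORS))
        if z_test_p(ca, na, cb, nb) < 0.05:  # "95% confident" at today's peek
            false_calls += 1                 # declared a winner that isn't one
            break
print(f"Declared a winner in {false_calls}/{RUNS} A/A tests")
# Typically well above the nominal 5%, because every daily peek is
# another chance for noise to cross the significance threshold.
```

Letting a test run its planned course, rather than stopping at the first green number, is what keeps day-one "wins" from turning into week-one losses.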

What other testing & optimization pitfalls should marketers avoid?  Tell us about them in the comments below.
