As an industry, we are on a collision course with mediocrity.
In the beginning, we were captivated by the promise the Internet held for marketing. Supported by new technology, we could suddenly apply the principles we had perfected in direct mail to gain new insights and achieve statistical significance in a fraction of the time. Better yet, as response rates declined in direct mail, they were skyrocketing in digital marketing channels.
But testing wasn’t easy at first. In many cases, we were the only ones banging the drum for testing. We needed to find an executive sponsor to support the program and argue for the expense of the tool. As the Internet evolved from a branding vehicle into an engagement and conversion platform, we also needed to change the way we engaged with customers, transitioning from a megaphone approach to a relationship-based one. And even after taking those two steps, we still needed to justify the expense and prove program ROI.
We needed highly trained people with specialized experience to build and analyze tests. We needed to work with IT, regardless of test complexity. The only way to run more tests was to hire more people, and running more tests meant buying more mbox calls. So that’s what we did: We recruited more people to execute more tests. In the beginning it seemed to work. We got wins. We proved program ROI. We made the stakeholders happy. But now we have a problem.
The problem is that we built up an appetite for testing that is satisfied by output, not by learning. We paid little attention to why a test won or lost, and far more to how big the team was, how fast tests could be launched, and how big the lift numbers were.
And now, testing tools are easier to use than ever. Everyone seems to be testing! This should be a good thing, but incremental lifts are getting smaller, and stakeholders still want more.
As an industry, we are demanding more customer insight from our tests, but we are staffed with the wrong type of experts. Tests have become so easy to execute that we see lazy ideas, defaults to “best” practices, and strategy diluted by people without the necessary experience or expertise. We have experts in execution; we need experts in strategy. At the same time, more websites are testing, which means small tactical tests might still win, but those wins won’t deliver the competitive edge they once did because everyone is running them. We have testing programs, which is good, but they are directed toward mediocrity.
But we can alter this course in three ways:
1. Educate the CMO that optimization is about business and customer insights, not just incremental lift in sales.
2. Fight for goal metrics that are about the further understanding of the customer.
3. Continually optimize our internal processes so that our existing staff can handle increased volume without hiring more doers.
Testing and optimization have helped digital marketers make huge improvements in the websites they manage, often resulting in significant lifts in revenue. But this early success can’t continue forever without smart management and oversight. It’s time to start building teams and processes for the next phase of testing.
Read more from Adobe Summit 2014:
Focus Returns to the Customer. Finally, We’ve Sobered Up
Reinvention Requires Risk. But Risk Doesn’t Have to be Risky
Important Takeaways From the Opening Session of Adobe Summit 2014