Winner winner chicken dinner! Discovering a winning variation is one of the most exciting moments for an optimization program. It’s the moment when all the work that went into creating a test finally pays off. But while you should always take time to celebrate your team’s accomplishment, hold off on busting out the champagne just yet. Your work has really only just begun.
Winning A/B tests can tell you a lot about your customers—what’s important to them, and why they respond the way they do. These results also enable you to quantify the impact a winning variation will have on your business’s bottom line (typically revenue), and to project what that impact looks like over the next year.
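One common way to do that projection is to apply the observed lift to baseline revenue and annualize it. Here's a minimal sketch; the function name and all figures (traffic, conversion rate, lift, order value) are illustrative assumptions, not real campaign data:

```python
# Hypothetical projection of a winning variation's annual revenue impact.
# All figures below are illustrative assumptions, not real campaign data.

def projected_annual_impact(monthly_visitors, baseline_conversion_rate,
                            observed_lift, average_order_value):
    """Estimate extra revenue per year if the winning experience is
    rolled out to 100% of traffic and the observed lift holds."""
    baseline_monthly_revenue = (monthly_visitors
                                * baseline_conversion_rate
                                * average_order_value)
    monthly_uplift = baseline_monthly_revenue * observed_lift
    return monthly_uplift * 12

# Example: 100k visitors/month, 2% conversion, 5% relative lift, $80 AOV
impact = projected_annual_impact(100_000, 0.02, 0.05, 80.0)
print(f"Projected annual uplift: ${impact:,.0f}")
```

Keep in mind this assumes the lift measured during the test persists year-round—in practice you may want to discount for seasonality and regression to the mean.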
Once you attribute a value to a winning experience, it’s critical that you also get the experience live on your site. This ensures you’re not leaving money on the table and also maximizes the impact of your testing program.
But to do this, you and your engineering team have to be on the same page. That is, you not only have to understand the way they work, but also deliberately establish a process for implementing winners into your codebase.
Most engineering teams operate using the Agile Method.
If you’re unfamiliar with Agile…well, first, what rock have you been living under? (Just kidding. But really?) Agile is a project management method that relies on incremental, iterative work sequences called sprints. For website developers and engineers, sprints typically last one to two weeks, though some teams run longer three- to four-week sprints.
Most Agile engineering teams organize their projects by way of a prioritized backlog. This backlog is often managed by the product team, though other teams can request additions as needed. During each sprint, developers will work to add features and make other site updates based on what’s listed in the backlog.
During a sprint planning meeting, it’s important that you communicate the importance and impact of your winning experience. The higher the impact, the higher the priority, and the more likely it’ll be included in the upcoming sprint.
Of course, delays are common, particularly when your shared development resources are balancing many competing priorities.
As an interim fix, you can use your testing tool to push the winner to production.
To do this safely, end the test campaign and duplicate the code into a new campaign, allocating 100% of traffic to the winner. We advise this method because pushing the winner through the original test campaign would risk displaying the losing experience to returning visitors who previously qualified for that experience.
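To make the distinction concrete, here's a hypothetical sketch assuming a generic testing tool where campaigns map visitors to weighted variations. The campaign names, field layout, and weights are all illustrative, not any specific vendor's API:

```python
# Hypothetical sketch of the interim rollout described above, assuming a
# generic testing tool where campaigns assign visitors to weighted
# variations. Campaign names, fields, and weights are all illustrative.

import random

def assign_variation(campaign):
    """Pick a variation for an incoming visitor according to the
    campaign's traffic weights."""
    names = list(campaign["weights"].keys())
    weights = list(campaign["weights"].values())
    return random.choices(names, weights=weights, k=1)[0]

# Original A/B test: ended, so returning visitors who saw the losing
# experience are no longer held in their old bucket.
ab_test = {"name": "homepage-hero-test",
           "weights": {"control": 50, "winner": 50},
           "active": False}

# New campaign duplicating the winning code, with 100% of traffic
# allocated to the winner until engineering ships it natively.
rollout = {"name": "homepage-hero-rollout",
           "weights": {"winner": 100},
           "active": True}

print(assign_variation(rollout))  # every visitor gets the winner
```

The key design point is that the rollout is a fresh campaign: reusing the original test campaign would carry over its old bucketing, which is exactly what re-exposes returning visitors to the losing experience.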
Of course, there are risks to using a testing tool in this way—even if it’s only a short-term solution. While you might be able to cash in quickly on your winning test, you could also face interference with future tests, maintenance issues, and reduced page performance.
Beyond analyzing your results and getting your winner into production, there’s one final step following the identification of a winning test: capitalize on the win within your organization.
Communicating big wins for the business and sharing customer insights drives momentum and support for experimentation within your company. Create powerful case studies; hone your storytelling technique to ensure you leave a memorable impression. Share your successes on Slack, by email, or at town halls, or host a webinar…the opportunities are endless. Find the communication channel that catches the most attention in your organization, and run with it!
In our experience, cross-functional alignment can be either the biggest barrier to or the largest contributor to the success of an optimization program. Have any additional ideas or examples of ways to create alignment around testing between engineering, product, and optimization teams? Let us know in the comments!