What you’ll get from this post: Three scenarios where using a testing tool to implement winning variations makes sense, and three problems that can make it a bad idea.
Estimated reading time: 2 minutes, 45 seconds; approximately 545 words.
Taking a test from brainstorming session to concept, and from development to launch, is cause for a small celebration in itself. And when the test generates a big lift, it’s even better. It would be easy to think that once the win is logged, the hard work is over, but often it has just begun.
Depending on an organization’s IT and development infrastructure, implementing a variation, even a proven winner, can be a long and fraught process. Sometimes this leads testing groups to lean on the testing tool as a means of serving winning experiences. Setting a winning variation to 100 percent of traffic with the intention of keeping it there can make sense, but it can also cause problems.
Here are the top reasons to use a testing platform to implement an experience:
1. The testing tool offers unique functionality
In some cases, a winning variation may require the use of a testing tool. One common example is the use of targeting—functionality few sites have built into their source code. When this happens, using the testing tool to power the targeting elements makes a lot of sense.
2. Implementing the winner is low in the queue
Another time a testing tool can be useful for implementing a winning variation is when IT or development teams plan to integrate the design, but have several more pressing projects in the queue. In this scenario, it is probably not worth removing a proven success just to wait for implementation. Serving it through the testing tool allows for a seamless transition once the development queue catches up.
3. A code freeze or sprint cycle would hold it up
Similar to the situation above, IT or development teams could be enforcing a code freeze or be locked into a sprint cycle that would delay the implementation of a winning variation. Again, serving the winner through the testing tool during this period makes sense.
But there are some problems with using this method, too. Here are some important issues to consider when using a testing tool for implementation:
1. Interference with future tests
Making a winning variation the new control is easy enough, but as tests pile up and the control accumulates more tweaks, development and setup inevitably become more complicated. If future tests will run on a page that already carries previous winners, the code from those winning challengers must be incorporated into each new variation; otherwise, the new variation will overwrite the previous wins. These complicated setups can slow test development, increase QA time, and hurt overall test velocity.
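To make the stacking problem concrete, here is a minimal sketch (all names hypothetical) that models each variation as a transform over a toy page object. A new challenger on the same page has to layer on top of the prior winner, or the winner’s changes are lost when the challenger is served.

```typescript
// Toy page model; each variation is a pure transform over it.
type Page = { headline: string; ctaLabel: string };

// Previous winner: changed the call-to-action label.
const winnerCta = (p: Page): Page => ({ ...p, ctaLabel: "Start free trial" });

// New challenger: changes the headline.
const challengerHeadline = (p: Page): Page => ({ ...p, headline: "Ship faster" });

const control: Page = { headline: "Welcome", ctaLabel: "Sign up" };

// Correct setup: stack the prior winner, then the new change.
const variation = challengerHeadline(winnerCta(control));

// Incorrect setup: serving the challenger alone reverts the CTA win.
const naive = challengerHeadline(control);

console.log(variation.ctaLabel); // "Start free trial" (win preserved)
console.log(naive.ctaLabel);     // "Sign up" (win silently overwritten)
```

Every additional winner adds one more layer to this stack, which is exactly why setup and QA time grow with each test kept running in the tool.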
2. Potential maintenance issues
Test code references specific elements in the source code of a page. When a variation is run for a long time, the chance this source code will change increases. And when the source code changes, the test code can break. This problem is particularly acute when testing is managed by one department—marketing, for example—and the website is managed by another—like IT.
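The breakage usually happens silently. Below is a hedged illustration with a toy stand-in for the DOM (the selector names are invented): variation code keyed to a specific element simply no-ops after an unrelated redesign renames that element.

```typescript
// Toy stand-in for the DOM: selector string -> element.
type Dom = Map<string, { text: string }>;

// Variation code written against the page as it looked at launch.
function applyVariation(dom: Dom): boolean {
  const el = dom.get("#hero-cta"); // selector hard-coded at test time
  if (!el) return false;           // element gone: variation silently no-ops
  el.text = "Start free trial";
  return true;
}

// Page markup at the time the test was built.
const original: Dom = new Map([["#hero-cta", { text: "Sign up" }]]);

// Same page after a redesign renamed the element.
const redesigned: Dom = new Map([["#cta-primary", { text: "Sign up" }]]);

console.log(applyVariation(original));   // true: winner served
console.log(applyVariation(redesigned)); // false: winner quietly breaks
```

Because the failure is a quiet no-op rather than an error, the marketing team running the tool may not notice the winner has stopped serving until someone checks the page by hand.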
3. Reduced page performance
A well-designed test will not noticeably hinder page performance, but it’s a simple reality of the technology that there is some lag between the source code loading and the test code modifying it. This lag may not be noticeable when your test is first developed, but if something changes on the site, even something that doesn’t break the test code, it can lengthen that window and create a noticeable “flicker.”
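One common mitigation, sketched here with a toy page object and hypothetical names, is the page-hiding pattern: hide the content as early as possible, apply the variation while it is hidden, and only then reveal it, so visitors never see the original state swap to the modified one.

```typescript
// Toy page model; "visible" stands in for an inline page-hiding style.
type FakePage = { visible: boolean; headline: string };

function serveWithAntiFlicker(
  page: FakePage,
  applyVariation: (p: FakePage) => void
): void {
  page.visible = false; // hide before the original content can paint
  applyVariation(page); // testing tool rewrites the page while hidden
  page.visible = true;  // reveal only after the swap is done
}

const page: FakePage = { visible: true, headline: "Welcome" };
serveWithAntiFlicker(page, (p) => { p.headline = "Ship faster"; });

console.log(page.headline, page.visible); // "Ship faster" true
```

The tradeoff is that hiding the page trades flicker for a blank screen, so real implementations pair this with a short timeout that reveals the page even if the test code never runs.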
Testing tools offer a fast, convenient way to push site changes to users—without getting help from IT or developers. But it’s important to realize that using testing tools in this way can have tradeoffs.