With the number of testing and personalization tools available, it can be difficult to choose one to invest in. But once you’ve selected a platform, deciding to transition to a new tool altogether can feel overwhelming.
But switches like this happen quite often. For many clients, cost is the deciding factor: a few testing tools offer similar capabilities at a lower price point. On the flip side, if your program’s budget and capabilities have grown, it may be time for an upgrade.
And although all testing tools offer similar core functions, each has unique features worth considering. Personalization, for example, has become a focus for many testing programs; perhaps you’re interested in a tool such as Evergage or Dynamic Yield that puts personalization at the forefront. Or your program has enough velocity to run multiple experiments simultaneously, and you’d make good use of Optimizely’s built-in mutually exclusive experiments feature. Maybe your company already uses other Adobe products, like Adobe Experience Manager, making Adobe Target a natural fit.
Whichever tool you choose, the next major obstacle is implementing it. Here are our tips for navigating the process:
First, examine your testing roadmap.
Take inventory of the tests that will be running close to the date when you plan to stop using your previous tool. Make sure they will have reached significance and be ready to be turned off before you lose access.
If your budget allows for it, we recommend giving your team a period when both tools are available. This keeps your testing cadence intact while your team gets up to speed on the new tool, and it makes for a smoother transition: current tests can run their course in the old tool while new ones launch in the new one.
Then, test your testing software.
While you might be excited to dive in and start launching tests left and right, it’s important to take the time to ensure your new tool is implemented correctly.
Run a QA test that visually changes the page to check that the code is being delivered and that any flicker looks reasonable. If the flicker is severe, you may need to move the testing tool’s tag higher in the head of your HTML, before other scripts and stylesheets.
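As a rough sketch of what that placement looks like, here’s a generic page head with the testing tool’s tag loaded first. The script URL is a placeholder, not any vendor’s actual tag; use the snippet your tool provides.

```html
<head>
  <meta charset="utf-8">
  <title>Example page</title>
  <!-- Testing tool tag placed as high as possible and loaded synchronously,
       so variant changes can apply before the page renders. The src below
       is a placeholder; substitute your vendor's snippet. -->
  <script src="https://cdn.example-testing-tool.com/snippet.js"></script>
  <!-- Stylesheets and other scripts come after the testing tool's tag. -->
  <link rel="stylesheet" href="/styles/main.css">
  <script src="/js/app.js" defer></script>
</head>
```

The trade-off is that a synchronous tag blocks rendering briefly, which is exactly what prevents visitors from seeing the original page flash before the variant loads.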
We also recommend running a live test without visual changes, just for the purpose of checking metrics. This enables your analyst to see that metrics are being tracked correctly within the testing tool, or if you’re using an outside analytics tool, that those metrics are being passed accurately to it.
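To make the metrics check concrete, here is a minimal sketch of pushing an experiment-exposure event into an analytics data layer so the analyst can compare counts between the testing tool and the analytics tool. All names here (the event name, field names, and `trackExposure` function) are illustrative assumptions, not any vendor’s actual API; tag managers such as Google Tag Manager use a similar `dataLayer` queue.

```javascript
// Illustrative sketch: queue an experiment-exposure event so testing-tool
// metrics can be cross-checked against an outside analytics tool.
// In a browser this would be `window.dataLayer`; names are hypothetical.
const dataLayer = [];

function trackExposure(experimentId, variantId) {
  // One exposure event per page view; the analyst compares these counts
  // against the testing tool's own reporting to confirm they line up.
  dataLayer.push({
    event: 'experiment_exposure',
    experiment_id: experimentId,
    variant_id: variantId,
    timestamp: Date.now(),
  });
}

trackExposure('qa-metrics-test', 'control');
console.log(dataLayer.length); // prints 1
```

If the exposure counts in the analytics tool diverge noticeably from the testing tool’s own numbers, that’s a sign the tag is firing at the wrong time or on the wrong pages.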
Once you’ve confirmed that visual changes are showing up as expected and metrics are tracking accurately, you’re ready to start using your new tool!
Switching testing software comes with its challenges. However, in the right circumstances, switching can offer substantial benefits to your testing program. Taking the time to pinpoint your reasons for switching, planning your testing roadmap carefully around the transition, and having patience as the new tool is implemented will ensure the transition goes smoothly.
Brooks Bell has over 15 years of experience working with enterprise brands to establish and scale their experimentation programs. We take a holistic approach to our technical diagnostics and analytics services, providing technology and data recommendations based on your business, your goals, your team, and your unique challenges.
What can Brooks Bell do for you?
✓ Clean, organize and centralize your customer data.
✓ Help you select the right A/B testing and personalization tools.
✓ Ensure your tools and systems integrate with one another.
✓ Train your developers and analysts.