To the brave, thank you for continuing to read!
First, the non-tech stuff
Adobe’s Target Standard was released last year, replacing Target Classic.
The benefits of Target Standard (now referred to simply as Target) are many. The WYSIWYG editor lets non-technical team members easily build, execute, and report on tests. It has an intuitive interface and offers the convenience of preview links and reusable audience targets.
The case of the curious code
Recently, we ran a one-day test in Target Standard. Because we wanted to get a robust view of the data we were collecting, we decided to send Target variables into Adobe Analytics. This is a common practice for us with our clients.
To accomplish this, we write a value to an eVar that is passed into Analytics, telling us which version of a page each user saw. As usual, we dynamically scraped the page to capture the campaign name and variation name.
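For context, the dynamic approach looks roughly like the sketch below. This is illustrative JavaScript only, not Adobe’s actual offer code: the `s` object stands in for the Adobe Analytics tracker, and `targetPageState`, `recordTestExposure`, and `eVar10` are hypothetical names.

```javascript
// Illustrative sketch only -- not Adobe's API. `s` stubs the Analytics
// tracking object; in production it is created by the AppMeasurement library.
var s = {};

// Hypothetical stand-in for scraping the page: imagine the active Target
// campaign and experience names are readable from the delivered content.
var targetPageState = {
  campaignName: 'homepage-hero-test',   // assumed name
  experienceName: 'challenger-b'        // assumed name
};

// Dynamic population: build the eVar value from whatever campaign data
// the page exposes, so one snippet works for every campaign.
function recordTestExposure(state) {
  s.eVar10 = state.campaignName + ':' + state.experienceName;
  return s.eVar10;
}

recordTestExposure(targetPageState);
// s.eVar10 is now "homepage-hero-test:challenger-b"
```

The appeal of this pattern is that the same snippet can ship with every campaign, with no per-campaign edits.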
We proceeded as usual for our one-day campaign.
The next day, when we were reviewing the data, we noticed an unreasonably large difference between the visitors recorded in Target Standard and those written to an eVar coming through Analytics. Furthermore, another campaign using the same implementation—but on a different domain—did not have this issue.
We put on our sleuthing helmets and dug in.
Upon further investigation, we noticed that another campaign that was running without any eVar code was showing up in Analytics, despite us never having intended to send that data there. To troubleshoot, we decided to run a couple of proof-of-concept campaigns.
We noticed two primary differences between the campaign that worked properly and the campaign that didn’t.
One difference was the campaign type. The campaign that worked properly was set up as an “Experience Targeting” (XT) campaign, and the campaign that didn’t work properly was set up as an “A/B Test.”
The other main difference was that one campaign’s code used the traditional dynamic method of writing the eVar, while the other hard-coded the value.
As a company firmly rooted in testing, we decided to collect some data about what was going on here.
We set up two test campaigns:
- XT campaign hard-coding eVar
- A/B test with two variations: A) dynamically writing the eVar and B) hard-coding an eVar
We found that the Analytics and Target visitor counts aligned most closely for the XT campaign and for Challenger B of the A/B test, confirming that dynamic population was the cause of the large discrepancy we saw.
What we learned
You may be wondering why this was never an issue for us in the past, given that dynamic population has always been our primary method.
Well, in the world of Target Classic, only one campaign would be delivered to any given page. With Target Standard, any number of campaigns can modify the same page as long as they are not modifying the same element on the page.
Since two campaigns were running on the same page, and the dynamic method simply looked for a campaign name to record, about half the time it grabbed the wrong one. That’s why the campaign we did not intend to send into Analytics came through.
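The failure mode can be shown with a toy simulation — plain JavaScript with assumed campaign names, nothing Adobe-specific. When two campaigns decorate the same page and the scrape just takes the first campaign name it finds, the recorded value depends on delivery order:

```javascript
// Toy simulation of the failure mode (assumed names; not Adobe code).
// Two Target Standard campaigns modify the same page; the dynamic scrape
// takes whichever campaign name it encounters first, and delivery order
// is not guaranteed.
var deliveredCampaigns;

function dynamicScrape() {
  // Grabs the first campaign it sees -- wrong whenever ours isn't first.
  return deliveredCampaigns[0];
}

// Hard-coding removes the ambiguity: each experience's offer code writes
// its own known campaign name, regardless of what else is on the page.
function hardCoded() {
  return 'our-ab-test';
}

var wrong = 0;
var runs = 1000;
for (var i = 0; i < runs; i++) {
  // Model unpredictable delivery order as a coin flip.
  deliveredCampaigns = Math.random() < 0.5
    ? ['our-ab-test', 'other-campaign']
    : ['other-campaign', 'our-ab-test'];
  if (dynamicScrape() !== 'our-ab-test') wrong++;
}
// Roughly half the hits record the other campaign's name.
console.log('wrong fraction:', wrong / runs);
```

Under Target Classic this never surfaced, because only one campaign could ever be on the page, so the first name found was always the right one.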
As technologies continue to evolve, so does our knowledge, and we always enjoy the opportunity to up our game. Since this discovery, we’ve made sure to hard-code all eVars when using Target Standard.
Brooks Bell helps top brands profit from A/B testing, through end-to-end testing, personalization, and optimization services. We work with clients to effectively leverage data, creating a better understanding of customer segments and leading to more relevant digital customer experiences while maximizing ROI for optimization programs. Find out more about our services.