It’s Time for All of Us to Close the Utilization Gap

At this point, many companies have been testing for years—and many are still struggling to attain the key characteristics of successful programs. This struggle leads to frustration and frustration leads to speculation. Maybe it’s because testing tools aren’t tailored for mobile. Maybe it’s the difficulty of combining online and offline data. Or maybe it’s due to a lack of advanced personalization algorithms and predictive models. For testing to finally be successful, this logic suggests, ever more advanced methods must be employed—and the testing tools available just aren’t keeping pace.

The reality, however, is much different. Testing tools continue to add features, improve interfaces, and increase the sophistication of their built-in analytics. In fact, most testing tools on the market today allow us to build, launch, and analyze experiments of incredible complexity—and vendors are making the process easier all the time.
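
To put that claim in concrete terms, the statistics behind a basic A/B test readout fit in a few lines of code, and every mainstream tool automates far more than this. The sketch below is a minimal illustration in Python using statsmodels; the conversion counts are hypothetical.

```python
# Minimal A/B test readout: a two-proportion z-test comparing conversion
# rates between a control and a variant. The counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [620, 680]    # successes in control, variant (illustrative)
visitors = [10000, 10000]   # sample sizes for control, variant (illustrative)

z_stat, p_value = proportions_ztest(conversions, visitors)
lift = conversions[1] / visitors[1] - conversions[0] / visitors[0]

print(f"absolute lift: {lift:.2%}, z = {z_stat:.2f}, p = {p_value:.4f}")
```

If the analysis itself is this accessible, the obstacles must lie elsewhere.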

Indeed, it’s not a lack of advanced capabilities that holds testing teams back. What separates frustrated programs from successful ones is repeated stumbling over the same fundamental hurdles, and these hurdles have little to do with the power or functionality of the tools themselves.

Instead, the frustration is caused by a utilization gap—a discrepancy between the capabilities offered by testing tools and an organization’s ability to take advantage of these features. In most cases, the utilization gap is caused by a combination of three things:

1. Lack of Talent
When it comes time to expand a team, finding good analysts and developers can be an intractable problem. Research from McKinsey & Company has estimated a shortfall of 190,000 trained data analysts in the United States alone. And, as more companies decide to invest in Big Data and related fields, competition for the talent that does exist will only become more intense.

Beyond the general shortage of talent is the problem of having the right kind of talent. As testing expands within an organization, it’s common to find people in roles who lack the competencies needed to perform in the changing business environment. A survey of IT professionals conducted by SAS and IDG Research, for example, found that 57 percent of respondents said they lacked the skills to properly analyze data.

2. Unproductive Culture
Talent, of course, is just one of the contributing factors. Additionally—and perhaps more importantly—a company’s culture can enable or inhibit the work of those in the testing team. This is most evident in organizations where analysts are inundated by superficial reporting requests. Instead of spending time developing a deep analysis of data and forming new insights, analysts are reduced to “report monkeys” forced to perform a mechanical task that requires little thought and only a fraction of their expertise.

Building a data-driven culture is difficult, but with careful communication, tailored to the right audiences, managers and CMOs can create an environment that encourages the growth of individual team members and the program as a whole.

3. Inefficient Process
Poor documentation and a lack of governance lead to frequent busts and a loss of trust in data and testing. But process is important for more than just avoiding broken tests. It provides the framework in which testing takes place, outlines the means of communication across teams, and can help testing teams increase velocity and efficiency.
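
As a rough illustration of what that framework can look like in practice, here is a minimal sketch, in Python, of the kind of experiment record that basic documentation and governance imply. The field names are hypothetical and not drawn from any particular tool; the point is simply that a test carries an owner, a hypothesis, QA sign-off, and an end date before it launches.

```python
# Hypothetical experiment record: a lightweight structure capturing the
# documentation and governance details a test should have before launch.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExperimentRecord:
    name: str
    owner: str
    hypothesis: str
    primary_metric: str
    start: date
    planned_stop: date
    qa_signed_off: bool = False
    notes: list = field(default_factory=list)

    def ready_to_launch(self) -> bool:
        # A test should not go live without QA sign-off and a defined end date.
        return self.qa_signed_off and self.planned_stop > self.start

record = ExperimentRecord(
    name="homepage-hero-copy",
    owner="web-analytics",
    hypothesis="Shorter hero copy increases sign-up clicks",
    primary_metric="signup_click_rate",
    start=date(2015, 3, 1),
    planned_stop=date(2015, 3, 15),
    qa_signed_off=True,
)
print(record.ready_to_launch())
```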

The utilization gap reflects the capabilities of the testing team, but it also shapes the team’s perceived value of the testing tool. This perceived value is independent of price; it depends on the product’s ability to satisfy the customer’s needs. In other words, testing platform vendors are hard at work developing advanced features even while many customers struggle to make use of the basic toolset, and, as a result, the perception is that the tool is somehow deficient. This perception creates a negative feedback loop that reinforces inefficient processes and poor practices, justified by a belief that the available tools are inadequate.

For this reason, it’s critical that the entire testing industry—from analysts to CMOs, consultants to testing tool developers—work to close the utilization gap. Doing so will increase the perceived value of testing tools and, more importantly, accelerate the advancement of testing and optimization by elevating talent, culture, and process.
