Is AI up to No Good? What Research, Insights and CX Leaders Must Know

There have been tremendous advancements in CX technologies, but this one may be the biggest yet. You guessed it: I am talking about AI. At Brooks Bell, I lead programs to help organizations get the most value possible from insights across the business. There’s no question that AI will play a large role in that work as teams begin to onboard it.

Let me begin by saying I am excited to see how tools like ChatGPT, Google’s Bard and Wevo will enhance our processes and insights. On the other hand, I do have concerns about AI meeting the soaring expectations that have been set. I fear companies may make mistakes that will hurt their teams and long-term success.

In this article, I’ll be covering some of the considerations, benefits, and challenges that Research, Insights, and CX teams can expect. I’m also going to share some tips for getting started.

You are the People-Connector, Not AI

Leaders in Research, Insights, CX and related positions are people-connectors. We have the expertise and resources to get answers to burning questions, we add the human element to the huge amounts of quantitative data in our organizations, and we connect insights from one part of the organization to another.

Like many of my counterparts, I am excited to see the role that AI can play in this space. While the technical applications will shake out over time, I have one major piece of advice: remember the importance of being a people-connector. AI isn’t going to build and foster relationships – only you can. So while you may find new, cool applications for AI, ensure you’re doing what you do best when it comes to helping people learn, share, and grow together.

AI as a Research Assistant: The Big Test

It only takes dropping a few sentences into ChatGPT and asking it to write a customer persona or summarize brand perception to see that AI can’t be ignored in our space. This is exciting and scary. It’s exciting that we have advanced technology to quickly put our research study into words for us, but scary that some people may overuse it. By ensuring your team knows AI isn’t going to take their job as a researcher, they’ll feel more secure. And by integrating AI usage at the right time and in the right way, you’ll offload some time-consuming tasks so they can focus on more impactful things. It can be a win-win, but you have to do it right.
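To make that concrete, here is a minimal sketch of what "dropping a few sentences into ChatGPT" can look like when scripted, assuming the OpenAI Python client is installed and an API key is available; the model name, prompt, and sample notes are illustrative, not a recommendation.

```python
# A minimal sketch of using an LLM to draft a customer persona from raw research notes.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

raw_notes = """
Shoppers aged 25-40, mobile-first, abandon carts when shipping costs appear late,
respond well to loyalty discounts, frustrated by slow checkout.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model could be swapped in
    messages=[
        {"role": "system", "content": "You are a research assistant who drafts customer personas."},
        {"role": "user", "content": f"Draft a one-paragraph customer persona from these notes:\n{raw_notes}"},
    ],
)

# The draft is a starting point only; it still needs human review before it
# goes into any deliverable.
print(response.choices[0].message.content)
```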

Here’s how I recommend getting started: First, look for opportunities in your existing process to automate. Determine what tools you’ll use, when you’ll use them, and what points of redundancy or validation are needed. Establish these rules up front and communicate clearly with the team on how you see AI complementing their expertise.

I also highly recommend doing an experiment. For example, take a research study that’s already been completed and rerun parts of it, this time using AI at the points in the process you identified as good candidates. Document the pros, cons, what it got right, what it got wrong, and where human engagement is critical. Then do a side-by-side comparison evaluating things like quality of output, level of effort, and time needed. This real-life test will yield valuable learnings that you can apply to your future usage of AI.
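If it helps to keep that comparison honest, here is a lightweight sketch of one way to record the two runs side by side; the criteria, scores, and notes are placeholders you would replace with your own study's details.

```python
# A lightweight sketch for documenting the human-only vs. AI-assisted experiment.
# All scores and notes below are placeholder values, not real results.
from dataclasses import dataclass

@dataclass
class RunResult:
    approach: str           # "human-only" or "AI-assisted"
    quality_score: int      # e.g., reviewer rating from 1 to 5
    effort_hours: float     # analyst time spent
    turnaround_days: float  # calendar time from kickoff to report
    notes: str              # what it got right, got wrong, where humans were critical

runs = [
    RunResult("human-only",  4, 22.0, 10.0, "Baseline study as originally delivered."),
    RunResult("AI-assisted", 4, 14.5,  6.0, "Persona drafts needed correction; summaries held up well."),
]

# Print a simple comparison table for the team to review together.
for r in runs:
    print(f"{r.approach:12} quality={r.quality_score} effort={r.effort_hours}h "
          f"turnaround={r.turnaround_days}d | {r.notes}")
```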

It’s critical that your team knows how to work with AI – not against it or instead of it – so do what you can to help your team evaluate and establish the right boundaries.

Is AI a Con Artist?

The intricate workings and coded logic of AI remain largely shrouded. There’s limited visibility into its mechanisms, and implicit trust is very risky. That’s why I suggest treating AI as a con artist in your organization. If you assume AI is going to push boundaries and perhaps be out to get you, you’ll proceed with caution. And at this stage, I believe that’s the right approach.

This leads to governance. Deciding on accountability is very important. Where does the blame lie if a junior designer produces sub-par work via an AI tool? Is it the individual? The AI? The team leader? What checks and balances are built in to ensure only the right, intended outputs move forward?
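One possible check and balance, sketched below under my own assumptions rather than any established framework: AI-generated output only moves forward once a named human reviewer signs off, so the accountability question always has an answer. The field names and workflow are illustrative.

```python
# A hedged sketch of a human sign-off gate for AI-generated work: nothing ships
# until a named reviewer approves it, which records who is accountable.
# Field names and the example values are illustrative, not a prescribed schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AIDeliverable:
    content: str
    produced_by_tool: str
    requested_by: str
    approved_by: str | None = None
    approved_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        """Record which human accepted responsibility for this output."""
        self.approved_by = reviewer
        self.approved_at = datetime.now()

    @property
    def can_ship(self) -> bool:
        return self.approved_by is not None

draft = AIDeliverable(
    content="Generated landing-page copy...",
    produced_by_tool="ChatGPT",
    requested_by="junior.designer",
)

assert not draft.can_ship           # unreviewed AI output never moves forward
draft.approve(reviewer="team.lead")
assert draft.can_ship               # the sign-off trail answers "who is accountable?"
```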

Keep a watchful eye on AI and ensure you aren’t handing over control without proper governance. I predict we’ll see stories about this playing out in real life as AI adoption continues.

Top 5 Best Practices for Onboarding AI

Let’s quickly recap my tips for onboarding AI like a best-in-class organization:

  1. Remain a people-connector, even though AI may pull you away
  2. Evaluate your process to identify places to integrate AI and free up your teams for different types of work
  3. Help your teams leverage AI as an assistant, embracing its efficiencies while valuing human expertise
  4. Apply AI to a project you’ve already completed and compare the two runs on quality of output, level of effort, and time needed. Use this comparison to help tailor your AI solution.
  5. Treat AI as a con artist in your organization, trying to pull one over on you at every turn. This skepticism will protect you while teaching you where AI excels and lags.

While the AI wave is exhilarating, navigating with an informed compass is crucial. Its introduction promises a new era in research, insights, and CX – but the human touch remains irreplaceable. Balancing the two is the key to a brighter, more efficient future. If you or your organization needs help, I’d love to brainstorm with you.

Originally published on LinkedIn