As the digital landscape evolves, many organizations continue to grapple with the challenge of incorporating Artificial Intelligence (AI) into their team's day-to-day practices. AI will reshape how we work, plan, strategize, and launch experiences, services, and products. That makes it much more than a technological advancement; it's a transformative shift that will impact teams at their core. In my role at Brooks Bell, I help our clients design the practices, processes, and systems that make an organization work efficiently and effectively. Because questions surrounding the integration of AI into organizational design continue to become priority agenda items in meetings, I thought it might be useful to centralize insights on how we've navigated the journey toward successful AI integration and team member onboarding, focusing on the people, processes, and governance it takes to bring this to life.
Setting the Vision
Expectations couldn't be higher for the efficiencies, creativity, and innovation that AI will unleash for organizations. Still, those expectations will come crashing down if leadership across the organization lacks a defined vision and purpose for what AI is bringing to the team. Without a well-defined vision of the organization's future state, including what AI should unlock and why, team members will jump into emerging platforms without clarity of intention and purpose. This leaves them guessing as to what's expected of them, how emerging platforms contribute to their individual, team, and organizational goals, and where they should look to leverage AI within current practices. The result is wasted investment and costly churn among the employees following leadership's direction (or lack thereof).
To avoid this, I encourage leaders to identify the role they want AI to play in the organization and communicate that message clearly and frequently across the organization. Consider which steps in the process AI can automate, where there are opportunities to integrate within existing tools, how to measure the platform's success, and how the measures of success for teams will evolve. This foundation will help your teams identify where and how to engage, clarify what's expected of them within their roles, and ultimately incentivize the desired behaviors to invest in and integrate AI into their work.
Building Trust with AI
Regardless of your seniority, tenure, or position within an organization, there's a lingering thought somewhere in your subconscious that AI will eventually replace you in your current role. This isn't a far-fetched idea: if team members don't evolve alongside AI, it can, and likely should, replace their day-to-day responsibilities. But if AI were viewed as a partner, a virtual team member that tackles tasks that would typically take days or months and frees up mental space to focus on bigger things, team members might be more receptive to the change that AI will certainly bring to their work. When first introducing AI to your teams, help team members identify the situations where AI might be a natural complement to the work they're already doing, or the activities where AI could bring efficiency and freedom to otherwise high-effort tasks. Consider platforms that teams already use to make the change less jarring: introducing AI within their existing Slack or Teams instance will be easier to manage than introducing a net-new platform for those team members to work within. This helps reinforce the idea that AI works for them, not vice versa.
AI’s Biggest Risk
People can build systems, models, and neural networks, but at the end of the day, they’re only going to be as good as we make them. The more we hand over control to AI, the more socially, psychologically, and emotionally dangerous those platforms can become, perhaps unintentionally, for certain groups of people — especially those underrepresented in the tech industry. While some elements of AI may be governed in certain regulated industries (like healthcare and financial services), most organizations will be forced to accept the platform’s ethics, standards, and governance models.
How can organizations reduce their risk in doing so? In the same way we challenge many of our existing assumptions at Brooks Bell: by continually defining, refining, and validating AI outputs. Redundancies should be built into the process to ensure that accountability is clear and the human element is represented. Establishing clear ethical guidelines and accessible governance models for AI is going to become one of the central challenges of this technological transformation.
While I acknowledge the high degree of risk involved with AI, there are aspects of it that I couldn't be more excited about. Chief among them is the emerging opportunity to study and better understand the evolving dynamics between humans and tools like ChatGPT, Google Bard, and Microsoft Copilot. It'll be fascinating to see how people adapt as AI becomes more prevalent and influences our work and behaviors; how innovators and early adopters begin to answer the critical questions and demonstrate the business success that the rest of an organization might be most curious about; how we evolve our workflows to build on AI's strengths while allowing ourselves the time and space to do work that we love; and how organizations measure the success of AI pilots and validate the ROI of their investments.
AI integration isn't just a tech upgrade; it's a transformative journey for your entire organization. To navigate this journey successfully, embrace it with an open mind, define clear strategies, establish rigor and discipline, and focus on helping your teams adopt emerging platforms that will benefit not only them but the entire organization and the experiences you're creating for your customers.