Wharton@Work

September 2024

AI on the Agenda: Three Tactical Strategies for Boards

The internet revolutionized numerous industries by fundamentally altering how businesses operate, communicate, and reach their customers. Companies that failed to adapt to this digital transformation (remember Sears and Blockbuster?) found themselves unable to compete — and eventually they disappeared. Today, it’s increasingly likely that businesses that don’t take AI seriously will experience a similar fate.

AI is poised to create business winners and losers, but as executives and operational teams rush to craft preemptive strategies, their boards can’t sit on the sidelines. The existential threat, and incredible opportunity, posed by AI necessitates their proactive involvement and immediate attention.

“AI is critical to boardrooms in three ways,” says Kartik Hosanagar, Wharton professor and faculty co-director of the AI at Wharton initiative. “First, boards must help guide leadership in defining and coming to a consensus on what success looks like and providing a timeline for it. Second, boards need to ensure AI use is pervasive, which requires a conversation on AI education. And third, they must work to ensure that the company's AI initiatives themselves don't become a source of risk to the company.”

Establishing Clear AI Success Metrics: Defining What Victory Looks Like

The first challenge for boards is to define the goals of the organization’s AI initiatives and establish metrics for success. Early AI projects, many of which will likely fail, provide experience with gathering and processing data at scale and with working around the limitations of today's AI models. These capabilities must be developed before companies embark on more ambitious projects, and they will remain valuable even as new AI models replace previous tech investments.

“But that doesn’t mean boards should abandon ROI metrics for their company's AI investments,” says Hosanagar, who co-directs the Generative AI and Business Transformation program. “Instead, they need to rethink ROI, focusing on learning, AI integration, and faster release cycles.”

“Given the rate of change in the AI market in terms of the number of new tools being released every week and the rapid changes in the capability set of existing tools, AI initiatives need to be managed differently than past tech initiatives,” he continues. That means iterating much faster, operating in three-month rather than two-year project cycles.

Hosanagar also says organizations should invest broadly instead of deeply, pursuing both long-term and short-term projects instead of one “big moonshot project” on which the future of AI in the company rests. “There is no need for firms to front-load their investments into AI. You can keep costs manageable by hiring slowly, yet consistently, over time and making use of marketplaces for machine-learning software and infrastructure.”

Championing AI Learning

As the board and executive leadership team develop an AI strategy, they must also create a comprehensive AI education strategy across the organization to combat anticipated inertia and resistance. “People feel threatened by AI,” says Hosanagar. “They hear the doom-and-gloom in the media and see that most use cases today are about cost-cutting. Boards should push the C-suite to clearly define their AI vision to their employees: How is AI going to grow revenues, augment the workforce, and help people unleash the best version of themselves?”

Education will also help the workforce develop rational expectations. Hosanagar explains, “Research shows that people don't trust algorithms after seeing them fail, even if that algorithm is performing way better than humans. Unless we train people, they will resist imperfect AI — and all AI is imperfect — even if it can deliver huge improvements.”

Ongoing efforts are also needed to keep pace with the new tools and models that are constantly emerging. These applications can enhance workflow and outcomes across the organization, so education should focus not only on top engineering and product teams, but also on non-technical employees.

Managing AI Risks

Hosanagar stresses that while AI presents many opportunities, it also represents a new form of risk for companies. “Large AI failures can drive reputational and litigation risks, as well as invite unnecessary regulatory interest,” he explains. “Boards need to ensure that comprehensive AI governance frameworks are established. These frameworks must encompass ethical guidelines, compliance requirements, and risk-management protocols. It’s not enough to simply implement AI; there must be oversight mechanisms in place to manage these systems effectively.”

“These mechanisms should include maintaining an inventory of high-risk models and applications, along with their continuous monitoring and assessment,” he continues. “Assessments should cover everything from the quality and potential biases in training data to stress-tests of the underlying models. It’s also important to scrutinize the output of these systems, paying particular attention to the frequency of hallucinations and whether the AI outputs come with explanations for users. This 360-degree approach to AI assessment ensures a holistic understanding of the system’s performance and potential risks. Fortunately, many vendors are now emerging to support these governance activities.”

The evolving regulatory landscape must also be an area of focus. Boards must stay informed and be ready to adapt their governance frameworks accordingly. It is essential that they engage with policymakers and industry groups, allowing their companies to help shape regulatory environments while also preparing for compliance with new rules.

“If companies haven't put the right governance frameworks in place,” says Hosanagar, “the AI risk may not come from being slow in adopting AI but from being too fast without the right checks and balances. At the same time, overly burdensome screening can slow AI adoption. For example, many companies are setting up multifunctional AI committees that need to bless AI tools before a rollout, but most of these committees haven’t figured out a methodology. They find it easier to sit on the decision than to make one. If the leadership of your organization wants to be AI-forward, the AI committee should not be the source of resistance within the company. The solution is to bake in a process for experimentation that allows for limited rollouts of AI applications as well as faster approvals of low-risk employee-facing applications.”