Wharton@Work March 2025 | Leadership

The Business Case for Proactive AI Governance

AI is already woven into the fabric of our lives, shaping everything from digital interactions and shopping to travel, medicine, work, and more. It's revolutionizing business operations, curating ads and entertainment recommendations, driving medical breakthroughs, managing home energy and security systems, and even balancing our portfolios.

But while almost every consumer uses products with AI features, a new Gallup survey finds that almost two-thirds of them aren’t aware of it. Globally, while a majority of consumers want generative AI integrated into their shopping experience, satisfaction with the technology dropped from 41 percent in 2023 to 37 percent a year later. The latest Edelman Trust Index finds that trust in AI companies has dropped globally from 61 percent to 53 percent over the last five years. In the U.S., the drop over the same period is more precipitous, from 50 percent to 35 percent. It seems that the more widely AI is adopted across industries and applications, the greater the disconnect between the technology and its end users.

To bridge this gap, businesses leveraging AI technologies must proactively prioritize transparency, ethical safeguards, and responsible innovation — ensuring they stay ahead of evolving regulatory developments and maintain consumer confidence. Since the publication of a previous version of this article in May 2024, several significant developments have emerged that can guide them, underscoring the importance of timely, proactive AI governance.

“Even organizations that haven’t made a significant investment in AI are feeling the pressure,” says professor Kevin Werbach, chairperson of Wharton’s Legal Studies & Business Ethics Department. “Most leaders, even those in tech, are thinking about AI only in terms of the technology. But that’s not what their customers are thinking. Very quickly they are starting to have to answer questions like, ‘What about the problems of bias? What about privacy?’”

The Evolving Regulatory Landscape

The European Union (EU) has made substantial progress with its AI Act, which aims to establish comprehensive guidelines for AI applications. The legislation seeks to ensure that AI systems are transparent, safe, and respectful of fundamental rights, and it categorizes AI applications by risk level (minimal, limited, high, and unacceptable). Companies operating within the EU or engaging with EU citizens must prepare for stringent compliance requirements to avoid potential fines and operational disruptions.

In the Asia-Pacific region, the Association of Southeast Asian Nations (ASEAN) endorsed the ASEAN Guide on AI Governance and Ethics, which includes seven guiding principles for AI development, a voluntary AI governance framework, and a series of national and regional recommendations.

China has emerged as a leader in AI governance with its Interim Measures for the Management of Generative AI Services, which require AI-generated content to align with national values and AI services to obtain government approval before public deployment. By early 2025, more than 40 AI models had been approved, reflecting China's strategy of balancing control with innovation.

AI policy in the United States continues to be shaped predominantly at the state level. States like Texas are proposing legislation, such as the Responsible AI Governance Act, which would impose obligations on developers and users of high-risk AI systems, particularly those influencing critical decisions in employment, health care, and finance.
However, at the federal level, recently adopted AI regulations have been rescinded. Businesses operating in the AI sector should closely monitor these federal policy shifts, as well as varying state regulations, to adapt their compliance strategies accordingly.

The Accountability Imperative

Werbach, who leads the new Wharton Accountable AI Lab, says he chose the lab’s name with intention: “‘Accountability’ is about making connections between the risks, the potential and real harms, and what actually happens to prevent them, to mitigate them, to understand and address them. It’s about having all those practices in place and doing it in a thoughtful, systematic, structured, rigorous way, which is consistent with how we think about things at Wharton.”

He continues, “Accountable AI is not just thinking about the risks, it’s not just asking from an abstract perspective what principles organizations should have about what they are doing with AI, and it’s not just saying we should be responsible about AI. It’s about determining how to put into place the kinds of practices and understanding it takes to ensure that AI systems are developed and deployed in ways that maximize their benefits and appropriately mitigate and address or redress the problems and harms.”

“One of the things I have found in speaking with companies is that most of them are really struggling to get on top of these issues. They don’t understand what other organizations are doing,” Werbach says. “There are a few companies that are very far advanced, especially some of the big tech companies, which have invested significantly in responsible AI and AI governance. But even they have questions about what they should be doing, whether they are appropriately addressing these issues, and what the data show about what kinds of governance are most effective. Most companies are not even at that point.”

The Case for Companies to Lead, Not Follow

Werbach says there are three reasons leaders need to understand and get ahead of the legal and ethical implications of AI now. The first relates to companies in regulated industries whose existing rules can be applied to AI. “If you're a health care company,” he explains, “you may be looking at your patient data and asking if you can use it to train a large language model. What about sharing it with your partners? Can you use it to create some sort of AI-based system, such as choosing people for clinical trials? These questions can be answered at least in part under existing HIPAA rules.”

Companies in other regulated industries, such as finance and telecom, are also familiar with the kinds of legal and ethical issues raised by AI. “They are going to be looking at data governance sooner in the process than those in other industries, in response to the European GDPR [General Data Protection Regulation, which applies to any company with European customers] and other kinds of requirements,” Werbach says. “They are already answering questions about where their data is coming from and whether they are collecting personal data, so they understand the need to get ahead of governance and ethical issues concerning AI.”

The second reason to get ahead of AI accountability is to avoid negative publicity and embarrassment. “Nobody wants to have egg on their face,” says Werbach. “Cases of bias are very embarrassing to companies, and there are many lawsuits about copyright, licensing of data, and privacy already going through the courts and covered in the mainstream media. But even if you’re not violating any laws, you want to avoid situations that could erode consumer trust and generate negative publicity, like building a chatbot that says something wrong or crazy, or that yells at a customer.”

Werbach illustrates the third reason with a familiar joke: Two people are on a camping trip when they hear a bear. One of them starts putting on their running shoes, and the other person says, “You're not going to outrun the bear.” The first one replies, “I don't have to outrun the bear. I just have to outrun you.” Werbach says, “Companies need to make sure they’re not in the bottom quartile on AI accountability, because that’s who the regulators are going to go after. You want to be able to say, ‘Here's our program. It might not be perfect, or 100 percent effective, but we can show you what we're doing.’”

Recent research indicates companies are heading in the direction of accountability. A study described in the Wall Street Journal reveals that 60 percent of large U.S. companies discuss AI-related risks in their 10-K filings. The Financial Times reported that more than half of Fortune 500 companies identified AI as a potential risk in their most recent annual reports, a significant increase from 9 percent in 2022.

“There is no doubt that an AI regulatory tidal wave is coming,” says Werbach. “You want to start building up structures and processes around AI now. If you wait until the last minute, you will have to rush to put something in place, potentially exposing your firm to legal liability. And with consumer trust already dipping, that kind of haphazard effort could cause it to erode further.”

To help companies avoid those last-minute, expensive, and haphazard efforts, Werbach serves as academic director of the new Strategies for Accountable AI program. Offered live online over eight weeks, it provides a holistic, interdisciplinary exploration of current AI uses and their legal and ethical implications. Participants will learn from and interact with faculty experts in management, marketing, law, technology, and analytics. Werbach also hosts a podcast, The Road to Accountable AI, that explores these topics and the latest developments.

“Every organization is going to be spending millions, if not billions, of dollars on AI governance, just as they're spending it now on data protection and privacy. They need to be educated about how to think about it and spend those dollars wisely,” says Werbach.