Wharton@Work

May 2024

Accountable AI: Reasons to Get Ahead of the Regulators

AI is permeating nearly every aspect of our lives, shaping our digital interactions, shopping, travel, medicine, work, and more in ways both subtle and profound. It is transforming business operations, choosing the ads we see, recommending what to watch or read next, interacting with us online and over the phone, guiding medical breakthroughs, managing our homes’ energy and security systems, and balancing our portfolios.

Yet the more widely it is adopted, across industries and applications, the greater the disconnect with consumers. New reporting from UNESCO, KPMG, and others shows steep declines in the public’s trust in AI, with no reason to expect it to recover any time soon. According to Edelman’s Trust Barometer, trust in AI companies fell from 50 percent to 35 percent in the U.S., and from 61 percent to 53 percent globally.

“Even organizations that haven’t made a significant investment in AI are feeling the pressure,” says professor Kevin Werbach, chairperson of Wharton’s Legal Studies & Business Ethics Department. “Most leaders, even those in tech, are thinking about AI only in terms of the technology. But that’s not what their customers are thinking. Very quickly they are starting to have to answer questions like, ‘What about the problems of bias? What about privacy?’ Europe just enacted a law about AI, and although the rest of the world is not talking yet about compliance, I expect formal legal requirements will come fairly soon.”

Werbach, who has taught a “Big Data, Big Responsibility” MBA course since 2016, says the organizations furthest along in the AI journey are asking those questions first and investing in AI governance to improve trust and safety. “But many others are starting to realize that they don’t want to end up with a legal or ethical issue. They may be using existing AI technologies to automate functions, screen resumes, or personalize messages to customers. But as all of these applications take place, leaders are recognizing that they need to have an answer to policy, ethical, and legal questions.”

Smaller companies, startups, and even consultants who share “responsible AI” frameworks with their clients have questions about how to manage the risks. They are realizing that the implications of deploying AI, and of being able to answer the question “What’s your responsible AI strategy?”, are relevant across and within organizations.

As Werbach explains, “The board is asking, investors are asking, and managers are asking. We don’t have regulations yet but a tidal wave is coming, and everyone wants to understand and think through the requirements. Right now, you can get away without a formal, comprehensive program, but that is not going to be the case for long.”

Why Companies Must Lead, Not Follow

Werbach says there are three reasons leaders need to understand and get ahead of the legal and ethical implications of AI now. The first relates to companies in regulated industries whose existing rules can be applied to AI. “If you’re a health care company,” he explains, “you may be looking at your patient data and asking if you can use it to train a large language model. What about sharing it with your partners? Can you use it to create some sort of AI-based system, such as choosing people for clinical trials? These questions can be answered at least in part by existing HIPAA [Health Insurance Portability and Accountability Act] rules.”

Companies in other regulated industries, such as finance and telecom, are also familiar with the kinds of legal and ethical issues AI raises. “They are going to be looking at data governance sooner in the process than those in other industries, in response to the European GDPR [General Data Protection Regulation, which applies to any company with European customers] and other kinds of requirements. They are already answering questions about where their data comes from and whether they are collecting personal data, so they understand the need to get ahead of governance and ethical issues concerning AI.”

The Federal Trade Commission Act, which broadly prohibits unfair and deceptive practices, is also being applied to AI. Werbach says the current FTC is using it to address “anything dealing with bias and discrimination. The EEOC [Equal Employment Opportunity Commission] has already brought cases about algorithmic discrimination. For example, a company that was hiring English-language tutors for people in China used an algorithm that excluded applicants over the age of 55. They claimed it was the algorithm’s fault, but they were sanctioned for it.”

The second reason to get ahead of AI accountability is to avoid negative publicity and embarrassment. “Nobody wants to have egg on their face,” says Werbach. “Cases of bias are very embarrassing to companies, and many lawsuits about copyright, data licensing, and privacy, covered in mainstream media, are already going through the courts. But even if you’re not violating any laws, you want to avoid situations that could erode consumer trust and generate negative publicity, like building a chatbot that says something wrong or crazy, or that yells at a customer.”

Werbach illustrates the third reason with a familiar joke: two people are on a camping trip when they hear a bear. One of them starts putting on their running shoes, and the other person says, “You're not going to outrun the bear.” The first one replies, “I don't have to outrun the bear. I just have to outrun you.” Werbach says, “Companies need to make sure they’re not in the bottom quartile regarding AI accountability, because that’s who the regulators are going to go after. You want to be able to say, ‘Here's our program. It might not be perfect, or 100 percent effective, but we can show you what we're doing.’”

If there’s any doubt that companies are heading in the direction of accountability, a recent survey of S&P 500 and Russell 3000 companies by the Weil, Gotshal & Manges law firm can put it to rest: more than 40 percent of companies in the S&P 500 and 30 percent in the Russell 3000 included AI-related disclosures in their annual reports on Form 10-K.

“There is no doubt that an AI regulatory tidal wave is coming,” says Werbach. “You want to start building up structures and processes around AI now. If you wait until the last minute, you will have to rush to put something in place, potentially exposing your firm to legal liability. And with consumer trust already dipping, that kind of haphazard effort could cause it to erode further.”

To prevent those last-minute, expensive, and haphazard efforts, Werbach is serving as academic director of the new Strategies for Accountable AI program. Offered live online over eight weeks, it provides a holistic, interdisciplinary exploration of current AI uses and their legal and ethical implications. Participants will learn from and interact with faculty experts in management, marketing, law, technology, and analytics.

Werbach also just launched a podcast, The Road to Accountable AI, that explores these topics and the latest developments. “Every organization is going to be spending millions, if not billions, of dollars on AI governance, just as they’re spending it now on data protection and privacy. They need to be educated about how to think about it and spend those dollars wisely,” says Werbach.