Wharton@Work

July 2022

Beyond the Headlines: Harnessing AI for Your Business

Recent news about Google’s AI chatbot sounds like a Hollywood sci-fi script. Can a machine that was programmed by humans to converse with other humans become sentient, able to think and feel beyond what it has been tasked with doing? That’s the contention of a Google engineer, who as of this writing is on administrative leave. Stories like this, and high-profile coverage of AI bias that can lead to unlawful decisions and practices, have fueled distrust of AI at the highest levels of business.

But that distrust comes at a time when companies’ need for AI and machine learning to help them make sense of mountains of data, or sort through thousands of resumes submitted for a handful of job openings, is peaking. Wharton marketing professor Raghuram Iyengar calls it a perfect storm: “There is an unprecedented talent shortage, consumer behavior is changing radically, and businesses are collecting more and more data that they need to make sense of, all of which can be addressed with the help of machine learning.” But leaders are right to be cautious, he says. They don’t need to be wary of a system “going rogue” like HAL (short for Heuristically programmed ALgorithmic computer), the antagonist in 2001: A Space Odyssey. Caution is required, though, because there are serious potential downsides and costs to consider.

Beyond Sensational Headlines: Practical Uses

Iyengar says embracing analytics (which includes the use of AI and machine learning) takes know-how, a solid understanding of one’s goal, and some courage. “The controversy about AI is nothing new. Go back to 2012 and ’13, when Google created the word2vec algorithm. They fed it an enormous number of words and programmed it to learn relationships among them over time. That led to other language models, including GPT-3 (Generative Pre-trained Transformer 3), which uses deep learning to produce human-like text. It is a natural-language AI that can write original stories and articles. Give it a few phrases, and it can come up with a whole poem.”
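To see what “learning relationships among words” means in practice, here is a minimal sketch using the open-source gensim library. The toy sentences and parameter choices are assumptions made purely for illustration; real models of the kind Iyengar describes are trained on billions of words.

```python
# Minimal word2vec-style sketch with gensim: train word vectors on a few
# toy sentences, then ask which words the model treats as most similar.
from gensim.models import Word2Vec

# Tiny, invented corpus purely for demonstration.
sentences = [
    ["the", "flight", "was", "delayed", "by", "weather"],
    ["the", "flight", "departed", "on", "time"],
    ["our", "plane", "was", "delayed", "again"],
    ["the", "plane", "departed", "late"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=200)

# With a real corpus, words used in similar contexts ("flight", "plane")
# end up with similar vectors; this toy run just shows the mechanics.
print(model.wv.most_similar("flight", topn=3))
```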

Early examples of AI fall under what Iyengar calls supervised learning. Machines are given a lot of data, and they learn certain rules for dealing with it. They can then apply those rules to new, similar data. Over time, on certain routinized tasks, these machines can be far more efficient and make fewer errors than a human. “When you call an airline, for example, you don’t need a person to tell you what time your flight is, or whether it is delayed. A computer can do well with those kinds of calls,” he says.
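A minimal sketch of that idea in Python, using scikit-learn: the model is “supervised” by labeled examples, learns decision rules from them, and then applies those rules to cases it has not seen. The flight-delay features and labels below are invented for demonstration, not taken from the article.

```python
# Minimal supervised-learning sketch: fit a classifier on labeled
# examples, then apply the learned rules to new, similar data.
from sklearn.tree import DecisionTreeClassifier

# Invented training data: [departure_hour, weather_severity_0_to_3]
X_train = [[7, 0], [9, 1], [17, 3], [18, 2], [6, 0], [20, 3]]
y_train = ["on_time", "on_time", "delayed", "delayed", "on_time", "delayed"]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)   # the labels in y_train are the "supervision"

# The fitted model applies what it learned to unseen cases;
# with this toy data it will likely print ['on_time' 'delayed'].
print(model.predict([[8, 0], [19, 3]]))
```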

“Many mundane, routine tasks, like airline calls, can be offloaded,” Iyengar continues. “But imagine you are trying to figure out whether there is an abnormality on an MRI. People from the machine-learning side are saying, ‘Radiologists are going to be out of a job.’ But on the other side, when you talk to radiologists, their view of the world is that yes, tasks that involve looking at a single picture may be eliminated. But that means a radiologist can focus on tasks that are much more involved, and perhaps much more satisfying – looking at many pictures and an overall case. When the simpler tasks are taken care of, the hope is that the human expert is free to spend more time where their expertise is required. I think that harder task will still remain.”

Iyengar works with business leaders interested in harnessing the power of AI to solve some of their current challenges in the Analytics for Strategic Growth: AI, Smart Data, and Customer Insights program, of which he is academic director. “We talk quite a lot in class about these developments in terms of business decisions. There are many companies, small and big, that are figuring out what functions can be automated and what functions still require a human touch. They are looking at AI as a way to improve their business. News reports can sensationalize some of the advancements in the technology, but there are already some very practical applications.”

Supervised Learning versus Reinforcement Learning

The current controversy about AI, explains Iyengar, centers on how much of a system’s output comes from its training versus from “machines learning on the fly.” This latter approach, also called reinforcement learning, is becoming increasingly important. Much as you would train a dog, programmers base this kind of learning on rewards and punishments. Over time, the machines do things they have not been explicitly instructed to do because they are figuring out what was previously rewarded and what wasn’t.
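To make the reward-and-punishment idea concrete, here is a minimal reinforcement-learning sketch (a simple epsilon-greedy bandit written for illustration; the payoff numbers are assumptions, and this is not a description of any system Iyengar or Google uses). The agent is never told which action is best; it tries actions, tracks the rewards it receives, and gradually favors whatever has paid off.

```python
# Minimal reinforcement-learning sketch (epsilon-greedy bandit):
# the agent is never told the right answer; it learns from rewards.
import random

true_reward_prob = [0.2, 0.5, 0.8]   # hidden payoff of each action (unknown to the agent)
estimates = [0.0, 0.0, 0.0]          # the agent's running reward estimate per action
counts = [0, 0, 0]
epsilon = 0.1                        # how often the agent explores at random

for step in range(5000):
    if random.random() < epsilon:
        action = random.randrange(3)              # explore: try something at random
    else:
        action = estimates.index(max(estimates))  # exploit: pick the best so far
    reward = 1 if random.random() < true_reward_prob[action] else 0
    counts[action] += 1
    # Nudge the estimate for this action toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # should end up close to [0.2, 0.5, 0.8]
```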

That means even if a machine can’t become a sentient being, it can go beyond the tasks it has been explicitly instructed to perform. Iyengar says that’s because machines can now be trained on many tasks at the same time, and can potentially use that training to address a new challenge. “This idea that a machine can do more than just one task is becoming very popular,” he explains. “When people start to think about the kinds of tasks that can be automated within a business, they are perhaps ready to explore a larger set of activities for a variety of reasons.”

Responding to a New Customer Journey

Those reasons include the talent shortage brought about by the Great Resignation (or the Great Reshuffle, as some have said), unprecedented amounts of data being collected, and the pressing need to understand and respond to drastic changes in customer behavior — all of which can be addressed with the help of AI.

Consider that almost everything companies knew about their customers and their behavior has had to be revised since 2020. Pandemic-related shutdowns forced them to try new models, including novel ways to sell and deliver goods and services. Iyengar says there is tremendous uncertainty because we don’t know whether today’s customer journey will revert to what it was before the pandemic or whether there has been a more permanent, systematic shift.

“As a consequence,” he explains, “testing and learning has become very important. More and more companies are willing to take a risk and try out new things, knowing that they might fail. In more stable environments, they would see failure as a bad thing, but now failure might be okay. It shows which directions not to go in. That’s good to know in a small-scale test, as opposed to investing a lot of money and finding out later it doesn’t work. The appetite for risk has certainly gone up. It’s a big shift.”

The uncertainty and risk taking also provide an opportunity for start-ups and smaller companies to take on the incumbents in their markets. Iyengar recently participated in a virtual interview with senior leaders of Perdue Farms, Valley Bank, Converse, and Kraft Heinz, and says, “Especially during and after the pandemic, they have all become very conscious that new competitors might be creeping in. Start-ups don’t have the burdens that come with being a larger legacy company in terms of product lines, inventory, and other kinds of issues. They can be more nimble in their responses to changing customer behavior and in how they embrace analytics, AI, and new technologies.”

Think Before You Invest

Today, developments in AI and machine learning are expanding the horizons of what is possible. Anyone making decisions about human resources or about strategic investments in technology has to stay on top of these developments. But, says Iyengar, who serves as faculty director of Wharton Customer Analytics, “When you think about adopting AI, new machine learning models, and so on, it’s important to have a good framework beforehand. Understand the kinds of functions you want to use the models for and what the impact would be. Who would be affected and how, in terms of human resources? What are the kinds of decisions that you need to make, and which would be most helped by these machine learning tools?”

“You need to think first about why you want to make the change, and what would happen with this change. Confirm that there is actually a need, and then look at how to get there,” Iyengar says. “Many people put the cart before the horse, and they suffer from what I call SOS: shiny object syndrome. They say, ‘Everybody’s doing it. We’ve got to do it. We need to be a first mover.’ The reality is that first movers don’t always plant the flag. Sometimes they may actually be at a disadvantage because they have committed to a certain technology or a certain type of data that leads to sub-optimal decisions. The prudent way to go is to bring some thought and structure to adopting any new technology, and to take a test-and-learn approach.”