Wharton@Work
December 2024 | Strategy

Leading with AI: What Could Possibly Go Right?

In his new book, LinkedIn co-founder and guiding force in the AI revolution Reid Hoffman asks the provocative question, “What could possibly go right?” Superagency (Author’s Equity, January 2025) explores an AI-powered future through an optimistic lens, revealing many of the opportunities large language models (LLMs) like ChatGPT can engender, with immense potential to drive business transformations and provide “superagency” for the leaders who guide AI efforts within their organizations.

Describing AI as a “supertool,” Hoffman says the technology can be leveraged in two ways: working closely with it (to learn a new language or practice mindfulness, for example) and delegating tasks to it, such as optimizing home energy use. “In either case,” he explains, “AI increases your agency. It helps you take actions that lead to outcomes you desire. AI may change aspects of our lives in ways that may feel uncomfortable or even threatening, but we should also recognize the heroic gains in human capability — what I call ‘superagency.’”

Hoffman defines superagency as “what happens when a critical mass of individuals, personally empowered by AI, begin to operate at levels that compound through society. In other words, it’s not just that some people are getting superpowers — becoming more informed and better equipped thanks to AI. Everyone is, even those who rarely or never use AI directly.”

Hoffman is bringing that conversation to Wharton Executive Education in January. Leading an AI-Powered Future, Featuring Reid Hoffman begins with a session led by Hoffman, followed by five additional sessions taught by faculty experts from the Wharton AI & Analytics Initiative. They include marketing professor Stefano Puntoni, co-author of Decision-Driven Analytics: Leveraging Human Intelligence to Unlock the Power of Data.

“The new program isn’t just about understanding AI — it’s about mastering its transformative potential to drive innovation, competitiveness, and ethical leadership in today’s dynamic business landscape,” Puntoni says. “It will have the optimistic vibe that follows from Reid’s book, but it’s also very much aligned with the way that I and others at Wharton and throughout the world think about the technology. We shouldn't dismiss the risks, but we should be confident and realistically optimistic that AI is driving business and societal transformations that will benefit us all.”

The new program represents a first for the school, offering two options for participants. The first tier requires an application for one of 96 seats in the virtual WAVE classroom, a state-of-the-art platform that replicates Wharton lecture halls. WAVE participants will engage directly with Hoffman and Wharton faculty and forge meaningful connections within a curated peer network. The second tier provides a front-row seat to all six program sessions via livestream, plus access to an expert-moderated Slack channel for sharing insights and discussing applications in real time.

Dispelling Fears While Building Skills

Participants will learn to explore how AI can enhance the future of their organizations while navigating workforce resistance and fears around the technology. They will also build the skills required to lead AI integration and create a framework for leveraging AI to enhance individual agency and decision making throughout their workforce.
Recognizing the ethical implications and responsibilities of deploying AI technologies, the program also explores how participants can ensure AI’s alignment with their organizational values and consumer expectations while complying with legal and regulatory standards.

Puntoni says a standout issue for those integrating AI into their organizations is “capitalizing on the positive aspects of AI integration while also acknowledging the dangers. We are going to explore the upside of AI and how we can responsibly deploy it to achieve transformational results for both companies and society. For those who are worried about bringing something to a workflow or to market, it’s critical that you understand and either avoid the risks or manage them as much as possible.”

“One issue that we see with AI that hasn't been as prominent with other technologies,” he continues, “is a tone of conversation that is quite negative, and even hysterical, in my opinion. Some of it is fostered by the very people who are at the top of this industry. And while I understand the importance of keeping an eye on the trajectory of this technology, the extreme worst-case scenarios about making people obsolete are not helpful. I think it’s a misguided discussion, because most of the impact of AI will be in human augmentation.”

Anticipating Regulations

Puntoni says that while regulation is needed, “you cannot expect regulation to move at the pace of this technology. It cannot and it should not. Regulation has to follow from deliberation and consultation, so it has to take some time. That means there are additional concerns for leaders to be responsible in AI adoption because there are many things we're still figuring out.”

“But that said, this technology is incredible in many ways,” says Puntoni, “and ultimately, it's got to be good to have more intelligence. There's no scenario in which we're better off with less intelligence. The question is what we do with it.”

Or as Hoffman would ask, “What could possibly go right?”