
Four AI Questions Organizations Need to be Discussing

By Scott Kirsner |  May 7, 2024

At the Leading with AI conference today at Harvard Business School, the kick-off keynote was delivered by Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania. Mollick is also the author of the recent book Co-Intelligence: Living and Working with AI.

Mollick says that there are four questions to be discussing inside organizations right now, as they look to understand how AI will impact them — and how they can benefit:

  1. What useful thing that you do is no longer valuable?
  2. What impossible thing can you do now?
  3. What can you move downmarket or democratize?
  4. What can you move upmarket or personalize?
(Image: Ethan Mollick addressing the Leading with AI conference.)

Other highlights from Mollick’s talk:

• “Everybody is releasing [new AI technologies] all the time – there’s no instruction manual,” Mollick said. And the developers of AI platforms “have not thought about your use case. You get to be the expert, because we’re all figuring this out together.” Don’t look for the master plan; there isn’t one.

• Whose work will be most affected by AI? Your overlap with AI is almost directly correlated with your level of education, creativity, and salary, Mollick said, before adding, “disruption doesn’t necessarily mean replacement… but we’re going to have to reconfigure work.” Some of the professions expected to be least impacted: professional athletes, professional dancers, roofers, and ditch diggers.

• If you haven’t used the paid version of ChatGPT, which runs the more capable GPT-4 model, you don’t realize the full set of capabilities. Mollick says most people he speaks to haven’t yet spent 10 hours or more working with that advanced model.

• As part of his research, Mollick and his collaborators asked several hundred consultants at Boston Consulting Group to use GPT-4 for an array of consulting tasks; a control group didn’t use AI. The group using AI did work that was 40% higher quality, finished 26% faster, and produced 12.5% more output than the control group. By comparison, Mollick said, “When steam power was put into a factory in the early 1800s, it improved performance by 18 to 22 percent. These are fairly large, Industrial Revolution-style numbers we’re dealing with.”

• AI in organizations may level the differences between lower-half performers and higher performers (its use may particularly benefit lower performers).

• One urgent question for the current moment, according to Mollick: “How do we fit humans and machines together in a world where this stuff is evolving?”

• “If you’re not using AI for ideation right now, you’re already leaving value on the table,” he said. In a Wharton entrepreneurship class, two professors asked students to generate 200 ideas, and then had ChatGPT-4 generate 200 ideas. Outside judges used willingness to pay as a criterion, and of the top 40 ideas, 35 came from ChatGPT-4 and five from the students.

• Research is finding that AI may be better at persuading people to change deeply held beliefs than conversations with other people are. It’s entirely possible, Mollick said, that we’ll soon encounter vending machines that can convince you which product to buy.

• You should be using the frontier models: GPT-4, Gemini 1.5, Claude 3, and Llama 3 300B (coming soon from Meta). Mollick described those models as “generalists” – they do many things very well.

• The doubling time on AI performance is 5-12 months right now. No one can predict how long that will go on, or whether it will slow down, Mollick said.
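As a rough illustration of what a 5-to-12-month doubling time would imply if it held steady (a back-of-the-envelope sketch, not a calculation from Mollick's talk, and "performance" here stands for whatever benchmark the doubling claim refers to):

```python
# Back-of-the-envelope: how much a capability metric grows over N years
# for a given doubling time. Illustrative only.

def growth_factor(years: float, doubling_time_months: float) -> float:
    """Multiplier after `years`, doubling every `doubling_time_months` months."""
    doublings = (years * 12) / doubling_time_months
    return 2.0 ** doublings

# Over two years, a 12-month doubling time gives a 4x improvement;
# a 5-month doubling time gives roughly 28x.
print(growth_factor(2, 12))            # 4.0
print(round(growth_factor(2, 5), 1))   # 2**4.8 ≈ 27.9
```

The spread between those two endpoints is why Mollick's caveat matters: a seemingly small difference in doubling time compounds into a very different world within a couple of years.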

• Inconsistency is a feature of all AI platforms: you don’t always get the same result from the same prompt. But the quality of the output can be solid, Mollick said; if he were grading it, it would often get an A-, not an A+.

• How to think about work differently: divide tasks into those that should stay human-only, centaur or cyborg (hybrid) tasks, and those that can be delegated to AI.

• “You should expect to see autonomous AI agents this year – at scale,” Mollick said. One example he showed was Devin, an AI software engineer. You can give it goals, like “create a tool that takes 10-K filings and does sentiment analysis on them using AI,” and it will ping you with questions. This idea of delegating out work fundamentally changes how we relate to these AI systems, Mollick said: the agent just does the work quietly for you while you do other things. “Once you have autonomous agents in the world, things change very dramatically.”

• These systems have a lot of latent use cases. But they’re also bad at things you wouldn’t expect them to be bad at. How do we deal with a system that can’t count the number of words in a sentence, but can write a sonnet?

• We’re all going to suffer from the “secret cyborg” problem. Everybody is secretly using AI in organizations — even those that have banned AI. As an example: a bank executive Mollick spoke to relied on ChatGPT to write a document banning ChatGPT at the bank.

(Featured image by Gabriel Gusmao on Unsplash.)
