Talk to executives and before long they will rhapsodise about all the wonderful ways in which their business is using artificial intelligence. Jamie Dimon of JPMorgan Chase recently said that his bank has 450 use cases for the technology. “AI will become the new operating system of restaurants,” according to Yum! Brands, which runs KFC and Taco Bell. AI will “play an important role in improving the traveller experience”, says the owner of Booking.com. In the first quarter of this year executives from 44% of S&P 500 companies discussed AI on earnings calls.
Whatever the executives may say, however, AI is changing business much more slowly than expected. A high-quality survey from America’s Census Bureau finds that a mere 10% of firms are using it in a meaningful way. “Enterprise adoption has disappointed,” notes a recent paper by UBS, a bank. Goldman Sachs, another bank, tracks companies that, in the view of its analysts, have “the largest estimated potential change to baseline earnings from AI adoption”. In recent months the firms’ share prices have underperformed the market. With its fantastic capabilities, AI represents hundred-dollar bills lying on the street. Why, then, are firms not picking them up? Economics may provide an answer.
Of course, it is still early days. Putting AI to use requires dealing with frictions, such as datasets that are not properly integrated into the cloud, meaning some lags were to be expected. AI diffusion has, though, disappointed even these more modest expectations. Analysts at Morgan Stanley once said that 2024 would be “the year of the adopters”. That came to little. This year was supposed to be “the year of agents”, involving autonomous AI systems that carry out multi-step tasks with little human oversight. But, according to the UBS paper, 2025 is instead “the year of agent evaluation”, with companies merely dipping their toes in the water. Perhaps there are deeper reasons for the disconnect between C-suite enthusiasm and sluggishness on the shop floor.
Economists of a “public choice” persuasion have long argued that government officials behave in a manner which maximises their personal gain, rather than furthering the public’s interests. Bureaucrats may refuse to implement necessary job cuts if doing so would put their friends out of work, for instance. Companies, especially large ones, may face similar problems. In the 1990s Philippe Aghion of the London School of Economics and Jean Tirole of the Toulouse School of Economics distinguished between “formal” and “real” authority. On paper, a chief executive has the power to mandate large-scale organisational change. In practice, the middle managers who understand the nitty-gritty and control the day-to-day implementation of projects hold the real authority. They can shape, delay or even veto any change requested from above.
Public-choice dynamics are often at play when companies consider adopting new technologies. Joel Mokyr of Northwestern University has argued that “Throughout history technological progress has run into [a] powerful foe: the purposeful self-interested resistance to new technology.” Frederick Taylor, an engineer credited with introducing proper managerial techniques to America in the late 19th century, complained that power struggles within firms often jeopardised the adoption of new technology.
More recent research finds that these conflicts remain alive and well. In 2015 David Atkin of the Massachusetts Institute of Technology and colleagues published a paper examining factories in Pakistan that made footballs, discussing the fate of a new technology which reduced wastage. After 15 months, they found take-up remained “puzzlingly low”. The new tech slowed down certain employees, who as a result stood in the way of progress, “including by misinforming owners about the value of the technology”. Another paper, by Yuqian Xu of the University of North Carolina, Chapel Hill, and Lingjiong Zhu of Florida State University, found similar battles between workers and managers in an Asian bank that was trying to automate its activities.
Few economists have yet examined intra-company battles over AI, but it seems likely they will be fierce. The modern firm in a rich country is astonishingly bureaucratised. American companies have 430,000 in-house lawyers, up from 340,000 a decade ago (a growth rate much faster than that of overall employment). Their role is generally to stop people doing things. They may worry about the risks of introducing AI products. With little to no case law, who is liable if a model goes wrong? Close to half the respondents to UBS’s surveys say that “compliance and regulatory concerns” are among the main challenges for AI adoption in their company. Other legal eagles fret about the tech’s impact on boring things such as data privacy and discrimination.
People in other roles have their own concerns. HR staff (whose numbers in America have swollen by 40% over the past decade) may worry about the impact of AI on jobs, and thus put up roadblocks in front of adoption programmes. Meanwhile, Steve Hsu, a physicist at Michigan State University and an AI-startup founder, argues that many people behave like Pakistani football-makers. Middle managers worry about the long-term consequences of adopting AI. “If they use it to automate jobs one rung below them, they worry that their jobs will be next,” says Mr Hsu.
The tyranny of the inefficient
Over time market forces should encourage more companies to make serious use of AI. As with previous new technologies, such as the tractor and the personal computer, innovative firms ought to outcompete the holdouts and eventually put them out of business. Yet this process will take a while—too long, perhaps, for the big AI companies, which need to make huge profits on their investments in data centres. The irony of labour-saving automation is that people often stand in the way. ■