I am looking at a chart that tracks income per head over time. It is a more or less flat line between 1000BC and the late 1700s. To repeat, worldwide living standards stagnated for almost three millennia. Then: industrialisation. Incomes shoot up. The chart could be the ECG readout of a total goner of a patient who then makes an eleventh-hour comeback from death.
So, be doubtful when someone likens AI to the industrial revolution in importance. It will do well to match even the telephone and the incandescent lightbulb. (Incomes really surged as 1900 approached.) Perhaps the test of AI isn’t economic, though. Perhaps the test is quality of life. Well, before the phonograph, your favourite piece of music was something you only ever heard a few times, when an orchestra passed through town and fancied playing it. Before air travel, crossing an ocean was a Homeric saga. Now it is easy. AI will be as life-enhancing as these inventions, will it?
I so want to side with the AI sceptics. But look at their (my) own intellectual howlers. The two paragraphs above are too “inductive”: too reliant on the past as a guide to the future. There is also no technical detail because, unlike most of those who talk up AI, I don’t work in or around the field. And there are even worse AI-sceptic arguments. At least I didn’t lapse into anecdote, of the “ChatGPT told me to take heroin as a cold cure” variety.
As for the sensible line on AI, “wait and see”, that could be said about anything. It doesn’t tell investors what to do, or citizens how to prepare for the future.
In the end, there is just nothing very interesting to say about AI. There is lots of superb reporting. The major companies, the national strategies, the tech itself: keep abreast of it all. But when it comes to rumination and prognostication — the world of columnists and panel events — has there ever been a discourse so weak relative to its sheer scale?
The hype merchants are too close to the subject to see it straight. Whether or not they have a commercial incentive to talk up AI (many don’t), people who devote their lives to something will naturally resist the idea that it might be of just moderate importance. At the same time, it is hard to argue against them without falling back on precedent and eternal verities. Just because most supposed turning points in history turn out to be no such thing does not mean this one will. The AI debate often pits the informed but hysterical against the measured but generalist.
Worse, we probably aren’t even going to know who was right. Episodes of The Simpsons from the 1990s patronise the internet in a way that now seems mortifying. But the writers could mount a defence. Without reviving the Solow paradox (“You can see the computer age everywhere but in the productivity statistics”), US GDP growth is not higher than it was in the pre-internet decades. Much of what we do, such as travel, has changed little. The episodes, while dated, are not falsified.
Here’s a thought: the worst-case scenario is that AI destroys a significant but not huge share of jobs. In that world, there would be lots of victims but not enough to form an electoral plurality that could vote for universal basic income or the like. In other words, if AI sceptics are right (and the technology has a less than sweeping impact), then AI alarmists will be right (that social strife is coming). Who would have won the argument?
I have found there to be just one useful feature of the AI discourse. It reveals a person’s existing temperament. The people I know who think AI will be seismic and disastrous are the most highly strung anyway. The ones who think it will be seismic and life-improving are the most chipper and prone to believing in things. (Tony Blair.) Those who doubt it will be seismic at all are people like me, who are even-keeled to the point of complacency. The AI hubbub goes on and rancorously on because it is, in the end, about us.