The Folly of Prediction
Why do we keep trying to predict the future? Experts are usually wrong.
Hi everyone,
A section of this is from my book, and I've added some updated comments on large language models (LLMs) and common sense. At some point it seems I'll need to write a second edition of the Myth, as the recent successes of LLMs have somewhat changed the discussion about natural language understanding—though see below.
Erik J. Larson
Experts and even (or especially) scientists love to make predictions, but most of them are wrong. Dan Gardner's excellent book Future Babble documents the success rate of predictions in realms from history and geopolitics to the sciences. He found that theorists—experts with big visions of the future based on a particular theory they endorse—tend to make worse predictions than pragmatic people, who see the world as complicated and lacking a clear fit with any single theory. Gardner referred to the expert class and the pragmatic thinkers as hedgehogs and foxes (borrowing from Philip Tetlock, the psychologist, who himself borrowed the terminology from Isaiah Berlin). Just as a hedgehog burrows into the ground, hedgehog experts burrow into an idea. Inevitably they come to believe that the idea captures the essence of everything, and that belief fuels their proselytizing. Marx was a tireless hedgehog. Foxes see complexity and incalculability in the affairs of the world, and either avoid bold predictions or make the safer (and perhaps smarter) prediction that things won't change the way we think. For the fox, the business of predicting is almost foolhardy, because we really can't know what will emerge from the complicated dynamics of geopolitics, domestic politics (say: who will win an election?), science, and technology. As the nineteenth-century novelist Leo Tolstoy warned, wars unfold for reasons that we can't fit into battle plans.
Some AI scientists are notoriously foxy about AI predictions. Take Yoshua Bengio, a professor of computer science at the University of Montreal, Canada, and one of the pioneers of deep learning. "You won't be getting that from me," he says in response to the question of when we can expect human-level AI. "There's no point. It's useless to guess a date because we have no clue. All I can say is that it's not going to happen in the next few years." (Bengio made this comment in 2018; it appeared in Martin Ford's collection of interviews with AI experts, Architects of Intelligence.) Ray Kurzweil gives a more hedgehog answer: human-level AI will arrive in 2029. He invokes his "law" of accelerating returns to make the prediction seem scientific, and he sees all the supposed progress to date as continuing evidence that he's right.
Philosophers sometimes have the virtue of thinking clearly about problems precisely because they are unencumbered by the particular zeal that can attach itself to practitioners in a field (who nonetheless still wish to philosophize). Alasdair MacIntyre, for example, in his now-classic After Virtue, pointed to four sources of fundamental unpredictability in the world. His discussion of "radical conceptual innovation" in particular is directly germane to questions about when human-level AI will arrive. He recalls the argument against the possibility of predicting invention made by the twentieth-century philosopher of science Karl Popper:
Some time in the Old Stone Age you and I are discussing the future and I predict that within the next ten years someone will invent the wheel. “Wheel?” you ask. “What is that?” I then describe the wheel to you, finding words, doubtless with difficulty, for the very first time to say what a rim, spokes, a hub and perhaps an axle will be. Then I pause, aghast. “But no one can be going to invent the wheel, for I have just invented it.” In other words, the invention of the wheel cannot be predicted. For a necessary part of predicting an invention is to say what a wheel is; and to say what a wheel is just is to invent it. It is easy to see how this example can be generalized. Any invention, any discovery, which consists essentially in the elaboration of a radically new concept cannot be predicted, for a necessary part of the prediction is the present elaboration of the very concept whose discovery or invention was to take place only in the future. The notion of the prediction of radical conceptual innovation is itself conceptually incoherent.
In other words, to suggest that we are on a "path" to artificial general intelligence whose arrival can be predicted presupposes that there is no conceptual innovation standing in the way—a view to which even AI scientists who are convinced that artificial general intelligence is coming, and who are willing to offer predictions, like Ray Kurzweil, would not assent. We all know, at least, that any truly general computational intelligence requires the invention or discovery of a commonsense, generalizing component. I've sometimes referred to this as the system having a "world model," in other words a conceptual (rather than merely probabilistic) knowledge of the world. This certainly counts as an example of "radical conceptual innovation," because we have no idea what such a component is yet, or what it would even look like.
Oddly, the large language models driving the recent buzz, like GPT-4, are counterexamples to conceptual knowledge or "world knowledge" (or "commonsense knowledge"), because they achieve their admittedly impressive results without actually knowing—which is why they sometimes veer off into bizarre confabulations and hallucinations without skipping a beat. Because they lack actual knowledge of language and of the world that language describes, their impressive probabilistic performance is really an impressive simulation. It's not the real thing, which is why you won't see generative AIs like LLMs in situations where public safety is a concern (as in self-driving cars), or in critical applications like autonomous navigation or decision making for, say, the military. The notion that we could use an LLM to make decisions about launching tactical nukes is dead on arrival. Who would sign off on that? A bell-curve intelligence that mostly works but sometimes goes positively insane isn't a reasonable candidate for pushing the button.
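To make the "probabilistic, not conceptual" point concrete, here is a minimal toy sketch in Python of generation by sampling from next-word statistics alone. It is an invented illustration, not how any production LLM is built: real LLMs are enormous transformers rather than bigram counters, but the generation step is the same in kind, picking the next token from a probability distribution with no world model behind the choice. The tiny corpus and the function names below are made up for this example.

```python
# Toy illustration (not any real LLM): generate text purely from
# co-occurrence statistics, with no concept of what the words mean.
import random
from collections import defaultdict

# A tiny made-up corpus about wheels.
corpus = (
    "the wheel has a rim and spokes and a hub "
    "the hub turns on an axle and the wheel turns the cart"
).split()

# Count which words follow which; this is all the "knowledge" the model has.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Sample a continuation word by word from the bigram distribution."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        words, weights = zip(*followers.items())
        word = random.choices(words, weights=weights, k=1)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Run it a few times and you get locally fluent strings like "the wheel turns the hub turns on an axle and the": the statistics carry the sentence along, but nothing in the model knows what a wheel, a hub, or an axle is.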
Systems without real understanding have limited application, in other words. We'll see whether Microsoft can sell its Copilot technology, built on ChatGPT, as a productivity enhancement. I'm frankly sceptical, because any important result—like, say, a presentation to shareholders—must still be reviewed by a human. That's time and money. We can't put our important presentations and speeches and memos on autopilot. And we can't predict when AI will truly understand what it's doing. What we do know with certainty is that we are not there yet.