Hi everyone,
Later this month, I’ll be on a panel in Savannah, Georgia, for a Liberty Fund colloquium on “AI and Technosocialism.” This will be my second time participating—my first was nearly twenty years ago at a scenic retreat outside Tucson, Arizona. This year’s discussion centers on an intriguing book: Radical Uncertainty: Decision-Making Beyond the Numbers by economists John Kay and Mervyn King. Their focus on how we make judgments under “radical uncertainty” has important implications for AI, which is why I’m writing this post.
First, let’s look at the experts and their own biases—particularly in how they label human reasoning as “cognitive bias.” An obsession with the supposed precision of math and computation has left multiple fields, like economics, psychology, and cognitive science, with reams of research purportedly establishing our “natural stupidity.” Not so. Read on.
Why Cognitive Bias Is Biased
I first encountered the “Linda problem” in a class on epistemology, part of the philosophy department’s curriculum. The “Linda problem” is one of a number of arresting examples of human stupidity offered over the decades by the late Nobel laureate Daniel Kahneman and his collaborator Amos Tversky, both Israeli psychologists working in America. From the 1970s into the 21st century, Kahneman and Tversky left such a large footprint on thinking about thinking that you’re likely to encounter them not just in philosophy but in economics, psychology, and even classes on math and statistics. The two went mainstream in culture, too, notably thanks to writer Michael Lewis, who wrote Moneyball and The Big Short (both were turned into major motion pictures) and brought Kahneman and Tversky to bookshelves with his 2016 book The Undoing Project.
The “undoing project” is an apt description of what Kahneman and Tversky were up to. What they were intent on undoing is the idea that humans think rationally and aren’t generally plagued by cognitive bias. Au contraire, according to K & T we don’t think rationally even when we think we do, and our mental shortcuts amount to distortions—cognitive bias—that we later rewrite to fit the facts, rather than face our own “natural stupidity.” I haven’t read The Undoing Project, but knowing Lewis’s work, I suspect it’s a compelling narrative. I also have no doubt that K & T didn’t accomplish what they’d hoped—we’re not hopelessly biased and irrational in the way they meant. The problem here is the presupposition of axiomatic rationality, which is appropriate for games or, more generally, what economists call “small world” problems with known rules and constraints and a well-defined optimal outcome—think poker, where there’s no ambiguity about the best possible hand.
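To see what “axiomatic rationality” demands in a small-world setting, consider the probability rule the Linda problem tests: a conjunction of two events can never be more probable than either event alone. A minimal sketch, using made-up illustrative numbers (not K & T’s data):

```python
# Illustrative sketch of the conjunction rule: P(A and B) <= P(A).
# The probabilities below are invented for illustration only.

p_bank_teller = 0.05            # P(A): Linda is a bank teller
p_feminist_given_teller = 0.5   # P(B | A): feminist, given she is a teller

# Product rule: P(A and B) = P(A) * P(B | A).
# Since P(B | A) <= 1, the conjunction can never exceed P(A).
p_both = p_bank_teller * p_feminist_given_teller

assert p_both <= p_bank_teller  # holds for any choice of probabilities in [0, 1]
print(p_both)  # 0.025
```

Whatever numbers you plug in, the conjunction stays at or below each conjunct; that mathematical guarantee is what makes it a “small world” rule with a well-defined right answer.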
Back to the Linda problem. In 1983, Kahneman and Tversky published in the journal Psychological Review a paper titled “Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.” In the paper, K & T reported a study in which they presented participants with the following scenario:1