Larson’s Law and the New Humanism
Critics of LLMs are repeating an oft-used but flawed and superficial strategy. We need a real critique.
Hi all,
A tour of the impossibly flawed modern critique of AI and a tie-in to what I hope will be the “new humanism.”
Introduction: The Recurring Cycle of Technological Doom
Over the past year, critics of large language models (LLMs) have confidently forecasted their imminent demise. They warned us of inevitable failure due to the bugbear of "approximating truth," irreparable legal quagmires, and an impending data scarcity—the so-called “data wall” followed by “model collapse.” But here we are, a year later, and LLMs continue to spread and thrive, proving the usual doomsday predictions wrong once again. What gives? Well, I’m here to explain.
First, we need to understand that this pattern of overblown fears is part of a long tradition.
Larson’s Law offers a simple rule: “Anytime you say this is the end of the world, it’s never the end of the world.”
In this post, I’ll debunk the latest LLM doomsday predictions and discuss why, far from facing extinction, LLMs are adapting and advancing just like any transformative technology. More importantly, I’ll explain why critiquing LLMs in the typical manner I see online and on Substack fails to address our need to retool our thinking and discuss a New Humanism.
Prediction 1: The “Failed Approximation of Truth” Argument
Prediction: Critics argue that LLMs are fundamentally flawed because they can only approximate truth and have no inherent grasp of it. To some, this flaw supposedly signals the technology’s eventual collapse.
Reality Check: What the critics overlook is that all machine learning approximates truth—by definition, it’s how statistical models operate. LLMs, like spam filters, image classifiers, and medical diagnostics, are built on probabilities, not absolutes. If approximation were a fatal flaw, machine learning itself would have failed long ago.
This is one of the most beguiling but, upon reflection, vacuous responses to LLMs. Yet all the big-shot critics love to parade it around, again and again. (By the way, I’m sure to get a trove of emails from them—really—talking down to me. I wish I could monetize getting lectured by folks who miss my point but plough on anyway.)
The supposed “failure” of LLMs to grasp truth as humans do doesn’t make them useless. It makes them powerful tools with known limitations. Just as we don’t expect a spam filter to understand email the way a human does, we shouldn’t expect LLMs to understand “truth” the way we do. Instead, they’re tools that synthesize information at massive scale, imperfectly but effectively. Their value lies in being inductive; so do their limitations.
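To make that concrete, here is a minimal sketch in Python (using scikit-learn; the toy messages and labels are invented for illustration). A spam filter never “knows” a message is spam; it estimates a probability from patterns in its training data, and that is the sense in which all machine learning, LLMs included, approximates truth rather than possessing it.

```python
# Minimal sketch: a spam filter, like any statistical model, outputs
# probabilities, not verdicts. Requires scikit-learn; the toy data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",       # spam
    "claim your free reward",     # spam
    "meeting moved to 3pm",       # ham
    "see you at lunch tomorrow",  # ham
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

model = MultinomialNB().fit(X, labels)

# The model never declares an email "truly" spam; it estimates a probability.
test = vectorizer.transform(["free prize meeting"])
print(model.predict_proba(test))  # probabilities for [ham, spam], not a truth claim
```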
Prediction 2: The Data Wall and Model Collapse
Prediction: Critics argue that the supply of human-generated text data will run out quickly, and I agree that we’ll eventually face a problem if innovation doesn’t keep up. The concern a year ago (and today) was that fresh human-generated data would be exhausted, the so-called “data wall,” forcing LLMs to rely increasingly on synthetic data. Overreliance on synthetic data would then lead to “model collapse,” where models gradually degrade as they train on artificial inputs rather than real-world content. The end result would be a failed technology.
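For readers who want to see the mechanism the critics have in mind, here is a toy sketch in Python (NumPy only). It is not how any real LLM is trained; it simply illustrates the recursive loop behind the “model collapse” worry: each generation is fit only to samples drawn from the previous generation’s model, so information about the original, real-world distribution gradually erodes. The Gaussian setup and all of the numbers are illustrative assumptions.

```python
# Toy sketch of "model collapse": each generation is trained only on samples
# from the previous generation's model, never on fresh real-world data.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data from the true distribution (mean 0, std 1).
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 21):
    # "Train" a model: here, just estimate the mean and std from the data.
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # The next generation trains only on synthetic samples from this model.
    data = rng.normal(loc=mu, scale=sigma, size=50)

# Typically the estimated std drifts downward over generations as information
# about the true distribution's tails is lost (exact numbers vary by seed).
```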
The data wall and model collapse are the technological equivalents of calamity. What’s the truth?