Generative AI Was Supposed to Collapse… So, Why Hasn’t It?
Critics get away with alarmism in the name of science. Let's look at the facts.
Hi all,
I wrote two versions of the post I published earlier today for paid subscribers. This version I initially sidelined, but in thinking about the problem I realized we’re treading water until we get on with it. I respect Gary Marcus and (though I’ve never met her) Emily Bender—she’s at UW Seattle, where I finished my math major!—but that’s not the point. I’m worried that the loudest voices are saying the wrong things. I’m frequently lumped in with them, and while I have had much to say that’s cookie-cutter negative, I think we need to accept that LLMs are here and see how we can use them to our benefit. We also need to figure out what it means to extend a critique beyond just the sheer thrill of bitching about Sam Altman and the endlessly imminent demise of OpenAI. I’ve thought long and hard about this, so I hope you hear me out. This is a “public service announcement,” free to all. The earlier paid post makes other points.
Here we go, and I name names only because I don’t know how else to sharpen the pencil, as it were.
Gary Marcus and other outspoken critics like Emily Bender and (to some extent) Meredith Whittaker are constantly warning us of the impending doom of generative AI. According to them, large language models (LLMs) like ChatGPT were destined to bring about financial ruin, societal collapse, or at the very least an embarrassing flop. So, what happened? Did the sky just forget to fall? Let’s break down some of their greatest hits in doomsday predictions and see how the world had other plans.
“OpenAI’s $150B Ambition Will Be Its Undoing!”
In Five Reasons Why OpenAI's $150B Financing Round Will Be Tricky, published just two months ago in September 2024, Marcus laid out his case that OpenAI’s massive financing ambitions were a disaster waiting to happen. He warned that the pursuit of a $150 billion valuation would burden OpenAI with unsustainable expectations, leading to financial woes that could cripple the company.
Updated Reality Check: Apparently, OpenAI missed the memo about its supposed impending collapse. In October 2024, OpenAI actually closed a record-breaking $6.6 billion funding round led by Thrive Capital, with participation from Microsoft, Nvidia, SoftBank, Khosla Ventures, and others. The company’s valuation rocketed to $157 billion—up from $80 billion earlier this year and $29 billion in 2023. Far from struggling, OpenAI secured a $4 billion revolving credit line from banks like JPMorgan Chase and Goldman Sachs to ensure stability and growth. Financial strain? OpenAI’s on a path to further expansion, not disaster.
“Things Are About to Get a Lot Worse!”
Then there’s Things Are About to Get a Lot Worse, December 2023, where Marcus warned of an impending storm for generative AI. According to him, the models were on the brink of creating “havoc,” with misinformation at an all-time high, legal battles multiplying, and the technology struggling to keep up with itself. To hear him tell it, we were seconds away from an AI-fueled disaster.
Reality Check: Spoiler alert—things didn’t actually get a lot worse. Instead, AI companies have rolled out responsible-use policies, improved content moderation, and steadily improved their models. Sure, misinformation is a challenge, but society is still intact, and generative AI has become more embedded in practical applications than ever. Funny how the promised “chaos” just looks like… productivity.
“Generative AI is Desperately Clawing for Relevance!”
In The Desperate Race to Save Generative AI, January 2024, Marcus painted a picture of a struggling industry on its last legs, frantically trying to stay relevant before it fell into obscurity (see a pattern, anyone?). He described this as a last-ditch effort to salvage a technology that, according to him, had peaked and was on its way out.
Reality Check: The “desperate race” seems to be a comfortable jog. OpenAI, Anthropic, and other generative AI companies have been pulling in record investments and expanding into every industry imaginable. Far from being on the brink of collapse, generative AI is now a fixture in education, healthcare, and entertainment. “Clawing for relevance”? More like carving out an empire.
Bender’s Question: Are LLMs Just “Stochastic Parrots”?
Unlike Marcus’s more headline-grabbing warnings, Emily Bender at the University of Washington in Seattle typically offers a more traditional academic critique. In an early 2021 paper, she and her co-authors famously raised concerns about LLMs being “stochastic parrots”—machines that mimic language without true comprehension. Bender’s critique isn’t about total collapse but rather a cautious reminder that LLMs, though impressive, are fundamentally limited by their lack of understanding. Her worry? That society might take these models for more than they are and let performance overshadow the gaps in true machine understanding.
Reality Check: Bender’s concerns are well-founded, especially when it comes to big-picture questions about AI’s inference capabilities and the philosophy of AI. But for the millions using generative AI every day, it turns out that utility often outweighs understanding (calculators don’t understand, but we use them, no?). Users aren’t as concerned with whether the AI “gets” language on a deeper level; they care that it works effectively as a tool. While Bender’s point about “stochastic parroting” is valid, society has found practical value in these tools despite their limitations.
“Democracy Can’t Survive This Tech!”
Marcus, Bender, and other outspoken critics have sounded the alarm that generative AI is a threat to democracy, warning of a dystopian future where misinformation floods the internet and free societies crumble. They have predicted an Orwellian nightmare where LLMs fuel propaganda and shake the foundations of truth.
Reality Check: While misinformation is a real risk, democracy seems to be holding up. Generative AI companies have implemented safeguards, including content filters and responsible-use policies, and governments are stepping in with regulatory measures. Society is proving resilient, using generative AI for everything from school projects to business plans. Sorry, but the Orwellian nightmare hasn’t materialized.
“Soon, AI Will Be So Common It’ll Be Worthless!”
Critics sometimes argue that AI will lose its value as it becomes more widespread, turning into a commodity with no real competitive edge. On this view, generative AI’s ubiquity would lead to irrelevance and a sharp decline in value.
Reality Check: Far from worthless, generative AI has embedded itself in high-value industries. Custom applications are thriving in sectors like law, medicine, and finance, with companies paying top dollar for models tailored to their needs. Instead of commoditization, AI’s success lies in specialization. Rather than becoming a cheap throwaway, it’s proving indispensable. The jury, of course, is still out. But the alarmism and overblown claims should be “out,” too. They help nothing. The commoditization claim is at best a half-truth, and history suggests it will become less true over time rather than more.
“Mass Unemployment Is Coming!”
Lastly, the critics warn of a jobs apocalypse, citing scary and speculative studies by the World Economic Forum and others. Generative AI, they claim, will sweep through industries, leaving mass unemployment and economic chaos in its wake.
Reality Check: While AI has changed some roles, it hasn’t created mass unemployment. The OECD reports no slowdown in labor demand. An MIT and Boston University study finds that AI adoption has led to job reallocation within firms rather than job destruction. And a much-read McKinsey report sees automation accompanied by the creation of new jobs and gains in productivity. Sounds terrible!
Here’s the picture that’s actually emerging: workers are using AI as a productivity booster rather than a replacement, and new roles in AI ethics, oversight, and management have emerged. AI is transforming work, indeed, but it isn’t erasing it. The employment apocalypse turned out to be more of an evolution. The alarmists are… just that.
The Erosion of Human Agency and the New Humanism
I’ll end here on a positive note: I value the work that Gary Marcus, Emily Bender, and other critical voices are doing. Their clarion warnings about AI are essential. But I find it troubling that ostensible computer scientists find no virtue in AI technologies that solve problems we couldn’t solve before. Since the first taste of the GPT model in the ChatGPT application back in 2022, I’ve found this omission weird, and telling. It’s as if Marcus et al. want to return to the “real” AI that didn’t work. Is “symbolic” somehow ipso facto better? That’s myth, not science. The truth is that symbolic systems aren’t even approximations to truth. They’re a bunch of symbols that algorithms push around, with probably less meaning than large-scale inductive projects. We see in symbols, yes. Do computers? Not in the “Good Old Fashioned Artificial Intelligence” way. I could go on; I worked at the top firms in that area and have a lifetime of stories about how those methods underperform and suck.
We have error rates, yes, and I’m constantly involved in discussions about how to mitigate error. But it mystifies me that critics treat tech that does better than before, by the same criteria, as the enemy. Was there something other than progress on AI in mind all along? Or do they just not like Sam Altman, or OpenAI, or what have you? As a scientist, I don’t get it. When I was working on “basic” problems like word sense disambiguation and sentiment analysis in 2021, we used BERT—a “precursor” model—and saw a twenty percent increase in accuracy on our test datasets across the board. That’s like having your buddy run the 100-meter dash and then Usain Bolt shows up. Do you really still want your buddy?
But here’s the larger point I believe we’re missing: the trajectory of all mass-produced technology, including generative AI, is shaping a “mega-trend” that slowly erodes human autonomy and agency. Our worry is not just whether OpenAI hits its next valuation target or if Sam Altman’s ambitions come true. It’s about something far more consequential—our shared capacity to live and love with as much freedom as possible within the bounds of a civilized world.
In my view, the problem is that some of our smartest critics are obsessing over the personalities and funding rounds of AI executives. Nothing much is gained by letting resentments over deep pockets and funding become the primary muses for serious thinkers. What we need now are bigger, bolder voices who can frame the conversation around the true stakes—our ongoing striving as humans toward excellence and the preservation of our humanity within a technologized world.
As I see it, the “good” critique about all this “terrible” LLM stuff is becoming the enemy of the great. We’re slowing true change by clogging up communication channels with narrow, short-term predictions and sour-grapes critiques. Loud voices. I don’t want to add to that noise. Instead, I’d like to save my energy for ideas that embrace the bigger picture and point us toward the humanism we urgently need to rediscover.
Erik J. Larson
I’ve launched a consulting and advising company to tackle these and other issues in AI. If you’re interested, give me a shout.
So far, AI doesn't fulfill its marketed promises at all, and that's what critics are usually pointing out. LLMs have no understanding of the world, and they *do not* pass the Turing test (just use a common-sense test that humans pass at least 95% of the time while the best models get 50%, and there you go, fraud spotted). That being said, there is no doubt that there is pattern-recognition intelligence in AI. For example, I use AI music models, and they have an impressive mathematical understanding of, and proficiency in, the most important transformations in music. That's truly valuable. BUT... if one tries to sell the model as an unbelievable songwriter that will replace musicians, I'll say no: all alone, it *sucks*.
The biggest obstacle to achieving technologically excellent AI (mature, optimally configured LLMs) as fast as possible is the idea that we are approaching AGI. I understand that the idea was created and hyped up because it sells better (attracts more investment). But it does create unrealistic expectations (missed ROI, more badly implemented tech, etc.) and confusion (some vulnerable people fall for the illusion that a chatbot is capable of compassion, kindness, intimacy, etc.)… I think we urgently need to educate the general public about LLMs via government-led initiatives, to give the effort the authority it needs. The public imagination is hooked on AGI (humans always search for something greater than ourselves, which has its pros and cons, this being a con)… We need an AI anti-anthropomorphic dictionary translating AI hype terms into their actual meaning, e.g., ChatGPT "hallucination" = error, "autonomous" = a high degree of automation, etc., to be used when reading mainstream media articles.
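To make that last suggestion concrete, here is a minimal, purely illustrative sketch (in Python) of what such a glossary might look like; the two term/translation pairs are just the examples above, and the lookup helper is hypothetical, not an existing tool.

```python
# A minimal, illustrative sketch of the proposed "anti-anthropomorphic dictionary":
# a plain mapping from AI hype terms to more literal descriptions. The two entries
# are only the examples given above; a real glossary would need careful curation.
HYPE_TO_PLAIN = {
    "hallucination": "error (a factually wrong or fabricated output)",
    "autonomous": "a high degree of automation",
}

def translate(term: str) -> str:
    """Return the plain-language reading of a hype term, or the term unchanged."""
    return HYPE_TO_PLAIN.get(term.lower(), term)

if __name__ == "__main__":
    # Terms not in the glossary pass through unchanged.
    for word in ["hallucination", "autonomous", "reasoning"]:
        print(f"{word} -> {translate(word)}")
```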