AI's Forgotten Counterculture
Questioning the hype about AI once felt culturally important. Today, it seems like ankle-biting.
Hi everyone,
Thanks so much for the encouraging response to last week’s piece on Large Language Models and “Good Old Fashioned AI.” When I started this, I was hoping to bring people together to discuss AI and questions of humanism (“bring together” is what Colligo means in Latin), and the reception of Colligo since its September 2023 launch has surprised me and encouraged me to keep going. Thank you!
Missing: The Counterculture
This week I want to return to a piece I wrote for The Atlantic back in May 2015, “Questioning the Hype About Artificial Intelligence.” The article highlights what I referred to as a “growing counter-culture movement” of writers, theorists, and cultural critics like Nick Carr, Jaron Lanier, Andrew Keen, and Matt Crawford, as well as AI maestros like Berkeley’s Michael Jordan, widely considered one of the top machine learning specialists in the world, all of them skeptical of a coming superintelligence or Singularity. When I wrote the article nine years ago, that assortment of AGI skeptics did seem to be self-organizing into a legit counterculture movement, openly challenging the dominant Silicon Valley/Big Tech narrative proclaiming SkyNet futures or techno-utopias.
As I write this today, I’m painfully aware that it never really happened. Like a balloon expanding and pushing ink marks on its surface further apart, today’s world has plenty of skeptics, within the field of AI and without, but nothing like a “counterculture.” Critical voices have proliferated, but they seem to occupy their own islands in the vastness of the web, and much of the ado, compared with the full-throated defense of humanism a decade ago, is about nothing much.
Much of what passes today for a “counterculture” amounts to carping about foundational models like GPT and systems like ChatGPT: pointing out ongoing problems like bias or explainability, or the occasionally comic but always troubling hallucination. Since I wrote “Questioning the Hype,” in other words, the ground has shifted and the landscape has changed. We see lots of opining about AI, but no grand narratives about humanism. AI, and talk of a coming AGI, has become a commonplace. Artificial intelligence is everywhere; we discuss it the way we discuss kitchen appliances or battery-powered cars.
What a difference less than a decade makes. Back in 2015 I could write about virtual reality pioneer Jaron Lanier and writer Nick Carr in bold strokes:
Lanier, Carr, and a growing counter-culture movement of writers and technologists, skeptical of what they see as a mythology about artificial intelligence that’s akin to a new and false religion, point out the virtue of human intelligence and the importance of a human-centered view of our future.
And today? Lanier hasn’t written a book in years, and Carr too has somewhat disappeared (in both cases, much to my chagrin).
Gary Marcus, a former professor of cognitive science at New York University, energetically attacks hype and misinformation about Large Language Models (LLMs) and data-driven approaches to AI on his Substack, Marcus on AI, and elsewhere. But Marcus seems content to blurb ChatGPT failures and offer breezy insider comments about companies and people, like OpenAI’s ousted (then rehired) Sam Altman. This sort of in-the-news-cycle writing hardly provides an alternative to an AI-driven culture; it’s part of the AI-driven culture.

Marcus argues that LLMs are at best incomplete as a path to AGI, that they need to be integrated into larger, hybrid systems, a view that I more or less share. But my point in The Atlantic was that the skeptics were suggesting AI, any type of AI that becomes dominant, represents a potential threat to humanism, to our future as humans. This is the question of “mind versus machine”:
Humanists have a seemingly simple point to make, but combating advances in technology with appeals to human value is an old stratagem, and history hasn’t treated it kindly. Yet the modern counter-cultural movement seems different, somehow. For one, the artificial intelligence folks have reached a kind of narrative point-of-no-return with their ideas of a singularity: The idea that smart machines are taking over is sexy and conspiratorial, but most people understand the differences between people—our minds, or souls—and the cold logic of the machines we build. The modern paradox remains: Even as our technology represents a crowning enlightenment of human innovation, our narratives about the modern world increasingly make no room for us. Consciousness, as Lanier puts it provocatively, is attempting to will itself out of existence. But how can that succeed? And to the paradox: How can we both brilliantly innovate, and become unimportant, ultimately slinking away from the future, ceding it to the machines we’ve built?
All this represents a profound challenge with more than a technological answer. When machine intelligence arrives (if it does), what will have happened? Computers with personalities would be akin to the discovery of alien life in other galaxies. Yet humanists argue that mindless automation will continue to get more powerful, and more pervasive, but fundamentally the world remains ours to create. Humanists thus refocus the discussion on real consequences of unbridled automation, and the diminishment of human excellence resulting from over excitement about machines.
Consciousness “willing itself out of existence” would seem a bit overwrought today, if only because big-picture thinking about the culture of data and AI is less popular. And “unbridled automation” and the “diminishment of human excellence resulting from over excitement about machines”? These are classic big ideas in the vein of E.M. Forster’s inimitable early 20th century novella The Machine Stops, or Karel Čapek’s R.U.R., a play about slave androids running amok and a cautionary tale about scientific hubris and the value of freedom and consciousness. Today we can look forward to an endless compendium of anecdotes about weird responses from ChatGPT, old saws about “garbage in, garbage out,” and other inanities.
Even the delightful naysayer Emily Bender, the outspoken linguist at the University of Washington who famously derided LLMs as “stochastic parrots” in a 2021 Association for Computing Machinery (ACM) conference paper (New York Magazine showcased her last year in an article titled “You Are Not a Parrot”), doesn’t seem worried about mind versus machine, or artificial intelligence versus human-centered culture. Bender’s point is that LLMs are, well, stochastic, and so don’t explicitly represent grammar and other features of language, like semantics (what words mean). The debate about the future of AI today is mostly just a debate about the types of systems we’ll adopt. Even the critics are machine-lovers.
I was contacted recently by the director of a new think tank in Austin, Texas, purporting to address key questions about “the future of AI.” He wanted me to present the core ideas of my book, The Myth of Artificial Intelligence, in a roundtable format with other members of the Institute. The discussion was pleasant enough, but it struck me that most of it was about what LLMs mean, what they can do and maybe can’t do, and where the tech might “go.” Unlike discussions of, say, nuclear energy or weapons, no one at the table (academics mostly, many from my alma mater, The University of Texas at Austin) came even close to suggesting we shouldn’t use LLMs, or that they might represent some sort of event horizon beyond which AI becomes a permanent, ubiquitous feature of society. The acceptance of AI is at an all-time high. (When I started looking for start-up opportunities late last year, and I hope this doesn’t make me a hypocrite, it’s no wonder I fixed on opportunistic uses of LLMs. If you’re doing something with AI today, what else would you use? Case in point.)
I happen to know the director, and I know that his background is in philosophy, especially the thinking of Leo Strauss, about as far from a technical field like AI as one can get. Huh? Even a traditional Straussian’s bread is buttered with LLMs these days. Giving a talk last year to the philosophy department at the University of Florida, I was also struck by the focus on LLMs, and on foundational models generally. Ironically (and somewhat comically), the philosophers aren’t much interested in big-picture theories about minds and machines anymore. They want to get a toehold on some downstream issue like “explainability” (a bugbear of black-box systems like LLMs), “safe AI,” or the threat of bias (a thorny problem involving the composition of the text used in training). A few of the old-timers at the Florida talk exhumed discussion of transformational grammars à la Noam Chomsky (Chomsky was wrong), but to me, everyone in the room seemed content to assume LLMs belonged in philosophy class and that they were the obvious bridge to AGI. Original thinking? Nah. More like kowtowing to Big Tech and data-driven AI.
The counterculture that I thought I saw coming in 2015 never arrived. The truly original voices of ten or fifteen years ago are mostly silent. Nick Carr’s 2010 (updated 2020) The Shallows: What the Internet Is Doing to Our Brains was a finalist for the Pulitzer Prize, and Wired Magazine named his 2016 Utopia Is Creepy “one of its all-time favorite books.” These are wonderful cautionary tales about technology and especially AI, and Jaron Lanier’s 2010 You Are Not a Gadget likewise repays careful reading today. I was thrilled to go on Andrew Keen’s podcast, Keen On, last year, because I knew him from his earlier books like The Cult of the Amateur and Digital Vertigo. But none of these iconoclasts can be described as part of a “growing counterculture” anymore, as I wrote back in 2015. Indeed, to my knowledge none of them has written anything extensive or book-length about modern AI, with its language and other foundational models. Perhaps they don’t feel discouraged, but in some real sense I do. The world of human possibility has shrunk.
I’m not suggesting we roll back the clock. I’m suggesting we try in this era as in previous ones to preserve and encourage the best of our embattled human-centric culture. Today, it seems, we’re too blithe about technology, too willing to ignore or side-step the old debates, too phlegmatic about the role of thinking (and doing) in modern life. Indeed, we seem content—as the popularity of LLMs suggests—with reading more and more writing that’s not even human. It’s generated by—what else?—generative AIs like ChatGPT. We may still “question the hype,” but increasingly we do nothing about it, and our counterculture never arrived.
You can read the full article in The Atlantic here.
Erik J. Larson
Please use correct spelling: Cheat-GPT…
There are likely several reasons why you found academic philosophers to be thusly derelict. Here are a few.
(1) Academic philosophers who are interested in philosophizing about current issues (such as LLMs) are pretty much all thinking about current issues in terms of race and gender. (Why they're doing so is another story.) Hence the focus on bias.
(2) Academic philosophers, along with others in the academy who teach writing-intensive courses, are scrambling to figure out how to deal with two stubborn facts: that LLMs exist and that if they exist, students will use them. As far as I can tell, resigned acceptance is the most common attitude. Hence the felt futility of ethico-cultural arguments against LLMs.
(3) Since at least the early '70s, there's been an entire industry in academic philosophy that has been churning out papers whose arguments have their dialectical basis in the arguments and counterarguments of Thomas Nagel, David Chalmers, John Searle, Frank Jackson, Daniel Dennett, and the like. The topic has been, very broadly, the metaphysics of mindedness vs. unmindedness. These industries have a way of just . . . petering out. Academic philosophers might just be tired of talking about it. Hence the lack of enthusiasm about the big questions you gesture toward.