The Perils of Mythical AI
No, the machines aren't coming alive, and believing they will is culturally dangerous
A piece on the existential risk brouhaha. I hope you enjoy.
Erik J. Larson
The field of Artificial Intelligence (AI) launched auspiciously at the now-famous Dartmouth Conference in 1956. From the outset, the pioneers of AI—then mathematicians, economists, and scholars drawn from other academic fields—viewed the new field as an opportunity to finally make good on a longstanding cultural and scientific dream: the creation of a humanly intelligent machine. The fledgling field and its hopeful leaders straddled science and myth, and theories about AI printed in otherwise rigorous scientific journals anthropomorphized the technology without apology. For example, statistical systems like those using neural networks were called learning systems. The success of such systems spawned an entire subdiscipline, machine learning. Computer storage was dubbed memory. Chess-playing systems played the game of chess, somehow without cognizance of playing a game or even interacting with a human player at all. AI was a field dedicated to proving the possibility of a thinking machine. AI mythology assumed this possibility was to some degree already real, and the victory of truly intelligent machinery was simply a matter of time. By the 1960s, the noted AI luminary Marvin Minsky of MIT declared that machines capable of doing anything a person could do were but a generation away (tellingly, Minsky served as a consultant on Stanley Kubrick’s science fiction movie 2001: A Space Odyssey, which featured a super-smart AI, the inimitable HAL 9000).
The Birth of the Modern Myth
If “mind” words like learning and thinking marked a fledgling AI from the get-go, entire mythological narratives dominated speculation about future AIs. AI turns like a sunflower toward all things futuristic, as it represents the as-yet-unfulfilled promise of a fantastic outcome. But whereas predicting the future in other fields retains a patina of humility (predicting the future, after all, is impossible), futurism about AI long eschewed such constraints—or never had them. As early as the 1960s, Alan Turing’s wartime statistician, I.J. Good, pondered the emergence of an “ultraintelligent” AI system, one that first reaches human levels of intelligence and then, by designing smarter versions of itself, quickly eclipses paltry human smarts. Good’s ultraintelligence futurism became the darling of science fiction writers and indeed mainstream practitioners, leading to the Singularity movement sparked by computer scientist turned sci-fi writer Vernor Vinge and indefatigably promoted by futurists like Ray Kurzweil, Hans Moravec, and many others.
The “Singularity,” properly a term in mathematics, quickly supercharged mythology about AI. Kurzweil, now a director of engineering at Google, did much to popularize the notion, arguing that machine intelligence grows exponentially and so will overtake human smarts more quickly than we’re likely to guess or predict. The crossover point where machines surpass humans—Kurzweil puts it in the 2040s—represents the Singularity, the point of no return, like passing through an event horizon, beyond which the world ceases to make sense to us because we are no longer the smartest “apex” brains on the planet. Self-styled “Singularitarians” sprouted in Silicon Valley and soon formed institutes like the Machine Intelligence Research Institute (MIRI), dedicated to addressing the myriad and seemingly intractable challenges of the Singularity.
In 2014, Oxford philosopher Nick Bostrom adapted the Singularity hypothesis into (as he thought) a slightly less mystical thesis about “superintelligence.” This was Good all over again. AI systems, said Bostrom, suffer setbacks and funding cuts, but over time they get more intelligent and can solve a wider class of problems once the sole purview of human beings. Superintelligence, in other words, is a technological fait accompli, as the improvement of AI over the years signals its eventual realization. But superintelligent systems represent a grave and even existential threat to humanity because, simply put, they can outsmart us. Their superior mechanical brains may also harbor dark (though perhaps innocent) purposes and motivations threatening our very existence, like solving climate change by exterminating us. Who can say? The systems will be too clever for us to negotiate with, or simply to pull the plug on.
Bostrom’s AIs seem equipped not only with superintelligence but with all the accoutrements of mind, including even consciousness and emotions. Though Bostrom himself has been cagey about truly conscious machines, others aren’t. Kurzweil, for instance, in his 1999 book The Age of Spiritual Machines, argued that all aspects of humans, even our spirituality, will be surpassed by future AI. Philosophers like Sam Harris, a perennial nail-biter about AI in spite of having no experience in the field, also espouse the view that machines getting too smart might turn on us. Indeed, an entire movement is now afoot, fretting about the existential risk of runaway machines with minds. How did magical thinking infect an ostensibly scientific discipline?
Myth Trumps Evidence
In spite of the almost religious fervor about the mental powers of AI, we have at best inconclusive evidence that AI systems will get smart like humans—the so-called general intelligence attributed to us has so far proven entirely elusive for machines. It’s an open question whether future AI can really achieve general, or human-level, intelligence. To date, we have evidence that AI systems can solve an increasing range of tasks, but the tasks are narrowly defined, like playing a game or recognizing a face. Even the current obsession, large language models and their applications like ChatGPT, while certainly impressive, still generate word sequences culled from human language on the web. While their output often seems generally intelligent, the systems can’t be used outside the web. They can’t, for instance, drive a car or operate in the natural world. They’re confined to the cyber world. Are we really building something generally intelligent, like a human? It seems not.
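To make “generating word sequences” concrete, here is a deliberately toy sketch: a bigram word model in Python, invented purely for illustration and nothing remotely like the transformer networks behind ChatGPT. It shows the basic statistical move such systems make: sample a plausible next word from counts over prior text, with no understanding anywhere in the loop.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model" that predicts the next
# word purely from co-occurrence counts in its training text. Real LLMs use
# enormous neural networks trained on web-scale corpora, but the underlying
# move is the same: emit a statistically plausible continuation.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which words follow which (duplicates preserve frequency).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Build a word sequence by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: no observed continuation
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

Scaled up by many orders of magnitude, with neural networks standing in for count tables, something like this is what modern language models do; nothing in the scaling obviously adds a mind.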
Beyond intelligence, talk of motivations and desires in future AI gets even murkier, and further from the usual strictures of science and evidence. While some existential risk thinkers view talk about intentions and motivations as metaphor, others (like Kurzweil) flatly don’t. And even the ostensibly more careful contributors, like UC Berkeley’s Stuart Russell or Paul Christiano, formerly of OpenAI, who worry about the “alignment” between human and machine, seem perpetually to ping-pong between talk of actual motivations and purposes and the usual tech-speak about encoding objectives and goals into computer systems—the latter something entirely quotidian.
In truth, AI is more a cultural phenomenon than a field, and its main maneuver is to fire the human imagination with myth while dousing those flames whenever scientific respectability is required. No wonder the public is confused—everyone is. Even respected AI scientists and computer scientists play this game, wittingly or not, exposing the depth of the mythology and magical thinking that have always been part of the story of AI.
Ghost in the Machine
Beyond intelligence concerns and speculation about motives and purposes, there’s the beguiling possibility of machine consciousness: the lights are on inside, so to speak, and the systems feel pain and pleasure and therefore deserve, indeed must be given, humane and ethical treatment by humans (assuming we are not already enslaved by them). AI futurists are divided on this issue—Kurzweil a resounding yes, Bostrom a somewhat waffling no—but it attracts an astonishingly sizeable cohort nonetheless. Clearly not empirical, the view would get short shrift in other scientific disciplines like physics or chemistry. Still, it has captivated generations of AI enthusiasts, practitioners, and the public alike. Mythology about AI seems to be intrinsic to the field—which is to say AI is not just a field. It free-floats above boring evidentiary considerations. And like all great mythologies, it doesn’t much matter in the end whether it’s true. It’s the thought—or rather the feeling—that counts.
The Downside of Magical AI
One might be forgiven for dismissing the mythical or magical elements of AI as irrelevant to the day-to-day concerns of AI scientists, occupied as they are with building systems that solve actual problems. Yet magical thinking about AI is bound to have deleterious downstream consequences for sober assessments of today’s achievements and tomorrow’s outcomes. For one, “Mythical AI” insists that the broad success of the field is simply a matter of time. This is historical determinism, not discovery and innovation. Scientific discovery and technical invention presuppose rather the opposite: that scientists must strive to understand and solve problems currently still out of reach. How this happens, and on which problems, is profoundly non-deterministic, involving a recipe of factors ranging from funding to blind luck. Mythical AI wishes all this away, telling a just-so story about the future of the field. Perhaps ironically, Mythical AI vitiates the very human innovative quest that might one day succeed in bringing it about.
Mythical AI also imperils realistic assessments of system performance. Nothing is gained, for instance, by believing that ChatGPT is now generally intelligent and mindful. Mythologizing it will not further our understanding of the mistakes it makes, or why it makes them. Such post-mortem analyses of complex systems like ChatGPT require a full nuts-and-bolts understanding of them as “boring” statistical systems. Claiming such a system is also truly intelligent is an annoying gloss on a scientific problem, at best.
Crying Wolf
Further, misplaced confidence in the eventual appearance of AGI or superintelligence pulls scientists, the media, and the public into unnecessary, fear-mongering discussions. While it may be helpful and even prudent to sign petitions and draft open letters on very real threats like cyber warfare, it’s a waste of time and resources to perform such rituals over hypothetical future AI systems that intentionally run amok. An open letter calling this year for a six-month pause on “giant AI experiments” has received to date over 33,000 signatures, including those of Elon Musk, Apple pioneer Steve Wozniak, historian Yuval Noah Harari, and Stability AI CEO Emad Mostaque. The letter is “dripping with AI hype,” as University of Washington computational linguist Emily Bender charged, and appropriately enough indulges mythology in the confusing context of a scientifically serious discussion: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” Well, should we? Are you still beating your pet?
Remove the mythology, and discussions about the trajectory and impact of AI, now and tomorrow, regain rationality and common sense. The human designers and operators of such systems—not an AI that magically makes free choices—should be the focus and the targets of blame, as it will be their actions or inactions that lead to “runaway AI” scenarios anyway. Humans—bad actors—are the real existential threats, not the systems. Gobbledygook about strong-willed and dashingly brilliant computer systems can’t illuminate genuine dangers, and it pulls bright minds into mythological sidebars without advancing real social concerns and goals. How much ink has been spilled this year over the question of developing AI minds?
IMAX AI
Mythologizing AI has a rich pedigree. Even Alan Turing, who penned perhaps the most famous paper on AI ever, “Computing Machinery and Intelligence” in 1950 (predating both the field and the term), indulged this Janus-faced blend of myth and science. He wrote somewhat tongue-in-cheek, to be sure, and no doubt recognized the element of play and fun in what was to become AI, an element that distinguished the science from particle accelerators and petri dishes. If Turing were alive today, he might also recognize the need for more sober scientific discourse. Let’s be honest: ascribing to our technology causal and mental powers it doesn’t have is junk science, and it imperils clear thinking and innovation. We seem perpetually obsessed with minimizing the powers of our own minds while boosting beyond reason the presumed powers of our technology. The gambit is ultimately self-defeating and stupid: we will need our own minds to innovate. In a world already awash in misinformation and fake news, blurring fact and fiction in science seems particularly unwise. Mythical AI has a place as science fiction, and it’s in cinema, novels, and other facets of culture that we can and should enjoy it.