Sentient AI is a Phlegm Theory
What Mission Impossible: Dead Reckoning, Part One tells us about AI mythology
Hi all,
Thank you for engaging with these ideas. This post is half movie review (a terrible one, at that) and half analysis of why sentient superintelligence is a bad plot device and silly science, or non-science. I briefly run through some of the consciousness research. I hope you enjoy.
Sentient AI in the Movies
Tom Cruise—I mean Ethan Hunt—likes to run. He likes to ride motorcycles. He’s always down to race cars at speed. He’s all in on leaping out of windows, BASE jumping, and hand-to-hand combat. And he always wins the heart of the beautiful girl. You’ll see all of this in Mission Impossible: Dead Reckoning, Part One, the latest installment in the long-running film series and its foray into “existential risk” thinking about superintelligent AI. “The Entity,” as it’s called, is the nemesis that Hunt and the other members of the mum’s-the-word spy organization IMF (Impossible Mission Force) must confront, against all odds, since it’s smarter than any human and learning constantly.
The Entity cleverly captures personal information about Hunt’s steadfast sidekick, Benji Dunn (played by English actor and comedian Simon Pegg), by planting a fake nuclear weapon in airport baggage handling and requiring Dunn to answer riddles and questions about his personal life before defusing it. There’s no bomb. It’s just a ploy to learn more about Dunn. We’re dealing here with a vastly superior, ultra-witty superintelligent AI, bent on taking over the world by infiltrating its defense systems. How do we stop it? Who knows? It’s, well, smarter than us.
The arch-villains in cinema’s other long-running action series, the Bond movies—Dr. No was released in 1962, with Sean Connery debuting as 007—are quirky geniuses with visions of world domination, too. They chuckle at evil, elegantly stroke white cats in posh digs, and watch fighting fish duel it out in fish bowls for amusement. They’re human, if delusional and evil, and they are typically done in by their megalomania and mad mental blind spots. Bond seems to outwit them by having—let’s face it—more common sense. He’s just cooler, basically. The movies seem to warn us that being too smart and having grand visions of taking over the world must end in ruin. The AIs haven’t learned this lesson yet, apparently.
Back to Mission Impossible. “The Entity” is a mashup of villainy, and at times is an unseen shadow, like a looming Poltergeist, soon to terrify by entering the physical world. At other times it makes strange, scary noises like a demonic force, something out of The Ring, as it computes the best way to eliminate humankind. It’s a ghost in the machine and a horror-movie monster, to boot. As a plot device, it’s annoying. It’s a deus ex machina that serves mainly to keep Cruise running, shooting, riding, driving, and loving. And that’s the problem. The most successful movies in the “AI” genre, loosely speaking, embody their AIs. Blade Runner had replicants. The Terminator had Schwarzenegger’s abs and pipes—talk about “embodied”! The bodiless Entity in the latest Mission Impossible movie, we’re told, is running the whole show. Fine. Frankly, it’s boring. The movie is only saved by the human action, where derring-do and heroism keep the film emotional and interesting enough. The Entity is a non-Entity.
The filmmakers handle this ho-hum problem by marrying The Entity to a mercurial human personage known as “Gabriel,” who, we learn, once killed Marie, one of Hunt’s early crushes, and has haunted Hunt’s dreams ever since. A shadowy, steely-gazed evildoer, Gabriel is a real person—he’s the human equivalent of the androids and serves their function on screen, but he’s an actual dude. His dudeness lets The Entity embody itself, so it can get a better seat for the show: the stream of turbocharged events that makes the actual movie a movie. Superintelligence, it turns out, is boring. (Like the critics say: then unplug it.) Let’s move on.
In an early scene, National Intelligence director Denlinger announces in a top-secret meeting that the AI experiment, begun by US intelligence agencies and tested on a Russian submarine (no problem there), has rewritten its own code, started learning on its own, and has since become “sentient.” It’s a creature. It’s alive. It now wants to take over the world—who can say why, exactly? We’re pigeons outside its window. We’re the dumber entity. It does what it does for reasons we can’t fathom.
Phlegm Theories
Ages ago, I spent a year studying at the University of Arizona in Tucson, where a young hipster called David Chalmers had arrived as a new professor. I wanted to study the philosophy of mind and consciousness, of course. I was on an exchange from my program at UT Austin. Chalmers has since become famous for making a simple but powerful point: consciousness isn’t reducible to whatever the brain is doing, because the “functions and structures” of scientific explanation don’t tell you what a red wine tastes like on the tongue, or an ice cube. A pain in the foot. Your mother-in-law. The philosophical “proof” of what’s called qualitative or sentient or conscious experience (I’m running these together for our purposes here—in philosophy we would spend another four thousand hours teasing them apart) gets complicated, involving something called modal logic, which deals with possibility and necessity. The upshot of all the math-logic scribbles is that a philosophical zombie—a creature that acts exactly like any other creature with consciousness but has none—is logically possible, and therefore there can’t be a reduction of the mind to the functions and structures of the brain. Consciousness is a separate fact about the world. It isn’t contained in the facts about the brain.
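For the curious, here is a minimal sketch of the argument’s modal skeleton, in my own paraphrase rather than Chalmers’s exact formulation, where P stands for the conjunction of all the physical (“function and structure”) facts and Q for the fact that someone is conscious:

1. P ∧ ¬Q is conceivable: a “zombie world,” physically identical to ours but with nobody home, is a coherent scenario.
2. If P ∧ ¬Q is conceivable, then it is metaphysically possible: ◇(P ∧ ¬Q).
3. If ◇(P ∧ ¬Q), then Q does not logically supervene on P, so consciousness cannot be reduced to the physical facts about the brain.

Each premise, especially the slide from conceivability to possibility in step 2, has been disputed at length, which is partly why the debate never ends.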
At the time (this was, to date myself, 1998), philosophy departments were drawing lines in the sand between incompatible stances on consciousness: it’s just an illusion, and the brain as science describes it is all there is; it’s something “epiphenomenal,” emergent from the brain but genuinely different (a different property); or the traditional Cartesian view: we have matter (the brain), and we have mind (where consciousness is to be understood and where it resides). It was fun stuff. The debate drew in sharp minds not just from philosophy but from cognate fields like cognitive science, neuroscience, and psychology (one famous consciousness researcher, Stuart Hameroff, is an anesthesiologist; another, Roger Penrose, is a mathematical physicist).
I left philosophy long ago and went into computer science. When I returned to the old debates about consciousness, though, I discovered to my amusement that they’d hardly changed. Additional theories had appeared over the years, integrated information theory (IIT) and global neuronal workspace (GNW) among them, but they all shared the dubious honor of not really answering Chalmers’s pesky objection that consciousness was provably a separate fact (the “proof” involves the modal logic, which of course can itself be disputed). Chalmers called the two problems—those of how the brain works, and those of what the mind is—the “easy” and “hard” problems of consciousness. Easy problems let you measure and experiment; take a phenomenon like attention. What do we pay attention to, and how does that mechanism work? The hard problems are about feels or qualia, how things seem to us internally—in our minds. It’s hard to throw out the evidence that we’re conscious, because we are conscious. That’s the whole problem. We’re not automatons; the “lights are on” inside. But how?
Fast forward to The Entity. Or rather, to the field of AI as it progressed into data science and then became the cornerstone of digital technology and the world. This is where it gets messy. How is The Entity “sentient”? The lights are on inside. How can that be? Computers are basically calculators—they literally add binary digits. Great for spreadsheets, word processors, mimicking human language (ChatGPT), classifying images, and flying drones (not so much driving cars yet). It’s a powerful set of tools. But how can gargantuan adding machines “come alive”? And here’s where the muddle of consciousness research in philosophy helps all those billionaire entrepreneurs bent on having a worldview. It’s a phlegm theory.
Phlegm. Ahem.
In 2016, Princeton psychologist and neuroscientist Michael Graziano wrote a fantastic article in The Atlantic about the state of consciousness research, titled “Most Popular Theories of Consciousness Are Worse Than Wrong: They play to our intuitions, but don’t actually explain anything.” He had a great hook: phlegm. “Phlegm theories,” he explained, “resonate[] seductively with our intuitions and biases, but [don’t] explain anything.” The oozy reference to phlegm comes from the Middle Ages:
According to medieval medicine, laziness is caused by a build-up of phlegm in the body. The reason? Phlegm is a viscous substance. Its oozing motion is analogous to a sluggish disposition.
Graziano’s point—quite a downer—is that our theories of consciousness today are still phlegm theories. We really have no clue how consciousness is connected to our brains, or more specifically what gives rise to it, under what conditions, and how it comes to be subjective rather than just another objective fact in science. How can we have subjective facts in a mature science, or technoscience? What do you mean, subjective? The great German thinker Gottfried Leibniz made a version of this point centuries ago with his famous mill: imagine a machine that thinks, enlarged so that you could walk around inside it as you would inside a mill; you would find only parts pushing on one another, and never anything that explains a perception. Leibniz was an Enlightenment figure, obsessed with God and calculus—he invented the calculus independently of Isaac Newton—but he made a beguiling and simple point. Consciousness isn’t a “function or structure.” It’s not something you can write down and capture. You can’t see it. You experience it. What do we do, then? Simple. Throw in some phlegm.
Ahh, phlegm. This brings me to Bostrom. Superintelligence boosters like Nick Bostrom, who wrote the now-classic (but very flawed) Superintelligence: Paths, Dangers, Strategies in 2014, like to play, on one hand, a pure “science” view of AI that excludes talk of sentience or consciousness. We hear from Brian Christian about “the alignment problem,” where a possibly mindless but very clever AI learns independently to the point where it’s “out of alignment” with its human creators and the rest of us hapless fools, who will no doubt soon be in its crosshairs. It might just be math, but it’s dangerous math. Whether it’s truly sentient or not, it’s too darn smart to ignore.
On the other hand, playing that hand is not much fun—no one likes math but math geeks—and ever since Kubrick’s 1960s masterpiece 2001: A Space Odyssey introduced us to the inimitable HAL 9000 (“I’m sorry, Dave. I’m afraid I can’t do that…”), on up through the Terminators and Replicants and Skynets and all the rest, sentience is what’s sexy, and complicated enough intelligent machinery ought to have it. You can’t be evil if you don’t have a mind—no one thinks a fly is evil for landing on your waffle. Flies are the sorts of creatures that land on waffles. That’s it. So sentient superintelligence is the real prize here.
If pressed, the serious scientific types (Bostrom is actually a philosopher) can fall back on the argument that sentience isn’t required to create an existential risk. (Berkeley computer scientist Stuart Russell also plays the “on one hand, on the other hand” card, as I’ve explained in my book at some length. Maybe it’s a mindless program that still learns to kill us. Maybe it’s sentient. Who can say?) This practiced ambivalence about the question of sentience is the superintelligence enthusiast’s version of ducking and weaving in boxing. It keeps the discussion going, slips the direct questions intended to clarify, and sidesteps well-meaning attempts to pin down exactly what’s being claimed. It’s good work, if you can get it. Ducking and weaving is a good strategy.
Phlegm theories. They’ve found quite a home in imaginings and brain droppings about future AI. No one bothers to explain how a gadget that runs on electricity and adds binary numbers incomprehensibly fast might become like an organism with sentience. No one has to bother with it. Why? Because the science of consciousness is mostly phlegm theories even about human minds, about us. If we don’t have a clue about consciousness, everything can be consciousness. Why not?
In other words, The Entity IS sentient. Get over it.
So. If you’re looking for an intelligent take on existential risk and superintelligent AI, the latest Mission Impossible movie is not for you. There are no insights into sentient AI, just reasonably clever writing about what we guess a super-smart impersonal force would be like: a demonic force in a horror movie, but built of electronics in Silicon Valley. On the other hand, if you like to watch humans be heroic and overcome seemingly impossible odds, this just might be your film. And if you like to watch Tom Cruise run, jump, ride, drive, dive, fight, and get the girl, it’s definitely for you.
Ahem.
Erik J. Larson
Hi Erik. Nice article. I have debated with David Chalmers specifically on my reductio ad absurdum that demonstrates computation cannot generate consciousness *unless* one subscribes to a particularly vicious form of panpsychism. For a summary, see Bishop, J.M., "Artificial Intelligence is stupid ..": <https://lnkd.in/e-MxHXYq>.
For an early rejoinder to DC, see Bishop, J.M. (2002), "Counterfactuals Can’t Count: A Rejoinder to David Chalmers," Consciousness & Cognition 11(4), pp. 642-652.
Or better still, check out the later paper: Bishop, J.M. (2009), "Why Robots Can’t Feel Pain," Minds and Machines 19(4), pp. 507-516. <https://tinyurl.com/3j28euxx>
"consciousness isn’t reducible to whatever the brain is doing, because the “functions and structures” of scientific explanation don’t tell you what a red wine tastes like on the tongue, or an ice cube" — I think consciousness *is* a brain function (and a pretty useful one at that). The phrase "don’t tell you what a red wine tastes" is like asking "do we have free will. The question itself is misleading and — following Uncle Ludwig — we should leave the dead-end room the way we came in.
In the case of free will: we do not *have* free will. We *are* free will (because we are not fully predictable). The closer a potential course of action comes to offering multiple 'equivalent' choices, the more 'close options' show up in our attention/consciousness function (which we know we have), and the more we experience choice and freedom. What in the end triggers one or another of these 'close options' is unpredictable. We *label* that experience 'free will.' Free will is a 'label.'
These discussions could do with a heavy dose of later Wittgenstein (just to keep us sane).