In my book The Myth of Artificial Intelligence (“the Myth”), I made a case in Part I, “The Simplified World,” that AI luminaries from the get-go made what I call “intelligence errors,” simplified ideas about intelligence that box in the field and produce all sorts of downstream nonsense the rest of us end up debating or assuming must be true. I attributed—somewhat cheekily, ha ha—the first error to Alan Turing himself, explaining how a close look at his published papers and talks reveals a gradual shift from treating intelligence as something outside mathematical “ingenuity,” as he called it, to something tractable and suitable for the soon-to-arrive field of AI: intelligence as problem solving. Problem or puzzle solving fit early AI efforts like automating chess play, but as the field expanded into broader and more philosophical questions about commonsense reasoning and understanding, it became a limited view. It seemed, frankly, an error.
I took a bit of heat for including Turing’s later thoughts, though I pointed out that it was a reasonable enough working hypothesis at the outset of a new field, and few would question Turing’s own genius or acumen. It’s just that he seems to have offered up a compute-friendly version of “intelligence” that later AI scientists would take for granted, and that seems to have had certain deleterious effects on the field.
Turing wasn’t really the egregious case, though. That, as I explained, was the “superintelligence error,” which arrived in the 1960s with an odd yet frequently cited (and now famous) passage from Turing’s colleague at Bletchley, the statistician I.J. “Jack” Good. As I put it:
Jack Good, Turing’s fellow code-breaker, also became fascinated with the idea of smart machines. Turing no doubt primed his colleague’s imagination at Bletchley and afterward, and Good added a sci-fi-like twist to Turing’s ideas about the possibility of human-level intelligence in computers. Good’s idea was simple: if a machine can reach human-level intelligence, it can also surpass mere human thinking. Good thought it obvious that a feedback loop of sorts would enable smart machines to examine and improve themselves, creating even smarter machines, resulting in a runaway “intelligence explosion.” The explosion of intelligence follows because each machine makes a copy of itself that is still smarter—the result is an exponential curve of intelligence in machines that quickly surpass even human geniuses. Good called it ultraintelligence: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
Good’s “intelligence explosion” idea was quickly seized upon by sci-fi nerds, who clearly saw it as a way to get superhuman robots and androids like the replicants in Blade Runner, who were at least as intelligent as humans—far more agile and strong—and therefore eminently worthy of the fictional page or screen. Weirdly, bona fide AI scientists also picked up Good’s idea, along with wannabe AI experts who don’t build actual systems but are quite good at prognosticating how smarter systems will come about:
Oxford philosopher Nick Bostrom would return to Good’s theme decades later, with his 2014 best seller Superintelligence: Paths, Dangers, Strategies, making the same case that the achievement of AI would as a consequence usher in greater-than-human intelligence in an escalating process of self-modification. In ominous language, Bostrom echoes Good’s futurism about the arrival of superintelligent machines:
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.
To Bostrom, superintelligence is not speculative or murky at all, but rather like the arrival of nuclear weapons—a fait accompli, and one which has profound and perhaps dire consequences for mankind. The message here is clear: don’t dispute whether superintelligence is coming. Get ready for it.
What’s the mechanism? Machines that design smarter versions of themselves. I addressed this bit of folly as well:
What are we to say about this? The Good-Bostrom argument—the possibility of a superintelligent machine—seems plausible on its face. But unsurprisingly, the mechanism by which “super” intelligence results from a baseline intelligence is never specified. Good and Bostrom seem to take the possibility of superintelligence as obviously plausible and therefore not requiring further explanation. But it does; we do need to understand the “how.”
…
Or suppose we borrow language from the biological world (as AI so often does), and then confidently declare that computational capability doesn’t devolve, it evolves. Looking deeper, we see that this argument is plagued once again by an inadequate and naive view of intelligence. The problem—a glaring omission—is that we have no evidence in the biological world of anything intelligent ever designing a more intelligent version of itself. Humans are intelligent, but in the span of human history we have never constructed more intelligent versions of ourselves.
So, how do the machines do it? How does it happen at all? No one bothers to say. (To be fair, no one bothers to explain the synthetic super-brains of replicants, either.)
Interestingly, superintelligence boosters sometimes drop John von Neumann’s name as one of the early visionaries of intelligence explosions. He did make some (rather cryptic) remarks to his Manhattan Project colleague, the mathematician Stanislaw Ulam. But the remarks, as recalled years later by Ulam, were about technology generally running amok, in roughly the sense that the world gets too complicated and we crash Wall Street because no one understands the systems anymore. Von Neumann was pointedly skeptical of the “ultraintelligence” claim, and in a series of lectures in 1948 explained that each machine playing “mother” would have to be more complicated than its offspring machine, and that the bootstrapping would therefore have to rely on blind chance, not some evolutionary process as dreamt up by Good, Bostrom, and others of their ilk.
Frustratingly, “existential risk” types like Sam Harris—who is not even an AI “guy”—quite cheerily help themselves to the Good-Bostrom thesis and move on to explaining how killer robots, immensely more intelligent than us, could spell our doom. Fun stuff.
Tech Bros Join the Fray
In another and quite related vein, our latest luminaries, like DeepMind’s Demis Hassabis and OpenAI’s much-in-the-news Sam Altman, now promulgate a variation of the Good-Bostrom hocus-pocus: “solving intelligence” by presumably building a more powerful deep neural network computer (or what have you) and then using this superior intelligence “to solve all our other problems.” (If you think this has a genie-in-the-bottle feel to it, you’re on the right track.)
Four years ago, Hassabis revealed the much-awaited formula to the Royal British Academy in two helpful steps (this is not a joke): 1. Solve intelligence. 2. Use it to solve everything else. Altman, for his part, explained how OpenAI would first create a “general intelligence,” and then use that to figure out how to make money.
Okaaay. Oh my. The AI publication The Gradient has pointed out a laundry list of problems with the tech-bro thesis. Here I’ll just attempt to summarize my favorites. The first is that while large language models (LLMs) and ChatGPT have at least temporarily titillated natural language processing researchers, the field overall continues to suffer from seemingly intractable problems. Robotics is rarely in the news because the systems still can’t wriggle like worms or walk better than toddlers. Decades of hyper-funding and intense R&D efforts have barely budged the needle.
Self-driving cars, too, are stalled (no pun intended), in large part because the AI tech we use—neural network brains trained on scores of game-like driving exercises, along with LIDAR and other sensors to detect and classify outside objects—can’t handle a range of edge cases, from partially occluded road signs to snow to leaves blowing across the street. Or trick-or-treaters, who don’t look like ordinary pedestrians (parents, beware). “Solving intelligence” may take decades, if it ever happens, and it seems we should be putting some of our own smarts into solving thorny problems like climate change, poverty, energy consumption, and all the rest without waiting for the computer minds to do it all. We might want to get on with actually making the world a better place now, in other words. Objections?
There are even worse problems and confusions. Most problems we care about—take the flatlining of labor productivity, say—involve complex human and non-human factors and aren’t obvious candidates for sci-fi-like super brains to “figure out.” We already have lots of human geniuses, and they often do a great job of mucking things up. Why would even smarter computer geniuses be any different? We don’t really need more intelligence per se; we need the right kinds of intelligence and cooperation among relevant groups and organizations with an eye to human-based outcomes. The idea that a brain in a vat will cut through all the red tape and solve complex geopolitical or domestic problems is just stupid.
Bottlenecks and Trade-offs
What else? Most any complex problem that has us flummoxed today involves cumbersome activities in the physical world, like pouring cement or constructing houses, alongside forecasting and “digital” aspects. And as economists have known for decades, it’s the weakest link that limits the dynamics of the overall system. We can add brilliant AI, but someone still has to do all that construction, and there are inherent limitations in such cases (the example here refers to GDP growth). The brilliant AI may be sucking up dollars we need to buy cobalt, or lumber. Complex problems are full of bottlenecks and trade-offs, in other words, which will invariably require human judgment and make AI a part of the system rather than a superordinate “solver” of it. The Gradient essay helpfully quotes one writer who poses the problem with the tech bro thesis brilliantly, getting straight to the heart of the matter with invention itself:
Invention has started to resemble a class project where each student is responsible for a different part of the project and the teacher won’t let anyone leave until everyone is done... if we cannot automate everything, then the results are quite different. We don’t get acceleration at merely a slower rate—we get no acceleration at all.
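To make the weakest-link point concrete, here is a minimal sketch in Python (my own toy illustration, not from the post or The Gradient essay), assuming a Leontief-style setup in which every task is required and aggregate output is capped by the slowest one:

```python
# A toy, weakest-link model of the bottleneck argument above. This is an
# illustrative assumption (Leontief-style production: all tasks are required,
# so the slowest task binds), not a model taken from the essay itself.

def output(task_rates):
    """Aggregate output when all tasks are complements: the slowest one binds."""
    return min(task_rates)

# Ten complementary tasks, each initially producing 1 unit per period.
baseline = [1.0] * 10
print(output(baseline))          # 1.0

# Suppose AI makes nine of the ten tasks 100x more productive, but one task
# (say, pouring cement) cannot be automated and stays at its old rate.
mostly_automated = [100.0] * 9 + [1.0]
print(output(mostly_automated))  # still 1.0 -- "no acceleration at all"

# Only when the last, physical-world task also speeds up does the system move.
fully_automated = [100.0] * 10
print(output(fully_automated))   # 100.0
```

Speed up nine of ten tasks a hundredfold and the aggregate doesn’t budge; the un-automated task sets the pace, which is exactly the class-project dynamic described above.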
Finally—my personal favorite—much of our knowledge is tacit rather than programmable in the first place. Throwing a big computing brain into the mix assumes all the relevant knowledge can be computed, a proposition that is not only absurd on its face but would be—and always is—downright foolish when implemented on real-world problems. The economist Friedrich Hayek pointed this out back in 1945: “To assume all the knowledge to be given to a single mind... is to assume the problem away and to disregard everything that is important and significant in the real world.” The chemist and philosopher Michael Polanyi made similar observations, as The Gradient essay notes: “we know more than we can tell.” Add to all this tacit knowledge, which we don’t know how to encode in computers anyway, all the social and value-based aspects of complex human problems.
Tech bros, c’mon. Get the self-driving cars working before lecturing the world about solving all its problems.
Erik J. Larson
Nice post. I would also add that this “tech nonsense” rests on at least two premises. First, that the mind is computational, ignoring all the arguments to the contrary. Second, that mind-brain identity is an indisputable given, something that, whatever neuroscientists and philosophers of mind tell us, is far from obvious.
"Tech bros, c’mon. Get the self-driving cars working before lecturing the world about solving all its problems."
Yeah!
Promises, forecasts, and models are not reality.
Where's the beef, boys?