Five Reasons We're Heading for an AI Winter
Mainstream AI is dominated by incrementing version numbers, from GPT-4 to GPT-5. Meanwhile, fundamental progress is at a standstill.
Hi everyone,
Quick post here that’s come out of research for the book, but I hope it’s well worth a read. Here’s why: we’re in a hype cycle yet again. Here are five solid reasons you can take that to the bank.
Self-driving cars are nowhere close to “Level 5” fully autonomous driving. Rewind to 2016, and that was around the corner. Public trust and safety issues, legislative hurdles, technological barriers, and major industry setbacks all bedevil the effort.
When passengers and pedestrians die on the road and there’s only a mass of sensors and largely undecipherable neural network inferences to point the finger at, we could have predicted the stall. But here’s the real scoop. Forget about safety, regulations, public trust, dried-up funding and all the rest. No one has a clue how to propel two tons of metal and hard materials down a road at driving speeds without running up against the limits of the AI—the core intelligence driving the car. The problems are legion, and they all point directly at the need for general cognitive abilities, like those our motor and prefrontal cortices supply when forming a model of our bodies and the environment our bodies (and our vehicles) are moving in. Here’s a quick list: sensor reliability, complex environments with “long tail” objects and events (otherwise known as driving), unrealistic and dangerous limits to real-time learning and decision-making by the core AI, and a marked inability to deal with…. other drivers and their decisions on the road.
To put it more simply: if AI were headed toward human-level intelligence, self-driving cars would be making steady progress toward Level 5. Instead, the major players have largely given up. Ford, Volkswagen, Uber and Lyft have all bailed before the news gets worse. Tesla and Musk have lately gone silent, and whenever Musk has tweeted recently about autonomous driving it’s been cautionary rather than “summoning the devil” with unheard-of powers. The script has flipped. AI winter reason #1.
Silicon Valley has gone all in on language models, where we’re already running out of useful data and maxing out the compute needed for the dubious pursuit of finding flexible intelligence at the end of training models on tens of thousands of computer chips with trillions of parameters. As many critics and even boosters have acknowledged, the technology is inherently “epistemically unstable,” as the generative approach undergirding it cannot tell the difference between high-probability words amounting to bullshit sentences and truth. Sam Altman himself—somehow now the voice of “AI”—has tacitly conceded limitations. Critics simply say the problem is not fixable, as anyone who understood how the technology works would have to agree. Hmm. Our focus on Big Data AI and our new obsession with versions (from GPT-4 to GPT-5) mean we’re locked into a shitshow that will make a few people wealthy and otherwise stall progress on smarter ideas for getting to intelligence.
The field is hugely divided, and even pioneers of the current tech—transformer-based deep neural networks—argue over whether we’re getting anywhere. One of the “Godfathers of AI,” Chief AI Scientist at Meta, Yann LeCun, keeps telling the media and everyone else that LLMs lack an understanding of the real world and will never lead to AGI. That honor, he says, will go to a “next generation” AI that shows actual promise toward flexible general intelligence. Yes, he works for Meta. But Meta invests heavily in this very technology, and LeCun has long been recognized as someone who speaks his mind as a scientist. Major dissension and disagreement among the top experts in the field tell you what you already suspected: we’re in another hype cycle.
General intelligence—AGI—is still happening in some fuzzy future world. Since the inception of the field of AI, its practitioners and enthusiasts have maintained that true intelligence is a generation away. Today, with all the progress, it’s still up ahead…. somewhere. Ray Kurzweil sticks to his longstanding date of 2029 (or thereabouts, he now says), while another Godfather of AI, Geoff Hinton, predicted in 2023 that AGI “could be achieved” within 5 to 20 years. Nice. If not five, then we have another 15 years to work with. A survey of 738 AI experts estimated a 50% chance of achieving high-level machine intelligence (what does that mean?) by 2059. 2059? Perfect. Translation: we’re in another hype cycle, which is inevitably followed by another AI Winter.
The Big Kahuna: we’re not even working on it. For decades, researchers in AI have been picking specific problems that humans need intelligence to solve, then engineering a particular solution using a computer. Thus the set of narrow tasks we can perform with AI continues to grow, but progress toward a flexible intelligence that can learn to solve any problem—from washing dishes to answering Jeopardy! questions to making scientific discoveries (and forget solving world hunger, inflation, or anything real-world like that)—stays stubbornly flat. Yes, AI can beat the world’s best humans at chess or Go (actually, not Go anymore. Turns out, AlphaGo didn’t know f*ck all about the game of Go.). But that’s not what “AGI” is all about. It’s about building a dynamically learning, flexible intelligence. We’re not working on that. Big Reason #5 we won’t get it anytime soon.
Silver lining? Younger researchers (and the hoary old critics) are coming up in the ranks, eager to try new ideas and point us in new directions. Eager to make a name for themselves pushing the frontiers of knowledge. Let’s get them some funding, and prove that we humans aren’t so single-minded and hype-driven after all.
Erik J. Larson
Right to the point. It was and remains naïve to believe that we are anywhere near AGI. The root cause of this over-trustfulness in an AGI revolution is that humans don’t know themselves. We are continuously exteriorized, too prone to objectifying reality, and have become unable to see, perceive, or feel how our own cognition works. If we take a first-person perspective, it is easy to realize that our cognition is based on semantics, and the meaning of things is directly related to conscious experience. You can’t know what colors, sounds, tastes, smells, hot and cold, or touch and sight mean without having experienced them. You can’t understand what wetness means unless you have experienced the wetness of water. No matter how large and sophisticated your information processing system is, you will not understand what the image of a street and a human cycling on that street towards a traffic light represents at all. You must have experienced at least something of the environment directly: for example, the weight of your body walking on that street, a conscious and experiential interaction with other humans via sounds, speech, vision and touch, and the visual experience of the redness, yellowness and greenness of the traffic lights. You can’t understand a thing without a conscious experience. Not even in principle. You can’t drive a car if you don’t have a semantic understanding of the environment, the street, the cyclist, the traffic lights, and so on. There is no reason to believe that a self-driving car could magically understand what even humans can’t understand and do until they have a conscious experience of these things. The same goes for any AGI narrative. There is a direct relationship between general intelligence and conscious experience. In other words, AGI will never exist unless it becomes conscious, because real intelligence needs semantic understanding. Adding another trillion neurons or a gazillion parameters, flooding an AI system with more data, or providing it with even more number-crunching power won’t help. No consciousness, no AGI. After all, if one takes the first-person perspective, this becomes self-evident.
A good summary. But there is one doubt here before we declare a *full* AI winter. Yes, we may not get AGI or anything like it. But we still may not be heading for an AI winter. AI does not need to reach AGI levels to be disruptive. GenAI introduces the category 'cheap' (in both senses) in the same way machine weaving did at the start of the industrial revolution (https://www.bloodinthemachine.com/p/understanding-the-real-threat-generative). So basic graphic arts and text may be replaced by GenAI (it is already happening). AI as in big data analytics is also still providing useful (and thus meaningful) results.
Besides, as soon as winter sets in for whatever the AI hype du jour is, the 'AI' moniker gets tainted and avoided. So while there is a don't-mention-AI winter, there is not a full AI winter. Yann LeCun has said that he called it 'Deep Learning' to avoid the then-tainted (post-AI-winter) AI moniker, because you would not get funding for anything labeled AI. Guess what is labeled 'AI' now...
I guess we will see something like the dot-com crash. The hype is weeded out, the actually useful stuff remains. (And maybe another nefarious big-tech takeover added to it, like what happened with social media.)