Discussion about this post

Marco Masi:

Right to the point. It was and remains naïve to believe that we are anywhere near AGI. The root cause of this over-trustfulness in an AGI revolution is that humans don't know themselves. We are continuously exteriorized, too prone to objectifying reality, and have become unable to see, perceive, or feel how our own cognition works. If we take a first-person perspective, it is easy to realize that our cognition is based on semantics, and the meaning of things is directly related to conscious experience.

You can't know what colors, sounds, tastes, smells, hot and cold, or touch and sight mean without having experienced them. You can't understand what wetness means unless you have experienced the wetness of water. No matter how large and sophisticated your information-processing system is, you will not understand what the image of a street with a human cycling towards a traffic light represents. You must have experienced at least something of the environment directly: the weight of your body walking on that street, conscious and experiential interaction with other humans via sounds, speech, vision, and touch, and the visual experience of the redness, yellowness, and greenness of the traffic lights. You can't understand a thing without conscious experience, not even in principle.

You can't drive a car without a semantic understanding of the environment, the street, the cyclist, the traffic lights, and so on. There is no reason to believe that a self-driving car could magically understand what even humans can't understand and do until they have a conscious experience of these things. The same holds for any AGI narrative. There is a direct relationship between general intelligence and conscious experience. In other words, AGI will never exist unless it becomes conscious, because real intelligence needs semantic understanding. Adding another trillion neurons, a gazillion parameters, flooding an AI system with more data, or providing it with even more number-crunching power won't help. No consciousness, no AGI. After all, if one takes the first-person perspective, this becomes self-evident.

Gerben Wierda:

A good summary. But there is one doubt here before we declare a *full* AI winter. Yes, we may not get AGI or anything like it, but we still may not be heading for an AI winter. AI does not need to reach AGI levels to be disruptive. GenAI introduces the category 'cheap' (in both meanings) in the same way that machine weaving did at the start of the industrial revolution (https://www.bloodinthemachine.com/p/understanding-the-real-threat-generative). So basic graphic arts and text may be replaced by GenAI (it is already happening). AI in the sense of big-data analytics is also still providing useful (and thus meaningful) results.

Besides, as soon as an AI winter sets in for whatever the AI hype du jour is, the 'AI' moniker gets tainted and is avoided. So while there may be a don't-mention-'AI' winter, there is not a full AI winter. Yann LeCun has said that he called it 'Deep Learning' to avoid the AI moniker, tainted back then by the previous AI winter, because you would not get funding for anything labeled AI. Guess what is labeled 'AI' now...

I guess we will see something like the dot-com crash: the hype is weeded out, and the actually useful stuff remains. (And maybe another nefarious big-tech takeover added on top, like what happened with social media.)

