Right to the point. It was and remains naïve to believe that we are anywhere near AGI. The root cause of this over-trustfulness in an AGI revolution is that humans don't know themselves. We are continuously exteriorized, too prone to objectifying reality, and have become unable to see, perceive, or feel how our own cognition works. If we take a first-person perspective, it is easy to realize that our cognition is based on semantics, and the meaning of things is directly related to conscious experience.

You can't know what colors, sounds, tastes, smells, hot and cold, or touch mean unless you have experienced them. You can't understand what wetness means unless you have experienced the wetness of water. No matter how large and sophisticated your information-processing system is, you will not understand what the image of a street, with a human cycling on it towards a traffic light, represents. You must have experienced at least something of the environment directly: the weight of your body walking on that street, conscious and experiential interaction with other humans via sound, speech, vision and touch, the visual experience of the redness, yellowness and greenness of the traffic lights. You can't understand a thing without conscious experience. Not even in principle.

You can't drive a car if you don't have a semantic understanding of the environment: the street, the cyclist, the traffic lights, and so on. There is no reason to believe that a self-driving car could magically understand what even humans cannot understand until they have had a conscious experience of these things. The same holds for any AGI narrative. There is a direct relationship between general intelligence and conscious experience. In other words, AGI will never exist unless it becomes conscious, because real intelligence needs semantic understanding. Adding another trillion neurons or a gazillion parameters, flooding an AI system with more data, or providing it with even more number-crunching power won't help. No consciousness, no AGI. After all, if one takes the first-person perspective, this becomes self-evident.

Hi Marco,

I couldn't agree more. There's a kind of "firstness" to intent or purpose; it's not "constructed" from something more basic. This stuff is slippery, I realize, but I agree that organic or natural intelligence seems to occur in a different context or "dimension" than a reductive material account would allow for. Just as philosophers offer pain as an example of something you can't be cognitively confused about, agency is properly basic like that as well. This is a problem for anyone engineering computational systems to not just APPEAR smart but to actually BE smart. Thanks for your comment!

A good summary. But there is one doubt here before we declare a *full* AI winter. Yes, we may not get AGI or anything like it. But we still may not be heading for an AI winter. AI does not need to reach AGI levels to be disruptive. GenAI introduces the category 'cheap' (in both senses of the word) in the same way machine weaving did at the start of the industrial revolution (https://www.bloodinthemachine.com/p/understanding-the-real-threat-generative). So, basic graphic arts and text may be replaced by GenAI (it is already happening). AI as in big-data analytics is also still providing useful (and thus meaningful) results.

Besides, as soon as an AI winter sets in for whatever the AI hype du jour is, the 'AI' moniker gets tainted and is avoided. So, while there may be a don't-mention-AI winter, there will not be a full AI winter. Yann LeCun has said that he called it 'Deep Learning' to avoid the AI moniker, tainted at the time by the AI winter, because you would not get funding for anything labeled AI. Guess what is labeled 'AI' now...

I guess we will see something like the dot-com crash. The hype is weeded out, the actual useful stuff remains. (And maybe another nefarious big-tech takeover added to it, like what happened with social media.)

Hi Gerben,

Good to hear from you. I haven't read the full article you provided, but I subscribed and started reading it and will respond here with some thoughts along the way.

First, yes, I agree: no AGI but no winter is a possibility. Here's why I think it's just a logical possibility and not an empirical one. We have zero evidence so far that the diffusion of foundation models has led to labor productivity increases.

If having the equivalent of super smart digital servants for your boring desk job isn't increasing productivity, the question is why? Why isn't this breakthrough technology "revolutionizing" the world?

Robotics makes steady but maddeningly slow progress; notice how it's not making the news cycle much these days. As of 2024, the best robot systems in the world load a dishwasher about as well as a six-year-old child. So, what jobs are threatened by AI? In the blue collar world, probably jobs we didn't want: painting the door of a skeleton vehicle in an automobile assembly facility, and so on.

In the white collar world, since ALL LLMs are inveterate bullshitters--I use ChatGPT 4 sometimes for research when it would actually add value; I like it, yes, but it will happily invent stuff, to my detriment if I'm not watching for it--the company has to pay a human to review the output of the LLM. THAT process might actually end up costing companies more money than simply having dumber but more controllable technology, and smarter, more educated, and more competent humans. We want the tech to make our jobs and lives easier. Replace what? Where?
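
(A toy back-of-envelope version of that review-cost point, sketched in Python. Every number below is invented for illustration: the per-document API cost, the review and writing times, and the analyst's rate are all hypothetical. The only point is that mandatory human review of unreliable output can erase, or reverse, the savings.)

```python
# Toy cost comparison: LLM drafting plus mandatory human review vs. a
# human analyst writing from scratch. All numbers are hypothetical.
docs_per_month = 1_000
llm_cost_per_doc = 0.50   # assumed API cost per document
review_minutes = 50       # assumed: fact-checking invented claims is slow
write_minutes = 45        # assumed: analyst drafting time from scratch
hourly_rate = 60.0        # assumed fully loaded analyst rate

llm_pipeline = docs_per_month * (llm_cost_per_doc
                                 + review_minutes / 60 * hourly_rate)
human_only = docs_per_month * (write_minutes / 60 * hourly_rate)

print(f"LLM + human reviewer: ${llm_pipeline:,.0f}/month")
print(f"Human analyst only:   ${human_only:,.0f}/month")
# With these numbers the "cheap" pipeline costs more: once verifying
# invented facts takes as long as writing, the savings are gone.
```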

By my lights, AI seems to be leading to job creation and industry expansion. Just... in the wrong direction for our best future.

Thanks for your comment, as always!

GenAI might be a technology that finds niches where it is 'good enough' (as a whole or as part of a larger setup). Brian Merchant's comparison with weaving machines is pretty apt. Most cloth is woven by machines now, and when that started the quality was pretty shoddy. In hundreds of years, the machines have not become good enough to actually sew all our clothes (we have sweatshops in Asia for that) or fold them. But weaving is gone as an artisan's job. GenAI seems to be capable of that level of performance, and free-market enterprise is going to make sure the option gets used. So companies (read Brian's post) are, for instance, under pressure to replace their graphic artists with mostly GenAI. Klarna is the poster child for companies that reportedly have already done this. Go to a conference and you'll see more GenAI-generated 'cheap imagery' than you would like (a backlash is to be expected, though).

There are good reasons to be convinced it will not get very far. For one, how much room for scaling we still have is a big question mark. How many niches can be conquered by 'good enough' is also a question mark. How much we accept 'cloning human skills (often poorly)' is also a question mark for the long term. As always with the future, there are many questions, few answers.

PS. "If having the equivalent of super smart digital servants for your boring desk job isn't increasing productivity, the question is why?" Because that is AGI.

I read Merchant's piece. He does a good job of slightly but not overly frightening everyone lol. This is just the over-capitalized one percent, and he's right. If they need to show more women and minorities, suddenly we have women and minority CEOs, while the voting board members never change. If it's take-your-dog-to-work day and it might affect productivity, your boss made a million dollars from the Humane Society or what have you.

So yes, "good enough" might work. But I would bet everything I own (which isn't much) that total jobs in ten years will compare favorably to today. And a lot of those jobs will be "white collar." The real battle isn't with AI--because it can't be trusted, no boss will bet the farm on it replacing her best analysts, etc.--it's with where jobs are moving given the global supply chain. I've started to notice something that I haven't quantified yet: wherever "AI" goes, rich people get richer, and no one "cares" (they do) because there are actually more jobs. Let's pivot: we're creating bullshit jobs at a phenomenal pace. But employment is not the right button to push. Anyway, much of what he says I agree with, but his claim, to my mind, misses the mark.

Yes, late stage capitalism is indeed a problem. Though earlier stages had their issues as well. The oil and steel barons of the early 20th century are like the tech barons of today. And guess how they generally stood with respect to a rule-of-law democracy...

Right, but there's a pretty obvious difference between the two cases. Weaving machines don't occasionally make a dress out of prickly pear skins. "Hallucinations" aren't a "good enough" response in industry, they're a deal killer.

I don't think the 'failed approximations' of GenAI are by definition a deal killer. There are many deals where a 'good chance of success' is good enough.

See https://ea.rna.nl/2018/04/04/something-is-still-rotten-in-the-kingdom-of-artificial-intelligence/ (you can skip to "Great expectations, round 2. Failure ahead?"). This is pre-GenAI, but still valid:

"But sometimes it is enough for statistical methods to have very small effects to be useful. So, say you want to influence the US elections. You do not need to convince everybody of your message (fake or not). You maybe can swing an election a very worthwhile 0.2 percent by sending negative messages about a candidate to a selected group of his or her supporters. Suppose you can target a specific 22 percent, say black people who support your opponent for 90 percent. You get 50 percent of them to see a message that puts the candidate in a racist or antiblack context. If this suppresses the turnout of 80 percent of those that saw the message by 10 percent while it increases the turnout of 5 percent of that same group by 60 percent (as they are really angry about the unfairness of the message), then you have just created a 0.2 percent lower poll result for your opponent. A couple of such differences may win you elections. This is not farfetched. Such weaponized information has been used in the US 2016 election and in the Brexit referendum, where very small effects like these have apparently had a big effect on the outcome. It gets even better the more ‘micro’ the targeting becomes."

I don't disagree. I'm trying to widen the discussion about jobs to include why this supposedly "revolutionary" technology isn't really causing a revolution. The Economist ran a piece on the "missing" revolution. It's quite good, and it adds some texture. I don't think the productivity discussion ends the "good enough" tit for tat we've been having, but it is interesting that, to date, there's very little data suggesting anything but more of the same:

"What Happened to the Artificial Intelligence Revolution?

So far the technology has had almost no economic impact."

https://www.economist.com/finance-and-economics/2024/07/02/what-happened-to-the-artificial-intelligence-revolution

I am just trying to be more careful than I was when I criticised the dot-com hype nonsense in the late 1990s 😀. There is a lot of nonsense now that can rightly be criticised, but I don't want the nonsense to blind me to what is realistic (like Brian's observation, which I think is a brilliant addition to the discussion). Still, I fully expect a dot-com-style crash, because the nonsense is ... nonsense.

Thanks for this overview, Eric. I haven't closely followed developments over the last few years, but all this rings true. The hurdles Marco lists here seem insurmountable, for reasons Dreyfus and other Heidegger-influenced, phenomenologically oriented critics of AI pointed out 50 years ago. One way to put the matter is that the world most originally "shows up" for an intelligent agent as affordances. Agents have INTERESTS, which color the world in hues of good and bad. Machines do not. Ultimately, having interests is rooted in mortality, and a preference for life over death. A can opener is indifferent to being in a state of good function or being broken -- that distinction only arises for someone else, who has a purpose in using it. That purpose provides a teleology for the can opener.

Per Heidegger, there is another layer to our world-apprehension when we overlay experience with concepts and representations, for example when we substitute the entities posited in mathematical physics for first-order reality. To get mathematical physics off the ground, you have to begin with idealizations such as the perfect vacuum, the point mass, the perfectly elastic collision, etc. Heidegger calls this "a projection of thingness which skips over the things." Needless to say, AI can only traffic in such projections. This would seem to be the fundamental limit AI comes up against. Because, as Jaron Lanier once said, "what makes something real is that it can't be represented to completion." Your model will always be incomplete, and probably in ways that will turn out to be consequential. Of course, none of this will prevent Artificial Stupidity from getting woven into our common life and making things even more dysfunctional than they already are. There is just too much wealth transfer made possible by cowing credulous journalists with talk of inevitability, and by tapping into the anti-humanism of our prevailing anthropology.

Hi Matt,

Your points are well put. I certainly agree. As I was saying to Marco (he gets doubly referenced!), there's a "firstness"--to borrow from Peirce--about consciousness and mind, and agency is basic like this too. It's the basic connection to the universe through mind; this is something that I think may prove, in the end, impossible to engineer, as it sits, as it were, "underneath" the very engineering project. Keep writing on agency! I get wonderful clarity and insight from your stuff.

Yeah, I've been trying to tell the hype boosters, starry-eyed junior devs, and low-info tech optimists this since the first impressive LLMs reached the public years ago. But they never listen. We're already past peak hype, and I still have people talking to me matter-of-factly about what AI is going to do for them in a few years, making serious plans and investments around it as if it's a done deal. Never mind that I know more about it than anyone else they listen to (and I only work on ML as side projects currently). They're going to listen to the techbro on YouTube who has been wrong about literally everything he's ever said, or the tech journos who can't work their iPhones and are just punching clocks waiting to find agents for their manuscripts.

This was a totally avoidable self-own on the part of the tech industry and AI research in particular. All they had to do was rein in the PR spin doctors and set reasonable expectations. But they couldn't help themselves. Why sell some novel, marginally useful software when you can sell computer magic?

Well put!

Observing the tech industry for some time, it seems completely natural that it lives on hype - which keeps the venture capital rolling in, the entrepreneurs rich, and an army of tech workers enthused. So now it's AI as the new big thing - and everyone is so busy that no one asks the question: what's it all REALLY good for?

I couldn't agree more. I wonder if we're shaping our society around what the AI we've built is good at, forgetting that it might not be beneficial everywhere and there might be better ideas.

I know I sometimes sound nihilistic, anarchistic or utopian - but actually, as an engineer, I appreciate technology and innovation as natural to man. It has always been that way, long before the industrial revolution and the IT revolution building on it - and most important discoveries in mathematics, physics/astronomy, medicine and chemistry came from the East (the Middle East through Persia and India to China). Now one may argue that these countries missed the bus on the industrial/IT revolution - but even leaving aside that these traditional sources of innovation were typically laid waste by colonization, the character of innovation was different in the past, because the society structure and economic system were different.

What really explains our current "modern civilization" is the concentration of property (starting with land) in few hands, and using that advantage to multiply the property, be it by colonization, industrialization or innovation - the driver being profit and growth for the elite, indeed spawning new elites for new disruptive technologies, which the old elites then gravitate to as fast-followers. As a result, technologies developed mainly to aid this, be it mass production, automation or AI (and they are all inter-related) - all intrinsically serving to expropriate and marginalize large parts of society, so the elite can be done with them when they can't be exploited more (WEF vibes here). That's the driver for AI, too - get rid of knowledge workers so you don't have any hassles with them - it's even better than outsourcing. All the dreamy-eyed hype of a superhuman intelligence better than us that becomes a new God is just PR.

Note that in this line of reasoning not all modern technology is bad - e.g. medicine, transportation, or IT in its basic function of enabling communication and access to knowledge - but the main driver is deeply evil, and no one questions it. It's not that difficult to conceive of criteria characterizing "good technology": it should offer meaningful NEW or BETTER benefits and provide meaningful work - versus disrupting society, disempowering/marginalizing/replacing humans in their traditional tasks (which one can consider their due right), and generating immense wealth for a small elite. Note also that all of this is reflected in geopolitics, too - that exploitative system reaching out across the globe (versus the beneficial trade that has taken place to share abilities and resources since pre-historic times) and needing military power to assert and sustain it.

I think that's what it is too. I know people who are in that 1% (not me!). They're not bad people, but there's a deep structural issue with late-stage capitalism, where it's having these winner-take-all effects that are not healthy. I totally get it.

One way I look at the question of "AI" is as a huge agglomeration of human interests. All the folks putting out the latest AI are, quite literally, billionaires. Note how "AI" is used on Wall Street to concentrate money. I've been grabbing this thorny branch where the questions are about centralization: why is "AI" doled out to folks from central points in the network? Why don't we treat AI as science and fund it in academic labs, in garages with new ideas, everywhere? In a sense we still do this, but this is really an era of the 1%. I keep writing about this. The question of "AI" is complicated by basic human issues.

Don't get me wrong, AI is a natural human innovation - one I am also interested in, not only from my engineering side but also my philosophical side. But leaving aside the toxic shape our capitalist culture is giving it (including seeing it as a weapon in the conflicts IT ENGINEERS), there is IMO a deeper structural issue. Part of the toxic degeneration of our culture is that we are simply all much dumber than our forefathers when it goes beyond the limits of our knowledge - creativity and deeper understanding more or less lost - just read the Upanishads and the Mahabharata and you will know what I mean. Thus, simple questions relating to how our cognitive system works (intelligence, consciousness, character) have REGRESSED to a laughable state in modern philosophy, psychology, and neuroscience - all lost in meaningless details and complex hypothetical models of very limited value. If one accepts this premise, it should be clear that the basis on which AI development is taking place is woefully inadequate - but still enough to make a big business of it.

Right. As a science it's laughable. The human brain vastly outperforms AI, and uses the energy of a standard light bulb. It's "still enough to make a big business of it," exactly as you said! That's why we have to stand up and speak and write about these issues! Thank you, Martin!

I commend your optimism and activism - but more realistically, we are ALL in need of clarifying what we are, what we want, and what we can do - in specific areas like AI, but always in the context of life in general - each for himself, and then maybe in a larger context, gaining synergies and critical mass for change, all as a never-ending learning process (possibly, ideally, shared). OTOH, that "larger context" is quite immune to such a thoughtful approach - it will at best listen if you come up with something that might improve their money-making machine, or reduce risks to it (and them).

You should see biotechnology!

Tell us!
