8 Comments

Wonderful that Mr Marcus responded.

Also curious that I was bothered by the typos. Very authentically human, though.

What I keep finding confounding is that Mr Marcus keeps pushing an “I was right all along” narrative. For my taste, there is too much “I” and too much about being right in this.

Who needs to be saved here, and from what? Investors who are pouring money into AI, mostly in the form of LLMs? Let them do what they like. We could simply assume they are smart, or that they hire smart people to vet their investments.

Also, more pedantically and technically speaking, why can’t we focus more on what we actually know? It is November 18th, so… how can we know what is or is not going to happen by the end of the year? Maybe OpenAI will release their next model. That is still what they say they plan to do.

As for the “wall”, there is more than one take on this. For one, the people quoted are merely saying that naive scaling of pre-training may no longer be the best way to scale. But then there is inference-time compute, and that definitely has an effect.

See https://www.linkedin.com/posts/peter-gostev_i-was-watching-a-talk-by-the-nvidia-ceo-and-activity-7264031391821008897-Ii9c

Some (fairly smart) people say that o1-preview marks the crude start of a paradigm shift.
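To make “inference-time compute” concrete, here is a minimal, self-contained sketch of one common pattern (best-of-N sampling). The sample_fn and score_fn names are hypothetical placeholders for whatever model or API one actually has, not any lab’s real interface:

```python
# A toy illustration (not any lab's actual method) of "inference-time compute":
# instead of a single forward pass, sample several candidate answers and keep
# the one a scoring function likes best. sample_fn and score_fn are placeholders.

import random
from typing import Callable, List, Tuple

def best_of_n(prompt: str,
              sample_fn: Callable[[str], str],
              score_fn: Callable[[str, str], float],
              n: int = 8) -> str:
    """Spend n times the compute at inference and return the best-scoring answer."""
    candidates: List[Tuple[float, str]] = []
    for _ in range(n):
        answer = sample_fn(prompt)              # one stochastic generation
        candidates.append((score_fn(prompt, answer), answer))
    return max(candidates, key=lambda pair: pair[0])[1]

# Dummy stand-ins so the sketch runs on its own:
def dummy_sample(prompt: str) -> str:
    return f"answer-{random.randint(0, 999)}"

def dummy_score(prompt: str, answer: str) -> float:
    return random.random()                      # pretend this is a verifier/reward model

if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?", dummy_sample, dummy_score, n=8))
```

The point is simply that accuracy can be bought with more compute at answer time, independently of how the pre-training was scaled.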

And as we discussed recently somewhere here in the context of Erik’s post, overall, there are many people who are getting a ton of value out of LLM technology – in its current state.

OpenAI is also not just LLMs. After all, ambitious as it is, they aim to build AGI. Whether THAT is going to work out is a much more consequential and much more speculative question. We might live to see it.

If Mr Marcus turns out to be right, I am just not sure what has been gained.

If people approach LLM technology with a healthy understanding of how it works and what its inherent limitations are, and stay curious about how to further explore its capabilities, we might just get somewhere. Why not embark on a constructive and productive journey instead of repetitively pointing out that it won’t work?

Here’s where I’d challenge Mr Marcus:

"For over 30 years I have argued for integrating neural networks (of which LLMs are the most popular current example) with classic symbolic AI. "

OK, so just build it. How hard can it be? Since you are maybe THE expert on this particular perspective, its OG even, and you have a huge following, how difficult would it really be to work on that and get some funding, or however else you want to go about doing it?

I mean, LLM tech is being pushed forward very rapidly. So that half of your problem is solved by the very mechanism you are criticizing. Just take the fruits of that “fool’s errand”, leverage the falling costs (the “price wars” you predicted), and work on that neurosymbolic combo. Let’s go.
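To be fair about what “just build it” would involve, here is only a toy sketch of one such neurosymbolic pattern, a neural proposer checked by a symbolic verifier. The neural_propose function is a hypothetical placeholder for an LLM call, and a tiny arithmetic evaluator stands in for a real symbolic engine:

```python
# A toy sketch of one neurosymbolic pattern: a neural "proposer" paired with a
# symbolic verifier. neural_propose is a hypothetical stand-in for an LLM call;
# a small arithmetic evaluator stands in for a real symbolic engine.

import ast
import operator
from typing import Callable, Optional, Tuple

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def safe_eval(expr: str) -> Optional[float]:
    """Symbolically evaluate a small arithmetic expression (no arbitrary code)."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    try:
        return walk(ast.parse(expr, mode="eval"))
    except (ValueError, SyntaxError):
        return None

def solve(question: str,
          neural_propose: Callable[[str], Tuple[str, float]],
          max_tries: int = 3) -> Optional[Tuple[str, float]]:
    """The neural side proposes (expression, answer); keep it only if the symbolic side agrees."""
    for _ in range(max_tries):
        expr, claimed = neural_propose(question)   # e.g. ("12 * 7 + 5", 89)
        value = safe_eval(expr)
        if value is not None and value == claimed:
            return expr, value                     # symbolically verified proposal
    return None

if __name__ == "__main__":
    def dummy_propose(question: str) -> Tuple[str, float]:
        return "12 * 7 + 5", 89                    # pretend an LLM produced this
    print(solve("What is twelve sevens plus five?", dummy_propose))
```

The pattern’s whole appeal is that the statistical component generates candidates cheaply while the symbolic component provides checks the statistics alone cannot.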

Then post some updates about progress and how you are going to build powerful AI that is actually reasoning and all that.

You made your point. Time to build.

Author · Nov 19 (edited)

Hi Nico,

What amuses me is the sudden interest in denigrating "LLMs" as if they weren't a natural progression of machine learning. You can draw a rough but accurate line from the early 2000s, when companies first learned to group and classify content, serve contextual ads, and eventually improve search with machine learning methods, straight to LLMs. LLMs ARE machine learning! It's just what happens when we add more data and more compute.

So bitching about LLMs is essentially indicative of (a) the person hating machine learning and wishing for some throwback technology like symbolic AI or typewriters or what have you, or (b) the person being unaware that LLMs just are machine learning, in which case the critique is very broad and perhaps ill-considered indeed.

As for Gary, I can't speak for him, and it's clear he knows his technical information about LLMs and has been writing about them for many years now. But again, I find it interesting that LLMs are somehow sectioned off from "AI" on the web (data-driven AI) and treated like some boogeyman. LLMs are just machine learning that works better.

Are there problems? Yes: because they work so well, they deliver answers that we might otherwise have questioned but now take for granted. These sorts of cultural issues are, I think, very important, but the critique of "AI" today fails to acknowledge the facts on the ground. That's a bad start, so I'm trying to correct it. Thanks for your message.
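To illustrate that continuity, here is a toy sketch (not any production system, just counting) in which the same fit-from-data machinery handles both an ad-era text classifier and a next-token predictor; only the data and the scale differ:

```python
# Two tasks, one statistical idea: learn co-occurrence statistics from data.
# Task 1 is the early-2000s style text classifier; task 2 is next-token
# prediction, the core of a language model, here reduced to bigram counts.

from collections import Counter, defaultdict

def train_classifier(labeled_docs):
    counts = defaultdict(Counter)
    for text, label in labeled_docs:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    words = text.lower().split()
    return max(counts, key=lambda lbl: sum(counts[lbl][w] for w in words))

def train_lm(corpus):
    bigrams = defaultdict(Counter)
    for sentence in corpus:
        toks = sentence.lower().split()
        for prev, nxt in zip(toks, toks[1:]):
            bigrams[prev][nxt] += 1
    return bigrams

def next_token(bigrams, word):
    followers = bigrams[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

if __name__ == "__main__":
    clf = train_classifier([("cheap flights to rome", "travel"),
                            ("new laptop gpu benchmarks", "tech")])
    print(classify(clf, "flights to paris"))     # -> travel
    lm = train_lm(["the spoon stirs the coffee", "the coffee is hot"])
    print(next_token(lm, "the"))                 # -> "coffee"
```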

Nov 19 · Liked by Erik J Larson

I would say that discussions about any AI/LLM wall, and especially an “AGI/commonsense wall,” have to be based on a closer look at and a far better understanding of the brain and perception, i.e., why the brain is so entirely different from an AI/computer. Take a simple event like “stirring coffee with a spoon.” In the framework of ecological psychology (that of J. J. Gibson) the event is described by invariance laws – it has an “invariance structure.” Here’s a partial list:

• A radial flow field (an array of velocity vectors) defined over the swirling liquid.

• An adiabatic invariant carried over a haptic flow field re the spoon, i.e., a ratio of energy of oscillation to frequency of oscillation (Kugler, 1987).

• An inertial tensor defining the various momenta of the spoon, and specific to the spoon’s constant length and mass (Turvey & Carello, 1995, 2011).

• Acoustical invariants.

• Constant ratios relative to texture gradients (the table surface, the cup as it turns) and flows, supporting the form and size constancy of the cup as the cup (or the eye) moves over the surface.

• A ratio (“tau”, the time derivative of the inverse of the relative rate of optical expansion) related to our grasping of the cup (Savelsbergh et al., 1991); see the formula sketched after this list.

• And more…
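For reference, a minimal statement of the “tau” quantity as it is usually written (following Lee, 1976; conventions for the derivative vary slightly across the literature):

```latex
\tau(t) \;=\; \frac{\theta(t)}{\dot{\theta}(t)} \;\approx\; \text{time remaining until contact,}
```

where θ(t) is the optical angle subtended by the approached object (here, the cup) and θ̇(t) is its rate of optical expansion.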

Note, this is a multi-modal event; the invariants are amodal – they are coordinate across modalities as the event unfolds – the acoustical clinking/frequency changes are coordinate with the adiabatic change, with the inertial change, and so on – an intrinsic “binding” of all aspects of the event.

The brain’s response (with its integral tie to the body’s systems for action) to this event, where the invariants are preserved over time, over the continuously changing event (a continuity, note, which is NOT a mathematically dense continuity of points, instants, “states”) – this response Gibson was forced to view under a “resonance” metaphor – the brain resonating, also continuously, to this structure, and “specific to” or “specifying” the perceived, external, ongoing, coffee stirring event.

This invariance structure underlies our knowledge of “coffee stirrings,” and of course our understanding of sentences, e.g., “The man stirred the coffee with the spoon.” How this event structure is “stored” in the brain is a massive problem, but the word-vector spaces of LLMs are obviously nothing like this event – they bear no resemblance to this dynamic structure – in effect they are but an extremely ramped-up version of 18th-century associationism. Inertial tensors and adiabatic invariants are not going to be stored in a vector space.
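As a concrete contrast, here is a toy word-vector space (the numbers are invented for illustration, not taken from any actual model). All it can express is directional similarity arising from co-occurrence; nothing in the data structure corresponds to a flow field, an inertial tensor, or an adiabatic invariant:

```python
# A toy word-vector space: similarity is just the angle between vectors learned
# from co-occurrence. The values below are made up for illustration only.

import math

vectors = {
    "spoon":  [0.8, 0.1, 0.3],
    "stir":   [0.7, 0.2, 0.4],
    "coffee": [0.6, 0.3, 0.5],
    "heron":  [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(vectors["spoon"], vectors["stir"]))    # high: similar contexts
print(cosine(vectors["spoon"], vectors["heron"]))   # low: different contexts
```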

AI hallucinations, from this perspective, seem another variant of the frame problem. We are stirring the coffee and suddenly the liquid rises up in a column, say, two inches above the cup’s rim, then falls back, then another “pulse” back up two inches, then down... For a robot, is this pulsing column an expected event while stirring coffee? In the symbolic AI world, this was once considered a matter of checking one’s frame axioms – the huge list of things remaining unchanged during the event, e.g., the cup’s form, the spoon’s length, the kitchen floor’s stability, the current president of the US… For the human, this liquid-pulsing event will simply not “resonate” with all the rest of our [stored] experience of coffee stirrings. It is an instant, felt dissonance (yes, an intrinsic intentionality). ChatGPT, however, will likely “solve” this too, if asked, “Is something wrong here?” via its vector space (one can imagine the answer!). But what if its training set contained enough sentences/docs with whimsical descriptions of coffee-stirring events featuring pulsing coffee columns? These would be incorporated as part of the AI’s coffee-stirring “knowledge,” but this knowledge is obviously nothing like the human, ecological experience (knowledge) of coffee stirring, with all its forces, etc.
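To make the frame-axiom point concrete, here is a toy rendering of that old symbolic bookkeeping (the names and values are invented for illustration):

```python
# A toy "frame axiom" check for the coffee-stirring event: the symbolic approach
# enumerates what is supposed to stay unchanged, then compares each axiom against
# an observed snapshot of the scene. All names and values are illustrative.

FRAME_AXIOMS = {
    "cup_shape":         "cylindrical",
    "spoon_length_cm":   15,
    "liquid_max_height": "at_or_below_rim",
    "floor":             "stable",
}

def check_frame(observed: dict) -> list:
    """Return the axioms violated by an observed snapshot of the event."""
    return [key for key, expected in FRAME_AXIOMS.items()
            if observed.get(key, expected) != expected]

# The pulsing-column scenario: liquid rising two inches above the rim.
snapshot = {"cup_shape": "cylindrical", "spoon_length_cm": 15,
            "liquid_max_height": "two_inches_above_rim", "floor": "stable"}
print(check_frame(snapshot))   # -> ['liquid_max_height']
```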

This is near the core of the LLM problem with commonsense, but it is just the beginning of the differences if one takes a closer look at the science of perception; it is much worse. AI is actually marching blithely along with no theory whatsoever of the origin of our image of the external world – the coffee cup “out there,” the spoon stirring (a time-extended perception). That is, AI has (and thinks it needs) no theory of perception. Ever-changing bits in a computer are not a model of how we have an image of the external world. Gibson says the brain is “specifying” the event, is “specific to” the event. But what this means is far from the same as the computer model or anything the computer model could do (see an article on this “specification” in the journal Ecological Psychology, 2023 – “Gibson and Time”). This is ultimately an enormous difference underlying the “knowledge” structure of AI vs. humans, and the origin-of-the-image problem is foundational to the operation of our imagination – another thing AI feels it can do without.

Then there is the problem of the brain’s specification of a scale of time – at our normal scale, a “buzzing” fly, winging past the coffee cup, wings ablur. But this is just one of many possible scales of time at which the external matter-field can be specified. How about a “heron-like fly,” slowly flapping his wings (and concomitantly the coffee swirls slow down, the spoon circles more slowly, a drinking glass gets more vibrant, conversation slows…). This has been hypothesized to be the bio-chemical effect of LSD – for note, the brain is indeed a bio-chemical mass, and this is routinely ignored by the AI/computational metaphor (another article: in the journal, Psychology of Consciousness, 2022 – “LSD and perception”). Imagine the difference implied for the nature of the “device” that the human is vs. the computer/AI. Would an LLM now have to have a vector space for every possible scale of time?

In any case, these are just a few of the problems, and critics like Gary Marcus seem simply not conversant with, or not dealing with, this dimension. That’s my two cents.

Nov 18 · Liked by Erik J Larson

God’s blessings shine through you. 🙏❤️🇺🇸✝️

Founding member

Dr. Larson, I have thought about thanking you ever since I read The Myth of Artificial Intelligence, soon after it was first published. I think George Gilder recommended it. You introduced me in depth to the magic of abduction and Charles Peirce, which has been an enormous help to my own thinking and writing. I know little about the technical details or history of generative AI, so I was interested to learn about the current debate on its future.

After reading the response by Gary Marcus, the first thing I did was something I was taught more than 50 years ago by my Chief of Medicine when I was an intern: Check the assertions. In those days that meant going to the medical library and manually searching out the references, then seeing if they held water. Simple, but very time-consuming. But that's the way I was trained, and I can't help myself, so I checked out the only (I think) reference in Marcus' critique, to wit: The economist Brad DeLong recently argued “Misinformation decided US election. Polling data show that Donald Trump’s supporters were deeply misinformed about most of the campaign’s defining issues.”

So I went to the linked article, and actually read it. I was sorely disappointed, disappointed that Mr. Marcus would actually use this source as evidence to bolster his argument. It is easily demolished drivel, the kind that would not have passed peer review in a more rigorous time. My old chief, Dr. Fred, would have eviscerated Mr. DeLong at morning report for using this poll as "evidence." And I do mean eviscerate, but figuratively of course. As he said to me once in front of all the other interns, "Steve, we are not here to resect your testicles, merely to inspect them." Those were the days before CAT scans and MRIs and the amazing blood tests now available. Diagnostic decisions were truly life and death situations, and our patients did die or needlessly suffer if our thinking was disordered.

Back to the cited article: Trump supporters were supposedly ignorant of the fact that illegal border crossings had diminished markedly in the months prior to the election. The implication was that if they had known that, they might have voted for Harris. The other assertions were of a similar type, implying that Harris voters knew stuff that Trump voters didn't. Actually, I am quite certain that Trump voters knew something more important: that the administration deliberately clamped down on illegal border crossings in order to help them win the election. There was no way they were going to vote for someone so obviously crass and immoral. I know those people; I listened to them for half a century, and they are not stupid, nor are they ignorant of the important things.

It is really unfortunate that Mr. Marcus chose such a political partisan to make a political point. It reflects quite badly on his effort, and taints the remainder of his rebuttal. I much prefer your argument, and your intellectual honesty in making it.

Thank you again for helping me enjoy my senescence.

Steve Atcheson


Erik, wow, so much to disagree with, question, etc., so little bandwidth. For now, serious respect for just publishing the response. Bravo.


I chide Gary for never defining AGI. If someone were serious about AGI, they would define it.

Author

Hi Simple John,

Yeah. You know, the AGI issue is tricky. In the end I think what we're discovering is that AI can't ever be "general" like a human brain because, as you imply, the question is not well-defined. AI will get more powerful, but it's not a brain.
