30 Comments

A few years from now, I'll be able to proudly say: "I was lucky enough to find Erik J Larson's blog during the AI hype. Yes, *that* Larson, Larson's paradox etc. He was right all along."

What you explain in such a clear way seems so true I sometimes worry there must be some caveat to it :D Seriously though, I still cannot understand why you're still the only one to date I've encountered who talks about abduction (put so nicely in today's piece "the hard problem"). Maybe others purposefully ignore it because it's too hard?


Thank you!

Hmmm. It's interesting about "abduction." My understanding is that what someone like Yann LeCun is gesturing toward with his talk about the importance of understanding causality is just another way of referring to abduction. The community seems to prefer big-picture themes: "understanding," "causal reasoning," "common sense understanding," and so on. These are all subsumed under some type of inference, presumably, and the only thing I can think of would be something like abduction! But the term is perhaps associated more with philosophy and Peirce's ideas than with something new and applicable to AI. Also, researchers DID work on abductive reasoning in years past, with Abductive Logic Programming and so on. And more recently the Allen Institute for AI did some work on it under the guise of Bayes nets. I call all of this "abduction in name only," because it's simplified in one way or the other. So, yes, I think the issue is just ignored because it's too hard, or it's captured by the other terminology.


Worthy of being widely read.


Thanks, Gerben!


And thank you for the wonderful phrase "engineering the hell out of it." I love that!


Erik,

Fantastically timely for me (I know how important that is to you!). Seriously, you directly address questions I had on reading the Gary Marcus post with which you begin, and which inform a conference I'm planning on attending. And, as one of the commenters says, it really does deserve to be read widely. I will respond/incorporate in my own substack, but for now, two thoughts.

Let me make a devil's advocate point on behalf of the legions of folks calling for neurosymbolic approaches: might we hope that probabilistic neural approaches, based on large data sets of human interaction, could inform guesses at relevance? Probably not true inference, but workable mostly, at least enough to prompt more formal methods?

Second, and more deeply, maybe it's not a bad thing that we have no idea how to get to AGI properly understood? Do we really want AGI, and if so, why? Other than that it's "next," and that we are slaves to notions of material progress.

That is, might we imagine a more humane version of technology/culture (towards which you've been thinking/writing) as expressed just along this divide between "problems we can engineer the hell out of" and general problems, problems of meaning, usw.?

At any rate, fantastic work. More anon. Thanks.


Hi David,

I'll take a stab at reproducing my comment to you that I somehow erased. This is the gist of it anyway!

On your first question, I'm reminded of Galileo's nifty reductio of Aristotelian physics (heavier objects fall faster). Let A be a small rock. B a larger rock. C = A + B.

Now assume Aristotle's physics.

A falls more slowly than B because it's lighter. Conversely, B falls faster than A because it's heavier.

Tie them together to get C. Now:

C should fall more slowly than B because A is now slowing down B (A falls more slowly).

C should fall faster than B because A has made C heavier than B.

But that's a contradiction! There must be something wrong with the premise.
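
Put in shorthand (just my own schematic of the same argument, writing w(x) for weight and v(x) for natural fall speed):

```latex
% Aristotle's premise: heavier implies faster, i.e. w(x) > w(y) \Rightarrow v(x) > v(y).
\begin{align*}
  w(C) = w(A) + w(B) > w(B) &\;\Rightarrow\; v(C) > v(B) && \text{(C is heavier than B)}\\
  v(A) < v(B),\ \text{so A retards B} &\;\Rightarrow\; v(C) < v(B) && \text{(the slow rock drags on the fast one)}
\end{align*}
% Both cannot hold, so the premise "heavier implies faster" has to go.
```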

Not an exact fit, but.... Anyway, the worry is simply that mishandling of neural-system errors results in a greater number of errors, total. In cases where you can tailor-fit the symbolic system to the problem, as with the International Math Olympiad, you can be reasonably assured that A + B = C will be truthier than A or B alone. In other cases, C might be less truthy, because the errors passed between subsystems get propagated and then multiplied.

In any case, the claim about hybridization of AI involves generality. We want to build systems that move us along the path toward general intelligence (so the line of thinking goes). In this case, even given the SUCCESS of some hybrid system C (it solved the "error propagation" problem), there's no argument for its generality. In fact, if it solves the error-propagation problem, it's likely because it's NARROWLY fit to the problem. That's moving in the wrong direction, clearly.

Hybrid systems are ubiquitous. Good systems don't propagate errors. But again, they're also narrowly fit to the problem/domain.
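
If it helps, here's a toy sketch of that worry (purely illustrative; the numbers and the two stages are my own stand-ins, not a model of any real hybrid system): a "neural" front end that's right 90% of the time feeds a "symbolic" back end that reasons flawlessly over whatever it's handed.

```python
import random

random.seed(0)

def neural_stage(x, accuracy=0.9):
    """Toy perception module: returns the true value with probability
    `accuracy`, otherwise a corrupted one (off by one)."""
    return x if random.random() < accuracy else x + 1

def symbolic_stage(y):
    """Toy reasoner: deductively sound, but garbage in, garbage out."""
    return y * 2  # applies the 'correct' rule to whatever it is handed

trials = 100_000
correct = 0
for _ in range(trials):
    x = random.randint(0, 9)          # ground truth
    answer = symbolic_stage(neural_stage(x))
    correct += (answer == x * 2)      # did the pipeline get the right result?

print(f"end-to-end accuracy: {correct / trials:.3f}")  # ~0.90, never better than the neural stage
```

The sound back end can't repair what the front end gets wrong, so the pipeline is capped by its sloppiest stage; chain two such noisy stages and the errors multiply (roughly 0.9 × 0.9 ≈ 0.81). The IMO-style systems dodge this only by keeping the noisy stage on a very short, domain-specific leash--which is exactly the narrowness point.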

I completely agree with your second question (or what your second question implies). I think it was science fiction writer and Wired contributor Bruce Sterling who said, decades ago, that human-level intelligence is a pointless technological goal because there's no business case for it. We already have 8 billion awesome human brains (some aren't so awesome...). What we tend to want with AI is superior performance on some dedicated task that we no longer want to do, or that would increase profit if performed by technology rather than a person who farts and leaves early and demands health benefits. In other words, the business world would love to replace workers with cheaper alternatives, by offshoring or by building robots or what have you (if Martians could be used, big corporations would use them). These aren't general intelligences (who might start complaining about working conditions, after all). They're technological replacements fit to specific areas of concern (like a manufacturing assembly line). So it's unclear we even WANT general intelligence. It might always be cheaper and more technically feasible to design technology for specific purposes, which means it will ALWAYS be "narrow."

Lastly, AGI enthusiasts typically argue that getting AGI means a coming "intelligence explosion"--and who wouldn't want one of those? (Or: the end of the world is nigh.) But the argument that AGI brings an intelligence explosion is suspicious to me. [It goes like this:] First, we can make copies of the AI (at very little cost, if it's just code). Then those copies can start looking at each other and reverse-engineer how they work (we've been staring at the brain for centuries; our general intelligence doesn't seem to shine a light on how our brains think). Then they can engineer "smarter" versions of themselves. Right. Here's my question: HOW??????!!!!! How does something with intelligence "A" reverse-engineer itself and then create intelligence B, where B > A? We have no examples of this anywhere in nature. What magical powers give AGI the ability to make something smarter than AGI? This idea that AGI will somehow "get smarter" than humans once it reaches human-level intelligence is a kind of shibboleth with the sci-fi AI crowd. No one has ever produced a convincing argument for the claim, let alone a blueprint. It just sits there, sure as God made little green apples :)

I hope I answered your questions! I love these lines of thought!


Erik,

Thanks for this. I may ask you for permission to fold some of it into a future Intermittent Signal.

With regard to your last point: my guess is that the proposition that, as you phrase it, "getting AGI means a coming 'intelligence explosion'" has several roots. For a few human generations now, computing power has increased with processing speed. Later computers are more powerful, not exactly AGI but sorta "smarter," no? Second, as we've discussed, LLMs in particular have sort of assumed that neural nets plus data scale would somehow (HOW? you will say) lead to "intelligence" as a sort of emergent phenomenon. As is now clear, whatever emerges is so data-, compute-, and energy-hungry that it is very different from human intelligence. Third, maybe more deeply/less consciously, there seems to be a sort of Darwinian intuition: complexity will beget further complexity. But we seem to have no mechanism in CS akin to mutation or even sexual recombination to explain how changes in kind would be brought about purely mechanistically, i.e., without intervention of human designers, God in the machine as it were.


Great comments, thanks, David. I just wrote out a longish comment to you. It showed up twice; I deleted one--so I thought--and now they're both gone. Alas. I'll try to rewrite it later!


Erik,

I was at lunch with family, waiting on server, glanced over your long reply, which seemed very good, was planning to reread with care . . . and then it was gone. I thought it was sent to my email, but no . . .

Anyway, I hope you do find time to repost it. Although maybe not. In my memory, the lost post of Erik growing in stature, now right behind some of the missing works of Aristotle :)


lol.

I really did write out a long reply to you! I’m glad that you saw it! It’s gone, so I have to rewrite it. Thanks, David; the way that you look at issues is really important to me.


As an interested outsider, I am puzzled by why "the hard problem" is a problem. If the existing method, aggregating kludges to solve important or cool problems, works well, why is it necessary to try to create a mechanical or electronic replica of the operation of the human brain? Is it just that a manufactured brain, which I take "AGI" to be, would be cool? Do researchers think important problems will be solved that cannot be solved now with the combination of non-brain-like computing plus human brains? We have bio-brains in large quantity as it is, which evolved over billions of years, and some of them work pretty well. What drives the apparent urgency to make an artificial one? Why is that a Holy Grail? This is a sincere though naive question. What am I missing here?


Hi Contarini,

Great question. I think AGI enthusiasts assume that achieving human-level intelligence will bring an "intelligence explosion," which will give us "superintelligence." Superintelligence is a bit like having God-like cognitive powers, so the argument is that it's obvious we want control of that, because we can use it to solve currently intractable problems. To my mind this is a silly and one-sided take on problem solving. How would it work for solving world hunger? We have, as you put it, really smart people right now, and solving actual socio-political issues isn't an IQ test. It takes marshalling support, achieving a broad consensus, getting funding (which would involve possibly going out to lunch with the funders!), and, importantly, a focus on other technologies, like climate-friendly fertilizers and advances in irrigation. Assuming a Big Brain will do all this is just kicking the can down the road and forestalling our own progress.

I think there's also a "cool" component to the question, as in "can we design a system that's smarter than the designer?" I take this point, wearing a scientific hat. It's the question that obsessed Dr. Frankenstein and it does have a visceral fascination to it: are we THAT smart that we can design something that replicates our smarts? My answer is "Yeah, that's interesting in the abstract. Agreed." No problem in folks giving it a go. But let's not announce victory when it's clear that decades of trying aren't getting us even close. A reasonable position is simply that it's not feasible with computational technology (maybe biological tech?!!!). The other practical issue is that while we're playing Dr. Frankenstein (unsuccessfully), what other scientific problems are lacking funding and attention (back to world hunger, or...)? It's not like we have infinite resources and talent.

Finally, I think folks believe, understandably, that achieving AGI would create a cool sci-fi world for us. This is the flying cars scenario. I don't know what to say here; I can't police people's imaginations, but dreaming isn't engineering. If we want to create supersmart robots or flying cars, we should do what engineers do: evaluate options and try the most promising approaches. So far we seem to be filling the world up with gadgets, so the dream isn't reality.

One more thing:

LLMs are an interesting case study because they can answer pretty much any question you put to them, sometimes incorrectly and sometimes wildly off the mark, and they can also do superhuman tasks like summarizing a thousand pages of a legal brief. Still, they don't have this visceral excitement to them. We don't see labor productivity on the rise--why not, if we're working with these computer brains?--and we see signs now that investors are getting spooked, and that the LLM phase may be a bubble. So I don't think we have good intuitions about how AI would actually make the world better if it keeps getting "smarter." Maybe it makes the workforce and world "dumber"?

It may be that what creates the maximum "overall problem solving ability" is educating humans and finding best-fit technologies given interests and problems. If the goal is for me to do arithmetic quickly, accurately, and tirelessly, I'd go with a calculator (like the one on your phone). If the goal is to catch terrorists, I'd go with focused technologies used by seasoned and effective human analysts, together with diplomacy in certain parts of the world and so on. So, to put it bluntly--dreaming is one thing; what actually makes the world better and solves problems we care about is another.

This is a big picture question, I love it. I think your instincts on how something's "off" are spot on!


Thank you for this substantial response. You could probably use it as the first draft of a short post, entitled something like “Answering the Question from the Peanut Gallery”!

My sense is that AI will progress along a path of actual application to actual problems, and be incremental rather than explosive. It will be a tortoise-and-hare scenario. People trying to build an AI “hare” will spend a lot of energy trying to build something poorly defined, without a clear use case, because it is cool. That may lead to useful things along the way, and hey, maybe they will get to El Dorado. And the vision of something really spectacular may motivate energy, and funding. But the way to make AI better and better and more and more useful will more likely be countless improvements and kludges and refinements, until, some number of years from now, the tortoise will have racked up a lot of steady miles, and people will look up and say, wow, this whole suite of applications we have now, taken together, has transformed the world--there was no clearly identifiable moment of breakthrough.

That is more like how the history of general-purpose technologies, which I take AI to be, has actually worked.

Incidentally, I am writing a novel with an AI component to the plot. I have my characters avoid overly hyped approaches to solving their problems. They build up a massive, high-quality data set of interactions, then build up massive, fast processing power, and just build up a facsimile of human behavior by trial and error. On the outside the user says, wow, convincingly human; on the out-of-view side, what is happening bears no resemblance at all to human intelligence, let alone AGI. Your response seems to align with my layman’s understanding, so I will treat that as some indicia of vindication!

Looking forward to future posts from you, now that I have stumbled across your site.


I can’t guarantee anything, but I can get you plugged in to publishing if you have a good manuscript. Peace.


Thanks, appreciate the offer, and, of course you can’t promise anything. I hope to have a draft I could show to other people within a year. I do not have an agent, I know nothing about that world. So I will certainly get in touch with you then. Meanwhile, I will be following you on Substack, looking forward to many great posts from you.


When you get a manuscript, if you don’t have an agent, let me know.


Kudos and best of luck on the novel. Very important.


I’ve always thought that science fiction was more about people than about machines. The fact that we write fiction is, I think, just wonderful. And it’s not easy! Let me know how it turns out, please.


Will do, sir.


And my question is unfair since it is well below the knowledge level of you and the readers of your Substack. But if you have written something on point, or know of something, at a lower "reading level," I would appreciate a link.


I bought a copy of your book. That may do it!


Don’t worry about that, I also came without much prior knowledge :) Welcome to Erik’s Substack, we have a lovely little community here :)


Thank you, sir.


If you’re looking for some quick but solid technical background, Gerben Wierda did a great guest post here some time ago: https://erikjlarson.substack.com/p/gerben-wierda-on-chatgpt-altman-and?utm_campaign=posts-open-in-app&triedRedirect=true, and I also very much recommend his 45-minute video on the topic: https://youtu.be/9Q3R8G_W0Wc :)


As a mathematician, my first impression was that the AlphaProof system "seemed" like genuine progress. Theorem provers had never gotten anywhere close to solving ingenious problems like those of the International Mathematical Olympiad. On the other hand, you are perfectly right that the system is nowhere near AGI. So I am very skeptical that AlphaProof is not using hidden knowledge, that is, that it learned mathematics all by itself as DeepMind claims. Even AlphaGo, as a first step, was trained on actual games. AlphaProof took an average of one day to solve a problem at the level of the IMO, so training the system on problems closer and closer to that complexity should require dozens of years. What are Google's engineers failing to tell us?


Wow, thanks for this Federico! This is good stuff.


Another insightful post that goes to the root of the problem. Your line of reasoning reminds me of a paper by Perry Marshall, who argued that the difference between the living and the non-living is that the former is capable of inductive reasoning while the latter is limited to deductive reasoning only: https://www.sciencedirect.com/science/article/pii/S0079610721000365


This looks like a good paper, Marco. I'm interested in the biological roots of cognition, and the notion that we can't get a full reduction seems spot on. Thanks for sending.


Erik's attention paid to abductive reasoning (in his book and in general) is an important contribution.
