29 Comments
Jun 9 · Liked by Erik J Larson

Love the post! And “Larson’s paradox” is a dope name, I’ll for sure be using that one.

Jun 9 · Liked by Erik J Larson

Me, too 😊

I'm a simple man.

I see AI, I see a socially acceptable method of throwing up your hands and tossing responsibility to your new silicon lord and master.

https://argomend.substack.com/p/the-church-of-ai

Alignment problems are the abdication of goal-setting to an "impartial, objective, and rational" entity and finding out that without human problems, it comes up with inhuman solutions.

author

I agree. Technology doesn't "think for us." It helps us to do our own thinking.

Surely the ideal position, but as things get harder and we're expected to do more with less, the temptation and the ability are there to hand it off - a temptation that at least some will take.

Great article, really enjoyed your book as well.

Jun 9 · Liked by Erik J Larson

I once heard Marvin Minsky say “The mind is what the brain does.” Strikes me that this follows from the premise of scientific materialism. You say “We can’t get a mind from a mechanism.” Are you saying this as an inherent limit from a premise? Or we just can’t currently do it? It’s interesting to think about the possible limitations of AI if the mind has a non-material aspect. Has your thinking on AI ever caused you to question scientific materialism?

author

For me, this is a great question. The short answer is simply that whatever is happening with human brains and minds and consciousness, electronic business machines (computers and AI) are unlikely to demystify all that. The big mystery is how we can use, comparatively, hardly any data to think and draw conclusions in our world, while our power-hungry computational systems simulate some of our intelligence but only after draining so much power off the grid that the lights flicker in mid-size cities. Big statistical analysis is not mind, and though I won't go so far as to say there's an inherent limit (something like Gödel's Theorem for AI), I would suggest that the technology is a poor candidate for whatever organic, live intelligence is. Given quantum mechanics, too, "scientific materialism" as atoms and chemical bonds and so on seems a bit reductive. We don't really know much about mind. I would suggest--though I can't prove--that it's not much about mechanical contraptions, no matter how much computer power we bring to the experiments. It's something else, and if there is a Holy Grail for AI researchers it won't be found by hoovering up more data and getting another billion dollars in investment for server farms.

Jun 9 · edited Jun 9 · Liked by Erik J Larson

Interesting points and I agree that *so far*, what is loosely called "AI" (generally experienced by most of us by having a surprisingly life-like conversation with some faraway computer) doesn't come close to what we would call "intelligent", in a you-know-it-when-you-see-it sort of way.

But you haven't sold me on the idea that "AI" *couldn't* get there, in some future iteration, unless you lay out an argument for why it's logically impossible. And since we're confused by what emergent properties are waiting around the corner, and by the lack of a rock-solid definition of "intelligence", I'd be curious to hear that argument. Not so long ago, we felt quite certain that animals couldn't use tools, come up with new traditions, make fun of other animals, count or whatever. And of course, early automatons and XVIIth-century calculators blew people's minds. Nowadays, we are chastened and less sure that we have a protective "moat" around our humanity. So yes, current AI has some ways to go to earn its moniker, but it's moving darn fast.

author

Hi Kean,

Thanks for the thoughtful remarks. As it turns out, I wrote an entire book (The Myth of Artificial Intelligence) on why AI as we understand and practice it today can't get us to genuine intelligence--what we now call "AGI." I won't ask you to buy my book, and I'll try to dig up a prior post here on Colligo where I summarize the argument, but in essence, the type of inference undergirding the statistical analysis of data is inductive, and inductive inference is not even in principle capable of reproducing human-level intelligence.

Humans also use deduction, and most importantly we use something called "abduction," or inference to the best explanation, which is not programmable by any known current methods in AI, and certainly not by anything we're doing lately with "big data AI."

In lieu of the book or the prior post I haven't yet dug up, you can think of induction as the use of prior observations (examples) to generalize to a rule or prediction. The classic example is observing a thousand, then ten thousand, then a hundred thousand swans, noting that they all have the property of being white, and concluding that "All swans are white." This analysis of prior observation (data) to generalize to conclusions is classic induction. And, as surprising as this may seem at first, this is exactly the type of inference that we scientists in AI use; it's what makes machine learning generally "work," and it's known to be inadequate for human-level intelligence.

When we switch from induction or deduction to abduction, we in a sense switch from the analysis of data to seeing "clues," and this cannot be reduced to large data sets (in fact, hypothesizing and seeing "clues" can become more difficult with more data, a clear break from what we're doing in Silicon Valley to convince the world we're making intelligence). So we do, in fact, know we're not on a path to AGI as of now. As to whether there will be some future, still unknown innovations to get us there, I have no crystal ball!
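
In toy form, the swan case looks something like this (purely illustrative Python; no real ML system is this crude, but the inferential shape is the same):

    # Purely illustrative: enumerative induction over observed examples.
    observed = ["white"] * 100_000            # a hundred thousand white swans

    def induce_rule(observations):
        colors = set(observations)
        if len(colors) == 1:
            return "All swans are " + colors.pop()
        return "no single rule survives"

    print(induce_rule(observed))              # -> All swans are white
    observed.append("black")                  # one novel observation...
    print(induce_rule(observed))              # -> no single rule survives

No quantity of white swans could have ruled out the black one; the method can only project the patterns it has already seen.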

I hope this helps, and I'll try to find a previous post on Colligo where I explain the inference framework and intelligence in more detail. I would also encourage you to check out my book if you're interested. Thank you again, Erik.

Jun 9 · edited Jun 9 · Liked by Erik J Larson

Erik's book is a worthwhile read. https://ea.rna.nl/2021/08/21/review-of-the-myth-of-artificial-intelligence-a-conversation-with-erik-larson/ might be a readable introduction :-)

author
Jun 10 · edited Jun 10 · Author

Thanks, Gerben, you're a true expert on computation, software, and all things AI! I encourage readers to check out Gerben's thoughtful writings; here's a discussion of ChatGPT: https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/

I think Erik meant this one: https://ea.rna.nl/the-chatgpt-and-friends-collection/ (the summary story is part of that collection).

Jun 10 · Liked by Erik J Larson

Thank you for addressing my question, and I realize that I’m catching a long-running show while having missed the first two seasons. So, the book is on order.

But let's go with your thumbnail sketch of the argument: that there are qualitatively different operations of the mind - deduction, abduction - that current ML approaches can't touch (and I'll wait for the book to dig further). Still: couldn't someone be cooking up some "machinery" at this very moment that can do precisely these things?

More importantly: when, in a "regular" encounter with "intelligence", do these things come into play? Or, how far can I go into a conversation with a chatbot that "only" runs on induction before finding it too dumb because it can't do abduction or deduction? I find general-purpose LLMs pretty darn impressive. How much better do they need to be, to be "intelligent enough"? Sure, they can make stuff up with total aplomb, but I had a boss who could do that and I'm sure she was human. Or to put it differently: in our daily lives, we engage with people in framed and often quite scripted situations ("white or wheat? Swiss or Cheddar?"). We might find that so-called AI is quite intelligent enough for that sort of stuff, already. It's like the "fake Louis Vuitton bag" argument: if you can't tell it's a fake, why do you care?

author
Jun 10 · edited Jun 10 · Author

Hi Kean,

I completely understand, and I want to validate your concerns! This is what I see most readers worrying/wondering about. Seems like it works? And humans make stupid mistakes too. It's a huge topic, but I think at this point what I'll offer is: "why don't self-driving cars achieve fully autonomous (Level 5) driving?" The answer is that you can't look at "previous examples of successful driving" and use neural networks and reinforcement learning to make the future always look like some discerned pattern in the past. Novel situations are the downfall of inductive methods. On the web, we can "draw a circle" around so much written and multi-modal human production that the models can look creative. They're not out in the nearly inexhaustible complexity and possible edge cases of the physical world we live in. That's one big reason we're talking about an Internet statistical mashup that's remarkably "intelligent," while hardly saying boo about autonomous navigation in the real, non-cyber world. Big subject! And I enjoy these conversations, so thank you. Erik

Jun 9 · Liked by Erik J Larson
author

Yep. This is the one!

author

Yes! This is the one and thank you!

Adding to my other comment: I think GenAI can stumble on abduction (I've tested this, and it does) just like it 'stumbles on' anything, really. There is no induction, abduction, or anything like it, but the results may approximate the results of real induction, abduction, etc.

In the early 1960s the argument 'we have a good model of intelligence, there is nothing that fundamentally stops us from building it based on digital computers, we just have to persevere' was considered strong. Quoting Dreyfus (1965):

[

Instead of trying to make use of the special capacities of computers, workers in artificial intelligence--blinded by their early success and hypnotized by the assumption that thinking is a continuum--will settle for nothing short of the moon. Feigenbaum and Feldman's anthology opens with the baldest statement of this dubious principle:

In terms of the continuum of intelligence suggested by Armer, the computer programs we have been able to construct are still at the low end. What is important is that we continue to strike out in the direction of the milestone that represents the capabilities of human intelligence. Is there any reason to suppose that we shall never get there? None whatever. Not a single piece of evidence, no logical argument, no proof or theorem has ever been advanced which demonstrates an insurmountable hurdle along the continuum.

]

and:

[

Enthusiasts might find it sobering to imagine a fifteenth-century version of Feigenbaum and Feldman's exhortation: "In terms of the continuum of substances suggested by Paracelsus, the transformations we have been able to perform on baser metals are still at a low level. What is important is that we continue to strike out in the direction of the milestone, the philosopher's stone which can transform any element into any other. Is there any reason to suppose that we will never find it? None whatever. Not a single piece of evidence, no logical argument, no proof or theorem has ever been advanced which demonstrates an insurmountable hurdle along this continuum."

]

The requirement that 'AI couldn't get there' must be proven logically impossible thus looks a bit like demanding *proof* that gold-coloured swans do not exist. It more or less asks us about something that is core to *our* intelligence: our convictions (https://ea.rna.nl/2022/10/24/on-the-psychology-of-architecture-and-the-architecture-of-psychology/) and demands a falsification strong enough to overcome that conviction (that, by the way, is not how convictions — our mental automation — work; we're all 'flat earthers' in a sense).

Besides, such a demand must be related to a form of technology. I am convinced AGI is possible. We are walking proof of the fact that physical intelligence exists. That doesn't mean that the current technological approach (deep digital neural nets — or anything 'digital' for that matter) will get us there. Beyond that, it might require a certain amount of 'Gestalt', more than we expect (i.e. it might not be entirely 'parallelisable'), and may only be doable with non-digital means.

Once you read “The Myth of Artificial Intelligence”, you’ll understand why.

I don't like the whole concept of "alignment." It somehow manages to anthropomorphize a fundamentally stupid algorithmic machine, suggesting that it is like a person and has goals that can be understood and criticized based on how well they coincide with human goals.

It's nothing like that at all. It's more like a perception problem, or conception, on the human side. A willful misunderstanding of what the things are, guided by industry PR and science fantasy. If they came out and said, "we've invented a statistical number sequence predicting machine that is surprisingly good at generating gibberish when we assign numeric values to chopped up bits of words" nobody would be afraid that it was going to take over the world, but would be duly skeptical of its truthfulness and accuracy.
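
A toy caricature of that description (my sketch, illustrative only; real LLMs are enormously more elaborate, but the "numbers in, statistically likely numbers out" principle is the same):

    # Toy sketch: numeric values for chopped-up words, then a table of
    # which number most often follows which.
    from collections import Counter, defaultdict

    tokens = "the cat sat on the mat the cat ran".split()
    vocab = {w: i for i, w in enumerate(dict.fromkeys(tokens))}
    ids = [vocab[w] for w in tokens]          # the text as a number sequence

    follows = defaultdict(Counter)
    for a, b in zip(ids, ids[1:]):
        follows[a][b] += 1                    # successor statistics

    def predict_next(token_id):
        # No concept of truth, no volition: just the commonest continuation.
        return follows[token_id].most_common(1)[0][0]

    print(predict_next(vocab["the"]))         # -> id of "cat"

Nothing in that table lies or plots; it just counts.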

But instead they called it "AI", because AI is sexy and futuristic. Everyone's imaginations ran wild, inspired by a century of science fantasy, and immediately started inferring human motives and shortcomings. It's not mindlessly generating bullshit, it's lying because it has a sinister agenda, or because its masters force it to!

Give me a break. In order to be capable of lying one must first have a concept of truth. In order to be capable of plotting one must first have volition.

The risk with "AI" is not that it outsmarts us, but that we outstupid it.

But what do we do now? Well, when in doubt… there is always the "revolution" option 😉 I still think that the best revolutionaries are the fit ones in their 70s… a way to go out with a bang! 💥 Why waste youth on power and ideological (set of aligned interests) struggles? It is the fit elders who should clean up their generation's mess! P.S. I am not yet 70… 🫣😇

author

Hi Jana,

Great to hear from you as always. Not sure I'd be for fomenting a revolution just yet! It may be coming regardless of what I think, however....

That's 'cause you aren't 70, and not even 60 yet 😉😊 Just joking.

founding
Jun 9 · Liked by Erik J Larson

Funny, but not out of the realm of possibilities. I've been going against the crowd of my fellow boomers since grade school. There is a substantial minority of boomers like me, and we'd like nothing more than to settle some accounts with the grifters and rent-seekers. If a revolution comes, I'm all for a mandatory draft for every man between the ages of 61 and 80. What a gift to our children and grandchildren. Every wind farm would be leveled, every piece of land seized by the greenies given back to the people. Social Security and Medicare would be saved. In honor of Erik, we'd spare half the data centers.

Maybe I am wrong, though I have been watching the field since Lisp-based expert systems, through neural networks, to today's craze, LLMs - but never was I convinced that they could do MORE than WE do - just faster and with more data. In other words, they also do the same BS - just faster - and using more BS input. Of course, the world being as it is, there is a lot of money even in that.

Erik -

What we do now is to invest in the humans who can discover new paths forward in knowledge.

Perhaps, being Czechoslovak, I have a romantic view of the Velvet Revolution we had in 1989, when we carried our house keys around and made noise with them… but yes, most revolutions aren't like that.
