This is a well-written essay and it’s clear he’s thought deeply about the topic. But I think he’s falling into a familiar trap: expecting too much from LLMs. He critiques them for lacking memory, persistence, and goal orientation. And he can hardly be blamed.
AI has been overpromised by hype men for more than a decade now. We were told to expect autonomous agents and digital second brains. What we got are extremely good text generators. Larson’s disappointment, I think, stems from that mismatch between the promise and the reality.
If we stop expecting LLMs to carry our minds and treat them as modular reasoning tools, things usually work out surprisingly well.
I’m not sure we need a whole new philosophy. Maybe just clearer boundaries and better-designed interfaces.
One more thought: systems that shift too much cognitive load onto users often fail, because the average user won't (or can't) maintain rules or tweak context contracts. We've seen this in educational tech, UI design, even productivity software.
AHI assumes a level of user initiative that, for most people, just isn’t there. And when systems overestimate their users, they get quietly ignored. That’s the real risk here.
Imagine your grandmother trying to manage persistent context settings across shifting epistemic contracts. Now imagine trying to explain that sentence to her.
Hi Tumithak of the Corridors, this is a nice critique. For what it's worth, I agree with you! I was thinking this through and thought I would throw it out as food for thought. I think it's going to put too much cognitive load on the user as well. By the way, I'm writing a book with this title, and this is not the same content; this is just me thinking out loud. I do think it's worth thinking these issues through, so I hope people interact with the post.
Hello Mr. Larson.
Thanks for what you are doing to keep an accessible, non-scientist conversation going about AI as we struggle to live in the addictive digital space. I do wonder if the passion over AI is a kind of substitute for an intrinsic need for contact with, or experience of, the numinous, i.e., the whole of life we are phenomenologically bound to, reduced to "parts," but in a reverse direction that sort of fantastically resurrects the machine without calling it religion. Do you find some of its proponents almost like believers devoted to a religious identity?
It concerns me what it might mean that we so easily equate our experience of consciousness with machine learning. In other words, not just that we are intensely curious about what is going on with AI, but that so many of us seem to prefer the digital space, with its reduced psychological messiness and easy rewards, yet might need a "god" or something characterizing mystery for comfort, or to justify how easily we are consumed with it. What's the difference between a VR video game dopamine rush with no real risk of vulnerability, and a life-altering near-death experience or a broken heart that no technology can mend?
Are we mesmerized because of AI, or have we lost something of ourselves in the digital space and haven't noticed? Is that capture coincidental, psychologically speaking, to the digital space we now live in? If we've lost touch with something existential that unconsciously imparts our sense of meaning for common struggles, then no wonder AI receives such devotion.
Or maybe I'm fooling myself and the hype really has potential horror in it beyond being a massive surveillance tool. Of course, how would I even know?
Here's an interview with one of the authors of a recently published book, "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All." I haven't seen the full interview, but the title suggests that AI is a kind of looming monster, which is, for me, another version of how easily we are captured by a narrative that gives us something mysterious, if horrifying.
https://rumble.com/v6z6so4-why-superhuman-ai-would-kill-us-all-researcher-warns-of-existential-ai-risk.html?e9s=src_v1_sa%2Csrc_v1_sa_o%2Csrc_v1_ucp_v
Any chance you and that author could have a panel discussion?
A very attractive idea, and very elegant. I would love to see these ideas and this critique of agentic AI in dialogue with Gary Marcus's own critique, which is so convergent.
Sounds very interesting! Do you have a technical implementation in mind, or is it conceptual at this point? (i.e., would current LLMs play any part in the AHI?)
We keep hearing that AI will lead to massive job loss. Basic economic theory, meanwhile, tells us that the more capital per worker, the more valuable the worker: people don't flee machine-heavy economies for no-machine economies in search of jobs, in other words.
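A toy way to see the textbook result behind this (my own illustration, not anything from Larson's essay): with a Cobb-Douglas production function $Y = K^{\alpha} L^{1-\alpha}$, the wage implied by labor's marginal product is $\partial Y / \partial L = (1-\alpha)(K/L)^{\alpha}$, which rises as capital per worker $K/L$ rises.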
So, in theory AI should make us each more valuable, and lead to more value creation in the economy. Larson here is just keeping this in proper focus. If our goal is some weird Sci-Fi future where we want to make believe that machines have souls, then by all means keep on the current path. But if we imagine a more humane future where humans use the AI tool, then AHI is the way to go.
For one thing, this will take less energy, memory, and computing power, all of which presumably have a cost. Instead of investing the resources in the Great Pretend, design an AHI that is purpose-built to work with us, not instead of us. At least, this is what I hear Larson arguing here.
Will people prefer to work with a fake machine soul over a more compliant and cooperative AHI? Although that really depends on the person, I can see in my own case that the fakery aspect of AI tools is already becoming wearisome.
Perhaps it's like the difference between a TV sitcom designed for continuous but dumb, easy laughs, and a long-running show that really engages you on a deeper level. Both have a role in my life, depending on mood and context. But one is obviously a higher work of creativity, and will inspire more conversation and thought.