4 Comments
Tumithak of the Corridors

This is a well-written essay and it’s clear he’s thought deeply about the topic. But I think he’s falling into a familiar trap: expecting too much from LLMs. He critiques them for lacking memory, persistence, and goal orientation. And he can hardly be blamed.

AI has been overpromised by hype men for more than a decade now. We were told to expect autonomous agents and digital second brains. What we got are extremely good text generators. Larson’s disappointment, I think, stems from that mismatch between the promise and the reality.

If we stop expecting LLMs to carry our minds and treat them as modular reasoning tools, things usually work out surprisingly well.

I’m not sure we need a whole new philosophy. Maybe just clearer boundaries and better-designed interfaces.

One more thought: systems that shift too much cognitive load onto users often fail.

Because the average user won’t (or can’t) maintain rules or tweak context contracts. We’ve seen this in educational tech, UI design, even productivity software.

AHI assumes a level of user initiative that, for most people, just isn’t there. And when systems overestimate their users, they get quietly ignored. That’s the real risk here.

Imagine your grandmother trying to manage persistent context settings across shifting epistemic contracts. Now imagine trying to explain that sentence to her.

Erik J Larson

Hi Tumithak of the Corridors, this is a nice critique. For what it's worth, I agree with you! I was thinking through this and thought I would throw it out as food for thought. I think it's going to put too much cognitive load on the user as well. By the way, I'm writing a book with this title, and this is not the same content. This is just me thinking out loud. I do think it's worth thinking these issues through, so I hope people interact with the post.

Ondřej Frei

Sounds very interesting! Do you have a technical implementation in mind, or is it conceptual at this point? (=would current LLMs play any part in the AHI?)

Timothy G. Patitsas

We keep hearing that AI will lead to massive job loss. Basic economic theory, meanwhile, tells us that the more capital per worker, the more valuable the worker: people don't flee machine-heavy economies for no-machine economies in search of jobs, in other words.

So, in theory AI should make us each more valuable, and lead to more value creation in the economy. Larson here is just keeping this in proper focus. If our goal is some weird Sci-Fi future where we want to make believe that machines have souls, then by all means keep on the current path. But if we imagine a more humane future where humans use the AI tool, then AHI is the way to go.

For one thing, this will take less energy, memory, and computing power, all of which presumably have a cost. Instead of investing the resources in the Great Pretend, design an AHI that is purpose-built to work with us, not instead of us. At least, this is what I hear Larson arguing here.

Will people prefer to work with a fake machine soul over a more compliant and cooperative AHI? That really depends on the person, but in my own case I can see that the fakery aspect of AI tools is already becoming wearisome.

Perhaps it's like the difference between a TV sitcom designed for continuous but dumb, easy laughs, and a long-running show that really engages you on a deeper level. Both have a role in my life, depending on mood and context. But one is obviously a higher work of creativity, and will inspire more conversation and thought.
