Discussion about this post

Tumithak of the Corridors

This is a well-written essay and it’s clear he’s thought deeply about the topic. But I think he’s falling into a familiar trap: expecting too much from LLMs. He critiques them for lacking memory, persistence, and goal orientation. And he can hardly be blamed.

AI has been overpromised by hype men for more than a decade now. We were told to expect autonomous agents and digital second brains. What we got are extremely good text generators. Larson’s disappointment, I think, stems from that mismatch between the promise and the reality.

If we stop expecting LLMs to carry our minds and instead treat them as modular reasoning tools, things tend to work out surprisingly well.

I’m not sure we need a whole new philosophy. Maybe just clearer boundaries and better-designed interfaces.

One more thought: systems that shift too much cognitive load onto users often fail, because the average user won’t (or can’t) maintain rules or tweak context contracts. We’ve seen this in educational tech, UI design, even productivity software.

AHI assumes a level of user initiative that, for most people, just isn’t there. And when systems overestimate their users, they get quietly ignored. That’s the real risk here.

Imagine your grandmother trying to manage persistent context settings across shifting epistemic contracts. Now imagine trying to explain that sentence to her.

Ondřej Frei

Sounds very interesting! Do you have a technical implementation in mind, or is it conceptual at this point? (=would current LLMs play any part in the AHI?)

