Discussion about this post

Eric Dane Walker:

The philosopher Iris Murdoch once defined the human being as the animal that makes images of itself and then comes to resemble those images.

Ever since Turing was fatefully misunderstood, intellectuals have mythologized the ability to compute as the height of human intelligence, and so humanity has in too many respects become more and more computer-like.

If you've talked with a customer service representative following a bureaucratically begotten script on the other end of a customer service phone line, you've experienced a human being occupying the same uncanny valley as LLMs. (This isn't the representative's fault, of course.)

Ondřej Frei:

Thank you, Erik, for explaining why I've been feeling so "off" in my interactions with the tool! Until now I thought "uncanny valley" referred only to visuals.

There are so many aspects to unpack, it's almost overwhelming: e.g., the model using words like "I," "understand," "conscious effort," etc.

Another deeply confusing aspect: whenever the model says it's doing something (e.g., chain-of-thought reasoning), that in no way means it's actually doing anything like that. It often seems as though it is, but under the hood it's simply conditioning on the whole session's context and predicting the next token, over and over again.

This is extremely tricky for a lot of humans to grasp, because it's totally unlike what we do when we reason through a chain of thought ourselves.
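(To make the "just predicting the next token" point concrete, here is a minimal sketch of an autoregressive generation loop using the Hugging Face transformers library; the model choice and greedy decoding are illustrative, not how any particular chat product is actually deployed.)

```python
# Minimal sketch: autoregressive next-token prediction.
# "gpt2" is just a small open model used for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Let's think step by step:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(40):
    logits = model(input_ids).logits           # scores over the vocabulary at each position
    next_id = logits[0, -1].argmax()           # greedy: take the single most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Whatever "reasoning" appears in the output, the loop itself only ever does one thing: append the next predicted token to the running context.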
