Discussion about this post

Ondřej Frei:
Thank you Erik! (The cleaned-up version reads nicely).

This is very interesting, thanks for providing us with this insight into your work. I have a meta-observation, or maybe a question: over the course of your interaction with the LLM, it must have been extremely hard not to succumb to our very human intuition and start thinking there's "something alive". You're in a very good position thanks to your expertise as an AI scientist, so I think you can benefit tremendously from the technology without falling for it. But after experiencing the interaction intensely for two weeks now, do you think the first-person style the LLM communicates in might be "too much" for ordinary people, meaning that at some point they'll just start believing it's "alive" or "conscious"? I read a very interesting proposal by an AI researcher, where she suggested replacing the anthropomorphized words coming from the LLM with more objective ones, akin to replacing "as an LLM, I don't understand" with "as an LLM, this model cannot represent" (I'm paraphrasing heavily).

What do you think? Should the models be trained to feel less human?

Ondřej Frei:
Hello Erik! TBH I'm not quite sure how to read this piece; I find the multiple colons difficult to follow (who is "you"? And when a new paragraph starts with something like "compelling protagonist" followed by another colon, is that still what ChatGPT said?). Some different indentation would possibly help, although I'm not sure whether Substack offers that (I'm reading on mobile now; maybe desktop displays it better?).
