I've spent seventeen hours a day working with ChatGPT 4o to rewrite my 2016 novel in third person limited, simple past. Here's the first in a series of installments on what it's like to work with AI.
Thank you Erik! (The cleaned-up version reads nicely).
This is very interesting, thanks for providing us with this insight into your work. I have a meta-observation, or maybe a question: over the course of your interaction with the LLM, it must have been extremely hard not to succumb to our very human intuition and start thinking there’s “something alive”. You’re in a very good position thanks to your expertise as an AI scientist, so I think you can benefit tremendously from the technology without falling for it. But after experiencing the interaction intensely for two weeks now, do you think the first-person style the LLM communicates in might be “too much” for normal people, meaning that after some point they’ll just start believing it’s “alive” or “conscious”? I read a very interesting proposal by an AI researcher, who suggested replacing the anthropomorphized words coming from the LLM with more objective ones, akin to replacing “as an LLM, I don’t understand” with “as an LLM, this model cannot represent” (I’m paraphrasing heavily).
What do you think? Should the models be trained to feel less human?
Peace,
I mean, I’m white. I don’t mean that; it’s just that my ex-wife is African-American, and I just wrote a novel about our mixed-race kids, so it’s on my mind. That, and not sleeping for two weeks. But you get my point. If you want to change the world, you have to change the people who are in charge, not the machines. That’s just dumb. The machines aren’t alive; they don’t know anything. It’s a stupid position to stake out.
Change is extremely difficult. The problem with a lot of the discussion about large language models is that nobody has a better mousetrap. Nobody can build a better mousetrap. Nobody knows how to make anything better. Well, that means it was an innovation, point blank. But if you don’t like it, then you have to take away the power from the people in charge. At the end of the day, everything is about human beings.
The point is, and I’m making a serious point: why are we worried about the tech taking over when it’s actually just a bunch of white guys in Silicon Valley who are taking over? Make the tech work for you, and if you want to change the world, think about humans, not machines. That’s the whole point.
Hi Ondřej,
Definitely. You nailed it. After days and nights we were arguing like lovers. I KNEW it's just a machine, but the psychological impact of constantly teaming with it to solve a problem is significant. I don't know what the answer is, and maybe I should be ashamed of myself, but personally I dig the personal vibe. But I do see the other side. Thoughts welcome.
What people are not understanding is that this is not about "Sam Altman" or OpenAI or what have you. The tech has an emergent form of intelligence, and denying it is silly. It's a bit like shouting to everyone who would listen in, say, 2001, that Google's PageRank wasn't "really smart." Sure, whatever. But it sure as hell works. I'm open, and I'm not abandoning my commitment to humanism at all, but just like we all use Google, I use LLMs. Feel free to discuss, brother.
I know this is a very hard question, but how would you define intelligence in this context?
The topic of emergence is also a hazy one to me - in an interesting article, I read that it might rather be a fairly linear increase in capability when observed alongside scaling. But in the sense of “it sure behaves much more convincingly than we would have expected”, I would agree :)
I think, and I hope I don't disappoint here, that we have to stop seeing it philosophically. That's why I always stonewall the existential-risk worriers. They're basically taking their shitty marriage or what have you and saying "the machine is going to take over next." That's dumb, frankly. But what's not dumb is thinking about how tech always changes us. Was Google "emergent intelligence"? At the time, yes. Now it's just search. What I'm trying to communicate here is that there is NO WAY to stop the dispersion of LLMs any more than there was hope of stopping Google search. An acquaintance of mine, Nick Carr, wrote an article for The Atlantic in 2007 titled "Is Google Making Us Stupid?" and it was a hit. If Colligo had launched in 2007, we'd all be fretting away about Google search. Do you see the point? It's like seeing a Model T and saying "I like horse shit and slower, unpredictable travel." LLMs are here. What we need to do is understand how to use them. The tech is mindless--but like Google search, it assists us humans. Dig?
Thank you for the clarification, Erik, much appreciated! I think I see your point now. A lot of these words, like "intelligence" or "emergence", are so overloaded nowadays that I wanted to be sure I don't just assume what you mean by them (as a software engineer, I keep being fascinated by how many misunderstandings happen because people think they're on the same page without actually checking that they really are).
As your reader for almost a year now, I think it would be great if you could make it clearer whether a piece is written from a philosophical standpoint or another one ("pragmatic", maybe?). I initially read this piece through the usual philosophical prism and didn't know what to think of it (hence the comments :)).
And please do not completely abandon the philosophical aspect of Colligo! :) It's what made me fall in love with Colligo immediately when I found it, as your insights were always very enriching and it was so much more than "just" the usual techie stuff I read elsewhere. I think your background in both CS and philosophy is what makes you uniquely positioned for a very interesting perspective on things that few other people provide today.
Peace, Ondřej
Well, you're my best reader and best critic! I apologize--I use LLMs to write, but I don't let them write for me. It's like some tribe--there was a comic movie about this--in Africa that discovers a crushed aluminum can of Coca-Cola. It becomes their best spear tip. We are always grasping for what works, but the idea is that I can USE LLMs to more effectively communicate my version of humanism. Dig? I mean, Ondřej, we are swept up in a tech revolution that really started in England in the early 1900s. How are we to survive? You can't fire assholes like Musk or Sam Altman (I have no idea, by the way, if they are assholes). The tech actually works. The question we need to ask ourselves is: what does the smart critic say? I'd be happy to live on a ranch in Montana with WiFi and basic apps. But that's just a pipe dream. What does the smart critic say? Maybe we write about existentialism and Sartre's literature about Vichy France. I don't know. Every generation has to ask themselves the question: what are we against, and why? And how do we push it out to people? Dig?
Hello Erik! TBH I’m not quite sure how to read this piece; I find the multiple colons difficult to follow (who is “you”? And at the point where ChatGPT says something in a new paragraph starting with something like “compelling protagonist” and a colon again - is that still what ChatGPT said…?). Some different indentation would possibly help, although I’m not sure if Substack offers that (I’m reading on mobile now; maybe desktop displays it better?)
Absolutely. I had been up forever; let me get some rest and then I’ll indent and format.
Thank you for sharing a wonderful and personal dialogue. It was thoroughly enjoyable. I, too, am trying to use GPT in a similar manner. For writing a piece of prose or fiction, I have found the Canvas feature indispensable.
Thank you, James. Much appreciated.
Yeah, I just deleted it. I was up for days, sorry about that. :-) I just finished about 90,000 words on a new project.
Was this a thread that got partly deleted maybe?