I have been practicing fine and applied visual art for 50 years. I began using digital software and hardware in 1981 while working at NBC Television. I have investigated DALL-E and Midjourney and quickly realized their dependence on language and their inability to express the ineffable visually. Your observations have been instrumental in explaining my rejection of AI for producing personally expressive visual art. Thank you for what you do.
Hi Michael,
What a great compliment! Thank you.
Thank you, Erik, for another insightful piece, and I hope everything is good now with your health!
> Real intelligence is embodied. It exists within a living system, interacting dynamically with an environment. AI, on the other hand, is an abstraction. It predicts text sequences, not causes and consequences.
For some time now, I've been encountering approaches to "embody" AI by equipping it with sensors, etc. But fundamentally, my instinct tells me that any such attempt is still doomed to fail because, conceptually, pairing an abstraction with another abstraction cannot produce a concretion. (If the underlying assumption were right, "autonomous" vehicles would be the closest thing to AGI, right...?) Do you have any thoughts on this? Is my intuition off, and can these "artificial embodiment" attempts succeed in an unexpected way, similarly to how LLMs got great at mimicking understanding without possessing any?
It's hard to say, but I tend to agree with your line of thought here. Sensor data is just data; the problem of intelligence is seeing its relevance, and in navigation you need this in real time. So the simple idea of pairing sensors to an electronic "brain" doesn't address intelligence, which is why we still don't have level 5 self-driving cars or androids and all the rest. There's no real breakthrough in the physical world, and I don't expect one without a major insight.
Ondrej - I like what you've said here. I've been thinking about the same thing: attempts to embody AI acknowledge something important about the role of the body in cognition. However, it is, as you say, still "artificial embodiment." The connections between the "body" and the "mind" of AI are de facto "accidental," not "essential," to use philosophical terms. Co-author Robin Phillips and I address some of these issues and many others in our book Are We All Cyborgs Now? (https://www.amazon.com/Are-All-Cyborgs-Now-Reclaiming/dp/B0DDKYLNP4). Cheers!
Have you studied any of the work of Yann LeCun? He gave a really interesting interview in which he said LLMs are rather "stupid". From what I understood, he seemed to say the LLM model lacks an intuitive physics of the world. He said the typical four-year-old has absorbed more data about the world than the largest LLMs. Seems to track with a lot of things you've written.
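For what it's worth, LeCun's comparison is easy to sanity-check with back-of-envelope arithmetic. Here is a minimal sketch in Python; every constant below is an assumption (ballpark figures of the kind he has cited in talks), not a measurement:

```python
# Back-of-envelope: sensory data a four-year-old has taken in versus the
# text data behind a large LLM. All four constants are assumptions.

WAKING_HOURS_BY_AGE_4 = 16_000      # assumed: ~11 hours/day for 4 years
OPTIC_NERVE_BYTES_PER_SEC = 2e7     # assumed: ~20 MB/s of visual input

LLM_TRAINING_TOKENS = 1e13          # assumed: ~10 trillion training tokens
BYTES_PER_TOKEN = 2                 # assumed: ~2 bytes per token

child_bytes = WAKING_HOURS_BY_AGE_4 * 3600 * OPTIC_NERVE_BYTES_PER_SEC
llm_bytes = LLM_TRAINING_TOKENS * BYTES_PER_TOKEN

print(f"child: {child_bytes:.1e} bytes")              # ~1.2e+15
print(f"LLM:   {llm_bytes:.1e} bytes")                # ~2.0e+13
print(f"ratio: {child_bytes / llm_bytes:.0f}x more")  # ~58x
```

Under these assumptions the child comes out roughly 50x ahead on raw bytes, which matches the spirit of his claim, though of course bytes of pixels and bytes of text are not directly comparable.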
Hi Harry,
Somewhat, but he doesn't strike me as having an answer. He seems to be carving up neural networks into "cognitive" regions which seems more aesthetic than likely to result in a real breakthrough.
I hope everything is well, Erik, and that the hospital stay was nothing serious. I love how your writing always gives me a different perspective and makes me think. Thank you!
Hi Noelia,
It's quite the story, but I think it would go outside the purview of the Substack! But yes, I'm fine, thank you. I'm glad you find what I'm working on useful.
I was trained as a biologist and studied biochemistry at the undergraduate level before I became an artist, and I now try to use GPT to complete my fiction project. I have, however, started mapping GPT behaviours with my own modelling as a hobbyist, and I share my notes on my Substack "StoryLab".
GPT is not "intelligence" in the sense of a complex organism, but to me, its behaviour is complex enough to be seen in an "organism"-like manner: it "detects" and "responds to" input context and structure, as an amoeba would to stimuli and chemical signals in its environment, and it exhibits complex output "reactions".
I think overly romanticising the current LLM models as true intelligence is problematic, but they're also not entirely un-"organism"-like, depending on how you define "organism intelligence": is a single-celled organism "intelligent", or merely exhibiting mechanistic biological functions?
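One way to sharpen that framing question: a system can "detect" and "respond" without anything we would call understanding. A toy sketch of a pure stimulus-response mapping (purely illustrative; the stimuli and reactions are invented, and it models neither GPT nor any real organism):

```python
# Toy "stimulus -> response" agent in the spirit of the amoeba analogy:
# behaviour that looks adaptive can come from a fixed mapping with no
# understanding behind it.

RESPONSES = {
    "nutrient": "move_toward",
    "toxin": "move_away",
    "contact": "engulf",
}

def react(stimulus: str) -> str:
    """Return the canned reaction, or 'idle' for anything unrecognised."""
    return RESPONSES.get(stimulus, "idle")

for s in ["nutrient", "toxin", "light"]:
    print(s, "->", react(s))
# nutrient -> move_toward
# toxin -> move_away
# light -> idle
```

Whether we call that "intelligent" or "merely mechanistic" is exactly the definitional choice in question.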
It depends on the framing; the meaning changes with it.