Nice work, again. Happy to see some content on AI that speaks frankly to at least one side of the topic -- the oft-apocalyptic handwringing about Terminator-esque flights of fancy regarding the potential future of such systems.
We need "futurists" like we need witch doctors and Ouija boards.
According to past "futurists," right now we're all driving flying cars, and vacationing on Mars.
Thanks for the quick shot of reality.
You make good points about the mythologizing and culture behind AI and offer some useful history. Thank you.
I write software for a living. Computers are dumb beasts -- which is actually an insult to "dumb" beasts, but it gets to the point. I've looked into the algorithms behind AI, and it's really just advanced pattern matching applied to huge datasets. The result is a pastiche that appears intelligent because it can "answer" questions and string words together into more or less coherent sentences. AI doesn't understand the words it spouts, which is why it makes all sorts of nonsensical mistakes and makes up facts constantly. It doesn't know what a "fact" is; it just appears to. It doesn't "know" anything. It will never be sentient.
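The "advanced pattern matching" point can be made concrete with a toy sketch. To be clear, this is a minimal bigram (Markov-chain) illustration, not how a modern LLM actually works internally (those use trained neural networks), but the spirit of "predict the next word from observed data, with no understanding" is the same. The corpus here is made up for illustration:

```python
import random
from collections import defaultdict

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, max_words, seed=0):
    """String words together by sampling observed continuations."""
    random.seed(seed)  # fixed seed so the toy output is repeatable
    out = [start]
    for _ in range(max_words):
        continuations = model.get(out[-1])
        if not continuations:
            break
        out.append(random.choice(continuations))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = build_model(corpus)
print(generate(model, "the", 8))
```

The output reads as locally plausible English, yet the program has no concept of a cat, a mat, or sitting; it only knows which words have followed which.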
What we're dealing with here is a bunch of oldster spergs who:
1. don't have that much human feeling to begin with and so are easily fooled by human-seeming behavior
2. have a vested interest, both pecuniary and prestige-based, in the AI singularity
3. are too terrified of death to deal with it in a healthy way so they imagine a "singularity" in which they can "download their consciousness" into a computer and "live" forever.
Forgive all the scare quotes but they are necessary since all this is so ersatz.
Hi Peebo, thanks for your comment. I'm in sympathy! I wrote software for a living as well, and I'm quite familiar with the "dumb beast" phenomenon. I always enjoyed building systems, but as a human-centered engineering pursuit, something that provided a human challenge and (you hope) a satisfying and useful result. By the way, love the word "ersatz"! Haven't read that one in a while.
Nobody really believes the machines will become alive. That's a trick to distract from the real crimes, stealing art and craft and skill, destroying jobs.
The other fake threat is "What if AI takes control of nuclear weapons?" Predictive big-data software has been controlling nuclear weapons since the 1980s, and it hasn't taken over the world yet.
Good points!
I’m a little confused by your definitions here, Erik. I think I understand that you’re challenging the overly optimistic (or even pessimistic) ideas of what AI _can_ or _will_ do. Understandable from a pragmatic point of view: the future is unknowable, as you said. However, it seems that you’re avoiding a necessary definition of what you mean by “Human Intelligence.” You frequently allude to examples such as “driving a car,” but what feature of human intellect are you claiming AI does not possess? You said that ChatGPT only "generate(s) word sequences culled from human language on the web." What difference does this make? The concept of "thinking" is entirely abstract, is it not? It has only been attributed to *human* consciousness because _that_ is historically the definition of what thinking means.
In "Computing Machinery and Intelligence," did Turing not remove the need for definitions of "thinking" and "machines" from the problem of "can computers think?" Before Turing, wasn't the question simply dead because no objective definition even existed? By removing the need to define "thinking" and "machines," and by simply asking "what if a computer can convince you that it is a human?", does this not give us all we need? And forget the Chinese Room thought experiment: it only showed that the "machine" can be more than a single organism, yet it remains a "machine" capable of replacing human thought in many different ways.
Since this is the very tip of this iceberg I can sympathize with the belief that overreacting is probably a media problem. However, I think that under-reacting is likely more dangerous. LLMs have changed the AI game from "regurgitation" to novelty -- novelty in the sense that the generated responses aren't necessarily new ideas, but they're certainly new constructions of words. The results aren't just copied sentences from the web.
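The "new constructions of words" point can be shown even with crude pattern matching. A self-contained toy sketch (again, vastly simpler than an LLM, with a made-up corpus): recombining word pairs observed in a text yields sentences that never appear in that text verbatim.

```python
# Toy illustration: recombining observed word pairs can produce
# a sentence that never occurs in the source text.
corpus = "the sun rose over the hills and the moon rose over the sea"
words = corpus.split()
observed_pairs = set(zip(words, words[1:]))

# This sentence is nowhere in the corpus...
novel = "the sun rose over the sea"
assert novel not in corpus

# ...yet every adjacent word pair in it was observed in the corpus.
novel_words = novel.split()
assert all(pair in observed_pairs
           for pair in zip(novel_words, novel_words[1:]))
print("novel but locally familiar:", novel)
```

Whether that counts as genuine novelty or just fine-grained recombination is, of course, exactly the point under debate.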
Erik, I’m reading your book too, and you seem very well versed in philosophers and the modern history of philosophical thought. If you’re interested in writing a piece, or in pointing me in other directions, I’d love to read more about the philosophies that, in the name of “humanism,” end up thinking very little of human beings and their intelligence while overvaluing computers. E.g., Harari describes modern technology as “god-like,” which I find ridiculous.
Hi Alberto,
Yes, Harari is quite popular and I've read his books, but frankly I've never found his philosophical roots to go very deep (in fairness, he's a historian). THAT technology is changing the world is hardly debatable, but there's a cacophony of voices about HOW, and I think Harari's breezy futurism is not particularly helpful and is becoming all too common. It's a bit of a cliché, but you probably couldn't do much better than Huxley's Brave New World to expose scientism and its strange alliance with pseudo-humanism. For a more modern take, I still love Jaron Lanier's You Are Not a Gadget, and pretty much anything Nick Carr writes will also expose techno-folly. Thanks for your comment!
Yes, Huxley’s BNW does a great job in that direction. I will certainly read the other authors you pointed out. Thank you! And thanks for your work; the world needs to hear other ideas and more reasonable (and humbler) stances on tech. I’ll be joining this journey, too, as soon as I find some time for writing again.
I can respect the description of technology as "god-like" depending on the context. It is certainly omnipresent, and it is frequently treated as omniscient. In many ways it is treated as an objective source of "unbiased" authority, but this is all because humans are religious by nature. We're going to submit to something, and technology seems to be the thing we're worshiping now. We gave up map reading for pre-calculated GPS, and in doing so many of us have lost the ability to navigate our own neighborhoods, let alone across the country.