Nice post. I would also add that this “Tech nonsense” rests on at least two premises. First, that the mind is computational, ignoring all the arguments to the contrary. Second, it always implicitly posits mind-brain identity as an indisputable given, something that, whatever neuroscientists and philosophers of mind tell us, is far from obvious.
"Tech bros, c’mon. Get the self-driving cars working before lecturing the world about solving all its problems."
Yeah!
Promises, forecasts, and models are not reality.
Where's the beef, boys?
You are right: a lot of nonsense is generated by weak definitions of intelligence. There is a growing problem with definitions in the conversations of "tech bros" (and opinion leaders in general) that stems from their lack of basic philosophical education. With AI especially, it's baffling how we predict all sorts of things without having a clear definition of "intelligence".
Any conversation about AI should start with the author's accepted definition of intelligence. Saying "the term is impossible to define" and then predicting that "intelligent machines will evolve to eliminate us" is just silly.
(The problem is systemic, not limited to technology: “democracy” is acquiring some completely new definitions, and so is "freedom".)
Hi Eric,
Excellent point.
Thank you!
Loving what you're doing here. Keep up the interesting work!
One thing about Turing's approach to computability (algorithmicity, calculability) that tends to be completely overlooked:
He looked at what human calculators do — you know, those humans in ye olden tymes who would calculate with pencil and paper. He noticed that what they do is a reduced, simplified, rarefied activity: something achieved only by abstracting from — and something only making sense against the background of — human intelligence in its fullness. He thought, "Hey, I can take this abstracted, comparatively mindless activity and embody it in a machine. That's kinda cool."
He did NOT think, "Hey, this attenuated activity is the paradigm of, or the model for, intelligence."
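To make concrete just how stripped-down that abstracted activity is, here is a minimal sketch of a Turing-style machine in Python. The function name, tape encoding, and the unary-increment rule table are illustrative inventions for this comment, not anything from Turing's paper or the essay under discussion:

```python
# A sketch of the "mindless" rule-following Turing abstracted from human
# calculators: a finite rule table, a tape, and a read/write head.
# The specific machine below (unary increment) is just an example.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Apply (state, symbol) -> (write, move, next_state) rules until halting."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example rule table: append one "1" to a unary number ("111" -> "1111").
rules = {
    ("start", "1"): ("1", "R", "start"),   # scan right over the existing 1s
    ("start", "_"): ("1", "R", "halt"),    # write one more 1 at the end, then halt
}

print(run_turing_machine(rules, "111"))    # prints "1111"
```

Everything that looks intelligent here (choosing the problem, designing the rule table, deciding what counts as an answer) happens outside the loop, which is exactly the point above.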
Gödel’s first incompleteness theorem also sprang to mind as I read this (excellent) essay, particularly in the discussion of computers designing ever more complex computers.
Maybe the tech bros’ greatest achievement was to take lazy thinking and sell it to the public as genius (admittedly building on tangible initial achievements).
And maybe, a fortiori, the take-home is that intelligence, suitably incentivised, tends in the long run towards hucksterism and lazy BS.