"If we, after all, are error prone, sort of stupid, and eminently biased, the quest for superior AI not only makes sense, but it also seems like a moral imperative. And. Here we are."
As you bring out clearly, this is the argument implicitly made by too many varieties of AI-optimist. There are several ways of rebutting it. An obvious way is to argue against its (stated) premise. Maybe humans aren't that error prone, stupid, biased, etc. Or if we are, maybe it's only sometimes, and that's okay — best to make our own mistakes and live with the consequences than to erode our very ability to make mistakes.
Another approach, though oblique, is to examine the desirability of those things with which AI is supposedly in competition with humans. Let's grant, for the sake of argument, that AI is less error prone, less stupid, less biased, etc. Let's grant too that AI can produce some superior artifacts — digital art, e.g. And let's grant that AI is better at brute statistical inference and other forms of induction. The question to ask is: So what?
I don't mean that we should deny the desirability of being able to achieve various cognitive tasks like playing chess, calculating statistical probabilities, being less vulnerable to some human biases, and so on. I mean that we should question the overriding desirability of such things, the monomaniacal pursuit of their perfection, the obsessive focus on them.
What other cognitive ends might there be? Consider the use of computers in mathematical proof. Mathematicians have begun using computers, with their superhuman computational capacity, to prove certain theorems. What the computers "prove," however, is merely THAT a theorem follows. They do not show us WHY a theorem follows. The knowledge that the theorem follows is thus something we are essentially taking on testimony, without understanding how it is that the theorem follows. It might be useful to know that a theorem follows. But it's much more useful, not so say more deeply pleasurable, to understand why it does.
There's more to cognitive life than knowing facts, calculating probabilities, making strategic moves, and any of the other "cognitive tasks" that the tech-optimists exalt.
Very much agree. "Ours is not to reason why; ours is but to do and die." But AI systems are just owned by companies with CEOs--we're really not "reasoning why" to THEM, not the systems.
Your distinction between THAT and WHY is quite apt. I would worry that this is not any possible path forward, and we'd better address it and attempt to change it while we can.
"But our culture today seems unconcerned and even dismissive about humans and their potential. Studying the past isn’t some valiant pursuit. Studying ourselves in a positive light seems like signing on to study silly error-prone organisms with bias. What a drag. This sort pf self-flagellation would make little sense in a healthy humanistic world, but our modern obsession with the possibility of truly smart machinery keeps a self-important anti-humanism alive and kicking."
It seems to me that both the ones that believe "AGI is Nigh, rejoice/be afraid" and the ones saying "AGI is far away" actually *share* the convictions that humans are smart (it is just that they rejoice/fear about machines being as smart or even smarter).
Personally, I gather neither optimism or pessimism about the abilities of humans is ultimately best, but realism is, including realism about the workings of our intelligence. However, I doubt if that realism is attainable given those workings. https://ea.rna.nl/2021/11/02/masterorservant-dadd2021/
Yep. This is true, and I understand your point. Wouldn't it be better, though, all things considered, if we could change our circumstances by intelligent thought and will? We could then be happy in a better world we had helped make. Thanks for your comment. It's an important point to keep in mind.
"If we, after all, are error prone, sort of stupid, and eminently biased, the quest for superior AI not only makes sense, but it also seems like a moral imperative. And. Here we are."
As you bring out clearly, this is the argument implicitly made by too many varieties of AI-optimist. There are several ways of rebutting it. An obvious way is to argue against its (stated) premise. Maybe humans aren't that error prone, stupid, biased, etc. Or if we are, maybe it's only sometimes, and that's okay — better to make our own mistakes and live with the consequences than to erode our very ability to make mistakes.
Another approach, though oblique, is to examine the desirability of the things in which AI supposedly competes with humans. Let's grant, for the sake of argument, that AI is less error prone, less stupid, less biased, etc. Let's grant too that AI can produce some superior artifacts — digital art, for example. And let's grant that AI is better at brute statistical inference and other forms of induction. The question to ask is: So what?
I don't mean that we should deny the desirability of being able to perform various cognitive tasks like playing chess, calculating statistical probabilities, being less vulnerable to some human biases, and so on. I mean that we should question the overriding desirability of such things, the monomaniacal pursuit of their perfection, the obsessive focus on them.
What other cognitive ends might there be? Consider the use of computers in mathematical proof. Mathematicians have begun using computers, with their superhuman computational capacity, to prove certain theorems. What the computers "prove," however, is merely THAT a theorem follows. They do not show us WHY it follows. The knowledge that the theorem follows is thus something we are essentially taking on testimony, without understanding how it is that the theorem follows. It might be useful to know that a theorem follows. But it's much more useful, not to say more deeply pleasurable, to understand why it does.
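To make the contrast concrete, here is a toy sketch in Lean 4 (my own illustration, not drawn from any of the famous computer-assisted proofs): the first example is certified by brute kernel computation and tells us only THAT the equation holds, while the second spells out, step by step, WHY a general identity holds, citing the principle behind each move.

```lean
-- THAT: the proof assistant checks the claim by raw computation;
-- the resulting certificate carries no explanation a reader could learn from.
example : 123 * 456 = 56088 := by decide

-- WHY: a fact argued step by step, each rewrite naming the general
-- principle (commutativity, associativity) that justifies it.
example (a b c : Nat) : (a + b) + c = (c + b) + a :=
  calc (a + b) + c = c + (a + b) := Nat.add_comm (a + b) c
    _ = c + (b + a) := by rw [Nat.add_comm a b]
    _ = (c + b) + a := (Nat.add_assoc c b a).symm
```

Both are accepted by the checker, but only the second leaves a human reader with a reason.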
There's more to cognitive life than knowing facts, calculating probabilities, making strategic moves, and any of the other "cognitive tasks" that the tech-optimists exalt.
Hi Eric,
Very much agree. "Ours is not to reason why; ours is but to do and die." But AI systems are just owned by companies with CEOs--when we stop "reasoning why," we're really deferring to THEM, not to the systems.
Your distinction between THAT and WHY is quite apt. I worry that this is not a viable path forward, and we'd better address it and attempt to change it while we can.
Thanks, Eric! As always.
Erik J. Larson
Erik - great words. Keep writing your updates about our failure to understand why generative AI is a dead end.
"But our culture today seems unconcerned and even dismissive about humans and their potential. Studying the past isn’t some valiant pursuit. Studying ourselves in a positive light seems like signing on to study silly error-prone organisms with bias. What a drag. This sort pf self-flagellation would make little sense in a healthy humanistic world, but our modern obsession with the possibility of truly smart machinery keeps a self-important anti-humanism alive and kicking."
It seems to me that both those who believe "AGI is Nigh, rejoice/be afraid" and those who say "AGI is far away" actually *share* the conviction that humans are smart (it is just that they rejoice/fear about machines being as smart or even smarter).
Personally, I would say neither optimism nor pessimism about the abilities of humans is ultimately best; realism is, including realism about the workings of our intelligence. However, I doubt that such realism is attainable, given those workings. https://ea.rna.nl/2021/11/02/masterorservant-dadd2021/
Hi Gerben,
Your talk here is really good, and central to the present discussion: https://ea.rna.nl/2021/11/02/masterorservant-dadd2021/
I recommend everyone watch it! As usual you are getting under the surface to the core questions.
Thanks,
Erik
Thank you Erik.
“Be happy for this moment. This moment is your life.”
— Omar Khayyám (1048-1131)
Hi Tor,
Yep. This is true, and I understand your point. Wouldn't it be better, though, all things considered, if we could change our circumstances by intelligent thought and will? We could then be happy in a better world we had helped make. Thanks for your comment. It's an important point to keep in mind.