Present Shock
Digital technology doesn't have to overwhelm us. But without ideas and effort it will.
In 1970, the American writer and futurist Alvin Toffler insisted the Western world was suffering from “future shock,” the challenge of the times: too much change, of too radical a kind, arriving too fast for our social brains. He tapped a nerve: an information technology revolution was underway (Intel’s first microprocessor, the basis of the modern computer, debuted in 1971). Today the “IT revolution” is old hat, and future shock has morphed into what the author and documentarian Douglas Rushkoff once called “present shock.” As the subtitle of his 2013 book puts it, present shock happens when everything happens now. The common thread is a hyper-technological consumerist society that happily ignores the lessons of the past and dismisses history itself as a compendium of folly and evil, or as just downright boring.
If you’re on the web and have a pulse, you’ve no doubt noticed that things happen quickly. Memes, debates, topics, gripes, and cancellations appear seemingly out of thin air, and responses and commentary follow almost immediately. Rushkoff is right; it’s a kind of “present shock.” We wake up to one world and go to sleep in another. But the idea that the world is changing too fast is juxtaposed with another that sings a different song. Scientific discovery and innovation aren’t on some exponential curve. Tweets are. And that’s a big difference. We see a rapidly changing landscape and assume we’re solving cold fusion or making flying cars as well. We’re not. Few stop to wonder whether all that “change” is just a lot of mindless, gossipy chitchat. Might we be “exponentially” changing into a shallow and confused society? Seems a defensible position. For that matter, seems a worry.
The celebrated early humanist Petrarch’s recovery of the lost letters of Cicero marks a watershed moment in European history. Here a new world, the early Renaissance, was born of the study of the old, which it then supplanted and improved upon. It was as if humanity wanted to find and nurture the best of itself, so the rediscovery of the greats of the classical Greek and Roman world joined forces with innovation for the future. But our culture today seems unconcerned and even dismissive about humans and their potential. Studying the past isn’t some valiant pursuit. Studying ourselves in a positive light seems like signing on to study silly error-prone organisms with bias. What a drag. This sort of self-flagellation would make little sense in a healthy humanistic world, but our modern obsession with the possibility of truly smart machinery keeps a self-important anti-humanism alive and kicking. If we, after all, are error prone, sort of stupid, and eminently biased, the quest for superior AI not only makes sense, but it also seems like a moral imperative. And. Here we are.
Big Data, data-driven AI, data analysis and the like are clearly important as means to business or scientific ends, but it’s downright bizarre to view them as a replacement for human ingenuity and possibility.
But here we are. In present shock for sure.
"If we, after all, are error prone, sort of stupid, and eminently biased, the quest for superior AI not only makes sense, but it also seems like a moral imperative. And. Here we are."
As you bring out clearly, this is the argument implicitly made by too many varieties of AI-optimist. There are several ways of rebutting it. An obvious way is to argue against its (stated) premise. Maybe humans aren't that error prone, stupid, biased, etc. Or if we are, maybe it's only sometimes, and that's okay: better to make our own mistakes and live with the consequences than to erode our very ability to make them.
Another approach, though oblique, is to examine the desirability of the things at which AI supposedly competes with humans. Let's grant, for the sake of argument, that AI is less error prone, less stupid, less biased, etc. Let's grant too that AI can produce some superior artifacts (digital art, e.g.). And let's grant that AI is better at brute statistical inference and other forms of induction. The question to ask is: So what?
I don't mean that we should deny the desirability of being able to perform various cognitive tasks like playing chess, calculating statistical probabilities, being less vulnerable to some human biases, and so on. I mean that we should question the overriding desirability of such things, the monomaniacal pursuit of their perfection, the obsessive focus on them.
What other cognitive ends might there be? Consider the use of computers in mathematical proof. Mathematicians have begun using computers, with their superhuman computational capacity, to prove certain theorems. What the computers "prove," however, is merely THAT a theorem follows. They do not show us WHY a theorem follows. The knowledge that the theorem follows is thus something we are essentially taking on testimony, without understanding how it is that the theorem follows. It might be useful to know that a theorem follows. But it's much more useful, not to say more deeply pleasurable, to understand why it does.
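To make the THAT/WHY contrast concrete, here is a minimal sketch in Lean 4 (my own illustration, not drawn from any particular computer-assisted proof). The first example asks the proof checker to certify a claim by brute computation, which tells us only that it holds; the second gives an explicit, named reason a reader can follow.

```lean
-- A tiny arithmetic claim, established two different ways.

-- 1. Brute computation: `decide` has the kernel evaluate the
--    proposition. We learn THAT it is true, but the resulting
--    proof term carries no explanation of why.
example : 2 ^ 10 = 1024 := by decide

-- 2. An explicit, human-readable reason: the library lemma
--    Nat.add_zero names exactly WHY the equation holds.
example (n : Nat) : n + 0 = n := Nat.add_zero n
```

Both are accepted by the checker; only the second conveys understanding, which is the distinction at issue here.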
There's more to cognitive life than knowing facts, calculating probabilities, making strategic moves, and any of the other "cognitive tasks" that the tech-optimists exalt.
"But our culture today seems unconcerned and even dismissive about humans and their potential. Studying the past isn’t some valiant pursuit. Studying ourselves in a positive light seems like signing on to study silly error-prone organisms with bias. What a drag. This sort pf self-flagellation would make little sense in a healthy humanistic world, but our modern obsession with the possibility of truly smart machinery keeps a self-important anti-humanism alive and kicking."
It seems to me that both those who believe "AGI is Nigh, rejoice/be afraid" and those who say "AGI is far away" actually *share* the conviction that humans are smart (it is just that they rejoice or fear over machines being as smart or even smarter).
Personally, I gather that neither optimism nor pessimism about the abilities of humans is ultimately best; realism is, including realism about the workings of our intelligence. However, I doubt whether that realism is attainable given those workings. https://ea.rna.nl/2021/11/02/masterorservant-dadd2021/