Our Computational World is Also Boring and Simplistic
Why are we replacing history and culture and philosophy with surface-level gadget talk?
Hi everyone,
I hope this isn’t too heavy for the holiday season. Let me jump in: one aspect of our high-tech 21st-century culture is that it’s not neutral about other ways of living and viewing life. “AI” and the rest (smartphones, etc.) represent a set of commitments that amount to a philosophical position. Sure, tech solves problems (duh). But in the last decade we’ve ended up with much more than that. We’ve ended up signing up for, or simply accepting without challenge, a set of principles that, as far as I can tell, have a lot to do with sci-fi fantasies and not much to do with human beings and human progress. Yes, I can use my GPS to find the hotel. That’s the good part. The bad part is the culture that’s resulting. It’s a bad situation. It won’t correct itself. So we have to attempt to correct it.
We’ve inherited, and made, a simplistic world, which we insist on seeing as deep and innovative and meaningful. Some of it is. But much of it is a dodge, wishful thinking, a refusal to learn from history, and worse. It’s very dangerous to hold wrong ideas and not recognize them as such. I hope some of what follows proves thought-provoking.
After the holidays I plan on writing a longish piece on humanism and what we mean or should mean by it, and how we might start getting to it. Happy Holidays and as always, I appreciate the comments and support! That’s how I can keep this going.
Erik J. Larson
AI as Technological Kitsch
In 1984, the Czech-born writer Milan Kundera published his masterpiece, The Unbearable Lightness of Being. The novel is a love story set against the backdrop of the Soviet invasion of Czechoslovakia in 1968. Kundera wrote about the writers and artists who committed suicide after relentless, mendacious harassment by the Soviet secret police, who had inserted themselves into the social, intellectual, and cultural fabric of Prague. Dead and discredited, the Prague intellectuals then received a further, posthumous disgrace: disgusting encomiums at their funerals, where party members and officials would attest to the deceased’s lifelong devotion to the State. Soviet propaganda drove them to death; the same propaganda then portrayed their lives as nobly sacrificed to advance ideas they had in fact spoken out against, publicly and privately. What they hated, they were described as loving.
The Soviet propaganda was ruthless, but it was neither mindless nor arbitrary. It had a particular purpose: to purge the country of deeper (and contrary) expressions of the meaning of a country, a people, and a life. The Soviets were purging Prague, and all of Czechoslovakia, of its shared history, its traditions, and its sense of what was valuable and worth fighting for. Once the free-thinkers were silenced, the Soviets would be free, like painting a wall after first sandblasting it, to impose their worldview without serious or organized opposition. Kundera’s story is a trenchant and often tragic account of the value of human life, and of how particular beliefs and ideologies can attempt, but never quite manage, to obfuscate and gloss over all that is meaningful to an individual and to a society. Kundera called the Soviet culture foisted upon the defeated Czech people kitsch.
Technological Kitsch
Kitsch is a German word that, while it usually refers today to cheesy or tacky artwork and decor, originally meant exaggerated sentimentality and melodrama in any realm. The intelligence errors at the heart of the AI worldview (the beliefs, that is, not the science) have given rise to a modern and particularly pernicious form of kitsch. Dreams of superintelligent computers are not Soviet propaganda, and no one is coercing us to believe in the rise of the machines. But the two share a basic move: replacing complex and difficult discussions about individuals and societies with technological stories that, like Soviet culture, overwrite older ideas with dangerously one-dimensional abstractions.
Kitsch is a word whose meaning and use have changed over time. The original German definition in some ways differs from the meaning I intend to explore here, but two essential ingredients of the original meaning should make my claim clear enough. First, kitsch involves a simplification of complicated ideas. There must be a simple story to tell. Second, it offers easy solutions that sweep away, with emotion, the questions and confusions people have about the problems of life rather than addressing those questions with serious, probing discussion. Thus, a perfect example of kitsch is the dreamy idea that one day an awe-inspiring android with superintelligence will remake human society and its older traditions and ideas, and we’ll enter a new era, thankfully free of old arguments about God, mind, freedom, the good life, and the like. Beautiful machines (or machines with beautiful intelligence) like “Ava” in the 2015 sci-fi film Ex Machina, portrayed by Alicia Vikander, will remove the hard facts of human existence. This simplified world is kitsch, technological-style. Like Soviet propaganda, it might horrify or mollify, but it gives us a new story that writes over and makes unnecessary what was true before, and the old reality disappears.
Alan Turing, for all his contributions to science and engineering, made possible the genesis and viral growth of technological kitsch by first equating intelligence with problem-solving. I. J. “Jack” Good later compounded Turing’s intelligence error with his much-discussed notion of ultraintelligence, proposing that the arrival of intelligent machines necessarily implied the arrival of superintelligent ones. Once the popular imagination accepted the idea of superintelligent machines, the story of human purpose, meaning, and history could be retold within the parameters of computation and technology. But ultraintelligent machines are fanciful, and pretending otherwise encourages the unwanted creep of technological kitsch, usually in one of two equally superficial ways.
At one extreme we hear a tale of apocalyptic or fearsome AI, a sort of campfire horror story. At the other we encounter utopian or dreamy AI, which is equally unmerited. If we take either form of AI’s kitsch seriously, we end up in a world defined only by technology. This is a theme I will return to, because it exposes the core problem with futuristic AI. As Nathan, the genius computer scientist in Ex Machina, puts it: “One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.” In truth, it’s unclear that any computer will ever look back at all. Evaluating that popular sentiment requires a deep dive into the meaning of existence, life, consciousness, and intelligence, and into the differences between ourselves and computation and its many technologies. Kitsch prevents us from grappling with human nature and other serious philosophical questions. It shouldn’t, as Kundera knew all too well. Kitsch typically has its roots in a larger system of thought. For the communists, it was Marxism. With the inevitability myth, it’s technoscience. We inherited the technoscientific worldview most directly from the work of Auguste Comte.
In future posts I’ll try to unpack Comte beyond what I said in the book, and flesh out what humanism in the 21st century might look like. Big task, as ideas have really shifted in the last two decades. Looking forward to it.
Erik J. Larson
The idea that intelligence has to do with problem-solving (and the imagining and planning that come with it) isn't necessarily wrong, I think. What is wrong is to equate 'problems' with '*logical* (i.e. discrete) puzzles'.
We humans are pretty poor at discrete logic, yet we have for ages equated being smart with being good at logical reasoning. In reality, as Andy Clark has said, we're better at frisbee than at logic. Navigating a difficult, not-yet-known situation (both physically and socially) is the forte of our intelligence. We have taken solving logical puzzles as the measure not because we are good at it, but because we are bad at it and thus find it difficult. This is not that weird: we are bad at it in an absolute sense, but relative to the other species on this planet we are the best at it, and that little bit of skill has made a difference (next to having relatively large brains). It has given us the reliability (and the independence from time and place) that comes with discreteness. So, being able to do logic is a real bonus for our intelligence.
The arrival of the perfect logic machine (the Turing machine) was seen as the logical step towards superintelligence. The current neural-net AIs use a different kind of mechanism, namely (analog) 'weights'. These are supposed to be continuous rather than discrete (they are 'real' numbers), but as long as we approximate them with discrete bit patterns on Turing machines, we are fooling ourselves technically in roughly the same way that LLMs fool us by approximating meaning with token-order predictions.
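To make that concrete, here is a tiny Python sketch (my own illustration, nothing from the article or any particular system) showing that even a float32 'weight' is just a 32-bit pattern underneath, so a 'real' number like 0.1 can't actually be stored:

```python
import struct

# A "real-valued" weight on digital hardware is a finite bit pattern.
# float32 has only 2**32 possible values, so the continuum of the reals
# is approximated by a discrete grid.
w = 0.1  # not exactly representable in binary floating point

packed = struct.pack('>f', w)                 # round to 32-bit float
bit_pattern = struct.unpack('>I', packed)[0]  # reinterpret as an integer
stored = struct.unpack('>f', packed)[0]       # the value actually kept

print(f"bit pattern:  {bit_pattern:032b}")
print(f"stored value: {stored:.20f}")  # 0.10000000149011611938..., not 0.1
```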
Where in humans the logic arises out of analog machinery (as it does in digital processors: a transistor is a very analog thing), with digital computers we try to let analog (and even chaotic) behaviour arise out of massive amounts of discrete logic. That is fundamentally a doomed route; however, if we accept the enormous inefficiency, we can push that envelope and create some useful tools. But AGI on digital technology? No way.
Note that Google has already partly given up on using 'floats': their latest Gemini reportedly uses a data type called int8 instead of float32, float16, or bfloat16. This probably enables them to have many more (but far less precise) parameters. In the end, the expressive power of these models is bounded by the total number of bits across all the parameters (not by the number of parameters). Many researchers assume that you need only a limited number of bits, but that assumption rests on the regularity of the analog signals, and one can seriously doubt its validity in real biological systems.
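For readers who want to see the mechanics, here is a generic sketch of symmetric int8 quantization in Python. This is my own illustration of the standard technique, not Google's actual (non-public) recipe, and the weights are made-up random data:

```python
import numpy as np

# Quantize float32 weights to int8: map the largest-magnitude weight
# to +/-127 and round everything else onto that 256-level grid.
rng = np.random.default_rng(0)
w_fp32 = rng.normal(0.0, 0.02, size=1_000).astype(np.float32)

scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.clip(np.round(w_fp32 / scale), -128, 127).astype(np.int8)
w_back = w_int8.astype(np.float32) * scale  # dequantize for use in compute

print(f"float32 storage: {w_fp32.size * 32} bits")
print(f"int8 storage:    {w_int8.size * 8} bits (4x fewer)")
print(f"max rounding error: {np.abs(w_fp32 - w_back).max():.6f}")
```

The 4x storage saving is exactly the trade described above: the same bit budget buys four int8 parameters for every float32 one, at the cost of precision per parameter.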
Nice article, though. Kundera's book was indeed a wonderful read.
Erik, great article and I also enjoy your readers’ comments. I have no technical knowledge of AI, but lots of questions and concerns about--for lack of a better phrase--the human side of the equation. It’s what attracted me to your Substack.
I do retain a grasp of high school statistics, and AI seems to me to just be a super-hyped statistical model, and all models just say what they are told to say. Put me in the skeptical camp.
I’ve learned a lot in the few months I’ve been a subscriber, and I look forward to your upcoming essays. My primary concern is that the AI drivers and true believers are so focused on the cool things AI can do that they have lost perspective on its impact on people and society as a whole. The memoirs of the Columbia University biochemist Erwin Chargaff are one of the best critiques of this failure of science in general I have ever read. Chargaff was also one helluva writer, but his integrity and aloofness cost him dearly in academia. (Didn’t help that he basically accused Watson and Crick of taking credit for his groundbreaking work on the DNA double helix.)
I wish you peace and goodwill in the coming year.