[
Hi everyone, here’s an after-the-fact addendum I’ve put up top, in hopes of clarifying what (I thought I was) saying. Here goes!
(1) AI drives the web. Most any site you can think of is driven by algorithmic selection, which is data-driven AI. TikTok is pure algorithmic selection. Meta uses algorithmic selection defined over a social network whose nodes represent people (data points). Everything on the web is AI. LLMs are just more AI, and they too are becoming “everything.” The few “safe spaces” here, so far, are sites like Substack and the podcast platforms—these remain driven primarily by human content.
The primary problem with algorithmic content selection—a point I don’t really make in my original post below—is that it’s quintessential data-driven AI. “Data-driven” just means we have centralized nodes, or server farms, owned by a small number of rich people who claim legal ownership of your data (in most cases) and use it to make a profit through contextual advertising. “Data-driven AI” works much better than what we tried before. It’s also ruining the world.
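For the technically curious, here’s a minimal sketch of what “algorithmic selection” amounts to, written in Python. To be clear, every name and number below is hypothetical—this is no platform’s actual code—but it shows the mechanism: once the objective is engagement, emotionally charged content rises without anyone intending the polarization.

```python
# A toy engagement-ranked feed. Purely illustrative: real platforms use
# learned models over thousands of signals, not two hand-set numbers.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # model's guess at likes, shares, watch time
    outrage_score: float         # proxy for emotionally charged content

def rank_feed(posts: list[Post], outrage_weight: float = 0.5) -> list[Post]:
    """Order posts by expected engagement, descending."""
    return sorted(
        posts,
        key=lambda p: p.predicted_engagement + outrage_weight * p.outrage_score,
        reverse=True,
    )

feed = rank_feed([
    Post("measured-essay", predicted_engagement=0.2, outrage_score=0.1),
    Post("angry-hot-take", predicted_engagement=0.6, outrage_score=0.9),
    Post("cat-video", predicted_engagement=0.7, outrage_score=0.0),
])
print([p.post_id for p in feed])  # ['angry-hot-take', 'cat-video', 'measured-essay']
```

Nothing in that sketch “wants” polarization; the objective function does the selecting. To continue: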
(2) Since about 2012, what we call “AI” has been gaining power and “winning.” For over a decade now, AI has been solving problems it couldn’t solve before, becoming ubiquitous, and getting adopted by most of the world, especially the Western world. So the fact of AI’s inferential success is another factor in:
(3) Profoundly shifting intellectual discourse from principled philosophical discussion to fairly blatant political rancor and just flat-out meanness. This politicization of our intellectual world has been selected for by (1) and (2). The success of AI means it’s awkward to lampoon it as happened before, circa the late twentieth century. And the fact that it selects for Tower of Babel discussions—literally, it selects for angry, polarized responses—means that, between feeling awkward about the success of a (to some) godless technology and constantly interacting with it, we’re more likely to end up somewhere in Politics Land than in Objective Ideas-ville. Well, that’s my thesis. This is my crazy little idea.
The tricky point here is me daring to mention that even my evangelical friends, who used to guffaw heartily about stupid AI systems and the vast chasm between man and machine, are now dropping that line, essentially adopting AI technology as normal (how could they not—become the Unabomber?), and quite often shifting from disparaging godless materialism to disparaging godless Democrats. “AI” is still a punching bag, but not really in the “old way,” when it sort of didn’t work and you never encountered it. Plus, there’s nothing we can really do about “AI”—give me your phone! Give it to me! We can at least vote politicians out of power.
So: there’s the technical, actual selection of human crappiness via algorithms on the web, (1), and there’s the de facto selection by virtue of the technology maturing and growing into every available cultural space, (2). All of this, it seems to me, creates or exacerbates or just encourages the Tower of Babel syndrome:
(4) We don’t know how to have principled debates anymore, and the debates we do have don’t look or sound like the traditional ones. We’ve found new enemies, and now largely argue in the orbit of the AI selection algorithms. This is a bit like someone dropping LSD in the punch at a party, watching as everyone starts yelling and running around in rage and fear, and noticing that no one is interested in blaming the LSD for their new predicament. We just update the discussion, as if LSD is supposed to be in punch. Some of this I suspect is due to worries about conceding some version of technological determinism—I’m suggesting the culture was effectively, forcibly shifted not by godless Republicans or Democrats but by technology, which made godless Republicans or Democrats more, errr, objectionable. (The website 4chan accounted for many of the actual comments made by the Jan 6 crowd. 4chan is a joke website that deliberately invents stuff and then sells it to everyone else as true. That’s humans, sure, but it’s also “AI.”) Anywho, technological determinism is a bridge too far for me, but it’s closer to true than a quick dismissal allows. And:
(5) My point about Hamas and Islam was again to show how crazy our discussions are getting in the West—I brought up the Columbia encampments—and, I suppose more substantively, to suggest that many Muslims, certainly those in predominantly Muslim countries, are not switching out of their religion-first dialogue or modus operandi as quickly as Christians and the rest of us in the materialistic, algorithmically driven West. Our entire way of discussing issues has changed radically in basically a decade. In other places, not so much. There’s nothing to read into my comments here other than that they show, I hope, how the connection between “religiosity” and technoscience is multi-faceted, and as far as I can tell our culture has shifted the most, and the most quickly.
This is to be expected—stuff happens in the States, then spreads everywhere else. This, I was hoping, was an entirely unproblematic and obvious point. The Religious Right folks in the States are now using “AI”—you are, one way or the other—to blast Biden and Democrats. Vice versa, of course. (Many) Muslims are still decrying mechanism and materialism, and wanting to talk directly about religion, not politics. They probably think “AI” is part of the problem, expressed in directly religious terms. (As I mention below, AI is also worldwide. That’s true as well.) This is generalized, I realize, and it’s now occurred to me that I can’t possibly do this discussion justice. (I’m now wishing I had taken a comedic line on it.) But it’s a point that seems, at least around the edges, true. Culture, religion, technology, and politics here in the consumerist West are “tightly coupled.” Technology, as far as I can tell, has changed us more than any of the rest.
Soooo, it seems it’s impossible to develop this thesis much (ironically, because the thesis itself is that we’re in this belligerent political war!). The “Tower of Babel” idea is that we can’t resolve issues, can’t agree on truth, and can’t even agree on whether there’s LSD in our punch. I guess in a “meta” sense, that makes the point? Anyway, my original post is below.
]
Hi Everyone,
In this post I want to return to a theme I’ve hit on before, a theme I find fascinating and more than a little disturbing. It’s the question of modern culture and what’s happening to it—to us—and while in broad strokes my question here is not unique, the way I’ll present it here seems to be insufficiently appreciated, even among the obvious winners and losers in the new Culture War. So let me get started.
(And by the way, I’m trying to keep a lot of my writing free, but I do write for a living, so if you’re enjoying Colligo please consider a paid subscription. It really means a lot to me. It’s nice to buy groceries and pay mortgages. Thank you.)
The New Tower of Babel
We all know Babel (no, not the language-learning company). It’s in Genesis: the Biblical story about God making so many languages and dialects and (let’s add) opinions that no one could understand anyone else or effectively communicate. One legacy of the triumph of digital technology and AI in every corner of our existence is that we’ve recreated this Babel. Let me try to unpack this, and bear with me if it seems I’m saying something derogatory about one belief or another—my aim is to avoid that game and try to explain the mechanism, the social and cultural story, by which our new Babel is ascendant, and the old ways of arguing and understanding each other are on the decline, if not on life support.
Start with an oldie but goodie: the old war between scientific materialists and folks with traditional religious notions, like immaterial minds (think: souls) given or designed by a god, or more to the point, a Judeo-Christian God. That was an orienting debate for decades, nay, centuries. But we’ve Babel-ed it. We’ve Babel-ed it good. As we’ll see, it’s not just that debate either. More and more, it seems it’s reasoned debate itself.
Some housekeeping, if you’ll oblige. I have many friends and former colleagues who are traditionally religious. One of my former editors is an Orthodox Jew. Other writers and friends are Evangelical Christians. I’ve written for outwardly secular organizations with thinly disguised religious aims, and I’ve written at least a couple of articles—long ago—for quasi-religious journals. I was in Paris last year having lunch with a former editor, a secular-ish Jew as far as I can tell but broadly sympathetic to traditional Judeo-Christian values and objectives. He mooted the Tower of Babel idea as a symbol of this age. We went on to talk about LLMs, which he thought would morph into AGI at some point (his take on it confused me coming from him, which I suppose made his Tower of Babel point; I thought he would be skeptical of AGI, or at least of the notion that LLMs would acquire general intelligence). Cut to the chase: our new Babel is a symptom of an underlying condition. The condition is simple: technology won. Our new Culture War isn’t a war of ideas but a series of increasingly arbitrary and belligerent political skirmishes. It’s “binary” still—one side against the other—but less concerned with serious thought. It’s what my people believe, my tribe, and what you people—your tribe—do not. Everything is political today, marked by what David Brooks, in his wonderful longform piece in The Atlantic, “How America Got Mean” (September 2023), called a “sadistic striving for domination.” That’ll Babel you!
If you are asking politics to be the reigning source of meaning in your life, you are asking more of politics than it can bear. Seeking to escape sadness, loneliness, and anomie through politics serves only to drop you into a world marked by fear and rage, by a sadistic striving for domination. Sure, you’ve left the moral vacuum—but you’ve landed in the pulverizing destructiveness of moral war.
As far as I can tell, the elephant in the room here is the triumph of digital technology and especially AI. There’s nothing evil about technology or AI—it’s the effect our latest totalistic technological existence has had on our philosophical, deliberative, thoughtful, and (yes) religious selves. If it’s evil, we’re evil. It’s just that AI’s success in its Big Tech guise—big, centralized data analysis—has hastened our own cultural confusion and demise.
AI Was Once an Intellectual Touchstone
Artificial Intelligence used to be a de facto target for the religious right because it clearly represented one side of a well-defined and age-old cherished debate, between the Dan Dennetts of the world and the C.S. Lewises (to bridge a few generations). Between godless scientific materialists and their silly idea about AI, and soulful religious folks, who know that human minds are God-given and therefore inherently special and superior. As far as I can tell, the old religious arguments about AI were both substantive (or phenomenal) and inferential. They were about both the mind or consciousness as a “thing” separate from our brains, and about the cognitive powers and limits of mechanical systems. In both cases, AI was supposed to be woefully inadequate. Not a mind. Not given by God. Silver medal at best.
Today, I think the consciousness debate is still possible, but increasingly the arguments about inferential powers seem to be slipping away. Maybe an AGI will be like philosopher of mind David Chalmers’s philosophical zombie: super smart like a human brain but no lights on inside. Or maybe not. My point is that we’ve already largely abandoned the inferential claim. No one wants to be on the wrong side of history. LLMs are surely mindless, but they’re also clearly chipping away at inferential claims.
To wit: warring against “AI” today in the old manner seems fruitless and more than a little stupid—and probably also hypocritical. It’s a bit like warring against cars or X-ray machines or artificial limbs. AI is part of our day-to-day existence. This is part of the secret to how the old dichotomies and reductionisms morphed into a political maelstrom and an acquiescence to yesterday’s “enemy,” an enemy that now sits on your phone and is happily ensconced in the family car, as well as your laptop. The appliances in a modern kitchen now likely feature “AI.” Warring against it is going to become a problem.
He [it] is everywhere!
Here we see the first pivot toward the New Political. Technology is the ground we’re all standing on. The new war must be against someone or some group (not an idea!) who is screwing something else up, presumably by yakking about issues and venting rage and anger for causes we’re suspicious of or don’t support. The new Babel requires a new enemy. “Mechanism” is too abstract today, and, cynically, debating it gets too few likes online. That tribe over there isn’t abstract—they’re chanting “Death to Israel” on the Columbia campus. Welcome to the new world. Welcome to the new Babel. It’s now almost impossible to speak the same language, because the fragmentation into tribal battles was also occasioned by the loss of a common agreement about determining right and wrong. By the loss of a common language, a lingua franca.
The problem with turning away from the old orienting debates about the nature of the mind or soul, the good life, and the limits of technology and cognitive science and AI is simply that political wars no longer happen explicitly at the level of ideas, but of identities and groups, which among other things are great at multiplying beyond reason and control. Here’s an example. When I was in Palo Alto before Covid, there was “a wall of skepticism”—as famed entrepreneur and venture capitalist Marc Andreessen memorably put it—about vaccinations among those left of center (in Palo Alto, this means pretty much everyone). Though Trump ended up broadly supportive of Covid vaccines, the center right and the “MAGA” crowd ended up pushing an anti-vax agenda. The fine folks in Palo Alto then simply developed collective amnesia, and began a vociferous and highly condescending campaign in favor of vaccinations. Free speech got bullied, tempers flared, and the notion that, as Bill Maher put it, there’s such a thing as the science got so roundly politicized that Western science itself largely submerged into the murky putrescence of politics gone amok. To step into this blood-sport, napalm-your-village mess and attempt to resuscitate talk about—what?—scientific materialism would also get you shouted down or ignored. What’s the point?
Here’s what happened. Politics just swallowed religion. It just swallowed science. And it just swallowed you.
Unfortunately, Technology Is to Blame.
I sometimes get accused of giving digital technology too much credit (or blame) for Babel-ing our culture. I don’t think that’s true. It’s almost impossible to imagine, in the span of a decade or two, such an about-face in our society if we weren’t now essentially living online and cohabiting with digital technology and the web, and with this once-suspect, atheistic, materialistic idea of “AI.” I don’t have access to a counterfactual of course (wormhole, anyone?), but reflect with me on how much changed so quickly, and on how nearly every upheaval, discussion, trend, and violation of the law is now some major online spectacle, with tribes throwing feces and spears at each other, amplifying disagreements into yelling matches without a shared notion of truth. Reflect on all that. How can it not be traceable back to the digital technology revolution that has defined the millennium so far?
Telegram Just In, Sir. It Says “Duck.”
Last night I watched a documentary on the Watergate scandal. As it unfolded, the public took to sending telegrams to committee members and other politicians voicing their disgust with Nixon’s prevarications and increasingly obvious obstruction. They demonstrated. They made phone calls. They talked to an eager press. But through all of it, there was a shared notion of truth—even Nixon’s replacement special counsel (the first one was actually fired for demanding the tapes), who was supposed to be supportive of the President, soon abandoned him in the face of truth and facts—and a shared sense of outrage. In our Babel-world today, we have endless battles vying for truth, to the point where it seems to stop mattering what truth really is. I have a hard time believing the pro-Palestinian Gen Z protestors chanting antisemitic slogans and pitching tents on the once-respected grounds of Columbia University have much of a grasp of the history of that conflict, or have thought very deeply about the sort of values Hamas openly espouses. Hamas isn’t really bullish on free speech, feminism, Jews, Americans, or non-Muslims.
Back to tech: if we had to use Western Union to send hateful telegrams or pay exorbitant long-distance fees to criticize our leaders, the tech wouldn’t seem so larger than life—so part of us—and perhaps we’d be more disposed to Socratic dialogues in classrooms and centuries-old discussions about souls and heaven and all the rest. Digital technology, by my lights, is a smoking gun here. Technology is to blame—Churchill famously quipped that we shape our buildings, and then they shape us—but that’s like saying cars are to blame for car crashes, or that suburbs are to blame for middle-class boredom. We’re in the car. We’re in the suburb. We’re using “AI” and all the rest. Back to AGI.
AGI Still Gives Religious Folks a Queasy Feeling. Culture Says? Get Over It.
I was on the phone recently with a friend who I know to be a good guy and an Evangelical Christian. He’s a smart, educated guy and is up to speed on computational and AI issues. We were discussing my second book project, and he was making the point that people wouldn’t use LLMs if the hallucination problem were so severe that their responses weren’t generally reliable. Eventually, we made our way to the “scaling hypothesis,” a fancy term for the idea—promoted by OpenAI and the rest of Big Tech—that simply scaling up LLMs (more data, more parameters, more compute) is a path to AGI. It’s not just Big Tech: AI enthusiasts everywhere frequently if not universally make this claim. For various reasons, I find the claim not believable (yes, I will write about my reasons in another post!). I argued that the “cognitive” architecture of neural nets using transformers did seem to show emergent intelligent properties—somehow “knowing” grammar is a good example—but that the ability of such systems to “think” at the level of propositional thoughts, goals, and plans was clearly limited. Something about the approach seems fundamentally wrong, or at least incomplete.
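To make that concrete: at bottom, an LLM is trained on one objective—predict the next token. Here’s a deliberately tiny, illustrative Python sketch. It’s a word-level bigram model, nowhere near a transformer, and the corpus is made up; but it optimizes the same kind of next-token objective, just in closed form by counting.

```python
# Toy next-token prediction. Counting bigram frequencies is the closed-form
# maximum-likelihood solution to the next-token objective for this model;
# LLMs optimize the same kind of objective by gradient descent at vast scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": tally which token follows which.
counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next token seen in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("sat"))  # 'on'  -- seen twice, so it wins
print(predict_next("the"))  # 'cat' -- four-way tie, broken by first occurrence
```

A system like this “knows” distributional facts about its corpus without anything resembling propositional thought, goals, or plans—the gap I’m pointing at. The scaling hypothesis is the bet that enough data and parameters close that gap.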
I added, in a kind of peroration, that if I were fortunate enough to happen upon the formula for true AGI—whatever it is—I’d feel pretty special and grateful that Fortuna shined her light on me as an AI scientist. What’s the point of working on AI if you don’t want it to get smarter, solving problems that our systems today still can’t manage? We don’t see aeronautical engineers pursuing fuel-efficient designs but only up to a point. “Leave something for tomorrow, Bob. Don’t solve it!” Makes no sense, right? I commented cheekily that I didn’t suppose “I’d go to hell” for scientific innovation. The problem is actually innovating. Inventing. (I should add here that my idea of achieving AGI is sans machine consciousness or even motivations, emotions, or desires. AI to me is a game of making more powerful inference.)
Here’s the point. Religious conservatives as a group are vastly more likely to be skeptical of AI futurism, and at least anecdotally I can tell you that, historically, they’ve been rooting for the other team all along—humans, us. Not AGI. That’s broadly my position, too, but not in the Paul Bunyan-and-his-Babe-the-Blue-Ox way. We’re living with our technology, and increasingly it’s doing stuff that we can’t. In a simple sense, that’s a good thing—Bunyan would be awesome with a new Stihl MS 500i chainsaw (I suppose today it’d be an electric model). It’s complicated, of course, because as I’ve said many times before, Dataism as a religion, data-driven centralized AI, the unholy alliance between Big Tech and government, social media dysfunction, and all the rest are not “Paul Bunyan” points. They’re points about stuff “going to the devil,” as Dostoevsky once wrote (his point was about the replacement of the human will with science—a slightly different but related worry).
But here again is the point: all the “going to the devil” concerns or accusations about digital technology and the web are traceable mostly to advances in AI. The point I’m making is about the powers of AI—not whether it increases suicide rates—and AI drives the web (see my addendum up top). That’s the argument I’m talking about, at least foundationally. And that ground has shifted, and our ideas have shifted, and our discussions are fragmented where there were once clear lines in the sand. AI was a sideshow before, a fun way to get at different theories of mind and views of science. Now it’s the show.
Let’s take one more pass at the original thesis: the old ways were, I think, the good ways, but no matter, because they’re largely gone. They’re gone even among folks who made a living talking about them. They’re disappearing even—it seems to me; I have no polling data—within the Judeo-Christian space. David Brooks, in the piece I mention above, claims that “Evangelicalism used to be a faith; today it’s primarily a political identity.” Sounds bad. Sounds like more Babel is coming. How do you articulate a reasoned position with “political identity” as a starting point?
As might be expected, my friends and colleagues who are right of center have largely turned to Trump as presidential timber, but there’s nothing traditionally religious about Trump, and by my admittedly outside-looking-in lights (I skipped the last two elections) he makes former President Clinton and the Monica Lewinsky scandal seem rather tame: “Oh, it was just that one intern? Gotcha. Just not a celebrity porn star or….” But delving into politics would certainly blunt my point, that everything is now politics, including philosophy and science, and that this new orientation was created by and is form-fit to the new AI-powered digital online world we all live in. Tribal outrage is the new Bertrand Russell versus C.S. Lewis. Russell would bore everyone to tears—he might actually go camp with the protestors at Columbia, so perhaps not—and Lewis wouldn’t get enough likes unless he went anti-vax and MAGA; he’d insist on talking about the problem of evil—but not about the evil Democrats.
The old dogs wouldn’t understand our world, and would get lost in our Babelly shuffle online. Alan Turing might bone up on a few decades of AI innovations and understand transformers and LLMs—he was a mathematical genius, after all. But Turing thought a conversational AI would learn like a person does, getting instructed somehow, not by optimizing an objective function with endless (stolen?) data. He thought such an AI would eventually become mind-like, which is why traditional religious thinkers found the idea incompatible with their worldview, and threatening. Turing, of course, was an atheist, and when he debated the theistic scientist Michael Polanyi, their polite arguments, while sometimes technical, would not fall prey to political-action-committee language or a Babel-like mix-and-match of ideas not thoughtfully considered. No one would gin up outrage and try to cancel Turing, or Polanyi. We used to learn this way, and the culture understood it needed to respect both sides, and listen. It doesn’t work this way anymore, and technology this century has played a major role.
Brooks is right. We’re in the moral vacuum of endless political warfare.
Maybe, too, the techno-futurists were right. Maybe we really are losing to the machines. Maybe that’s what our new Tower of Babel world is trying to tell us. What better than endless episodic, emotive “data”—text, images, and sound—for our new techno-wonder world? What good is a thoughtful treatise?
What do you think? How much is the success of AI and the web it makes possible part of our human challenges?
Erik J. Larson
Hi Eric, can you email me? I
The point of Colligo is to bring people together, and to that end I’d like to get a forum post with you and others. I’m interested in these many voices and mine is just one. Your idea strikes me as productive and fundamental, and I’ve read many comments here that indicate a very engaged and smart group. Let’s figure out a platform post to get the ideas out?
I have a dark thought. I don't know if it's worth anything.
After the introduction and widespread institution of the metric system, the myriad idiosyncratic modes of measurement appeared imprecise, backward, irrational. For these modes of measurement derived their intelligibility from workaday experience of the human body, and each emerged from a place with a particular history, particular ecology, and particular geography. But here's the thing. A place with an unusual measurement system makes a material demand on visitors: grasp the sense of that system, see the sense it makes in context. A global, uniform system of measurement is advantageous in too many respects to recount, but virtually all of them come down to efficiency. In its very effort to bring a universal, frictionless format to the proceedings, it deprives one of the need to learn how to translate a foreign measure into one's own system. Paradoxically, imposing a common "language" of measurement to free us from inefficiency at the same time releases us from the responsibility of having to be good at communicating with one another.
It's a common theme, in Nisbet and others, that the emergence of powerful, centralized superstructures and the emergence of powerless, windowless monads are mutually reinforcing. Global uniformity and global atomization both spell the disappearance of a dappled world of robust, local folkways that oblige you, if you want to deal with them, to work your way into understanding their more or less alien way of going on.
Maybe the response to Babel is not a renewed lingua franca, which might just reinforce the atomization. Indeed, perhaps we're already confronted with one, at least functionally. Algorithmic filtering and statistical prediction, imposed from above, are like a uniform measurement system in relieving us of the demand of understanding. Think of the proliferation of "aesthetics," such as Dark Academia, Light Academia, Coastal Grandmother, etc. They constitute a lingua franca not in the sense that we communicate something of ourselves with them but in the sense that they manifest communication's reduction into the universal circulation and recirculation of immediate consumables for likes.