The point of Colligo is to bring people together, and to that end I’d like to get a forum post with you and others. I’m interested in these many voices and mine is just one. Your idea strikes me as productive and fundamental, and I’ve read many comments here that indicate a very engaged and smart group. Let’s figure out a platform post to get the ideas out?
I have a dark thought. I don't know if it's worth anything.
After the introduction and widespread institution of the metric system, the myriad idiosyncratic modes of measurement appeared imprecise, backward, irrational. For these modes of measurement derived their intelligibility from workaday experience of the human body, and each emerged from a place with a particular history, particular ecology, and particular geography. But here's the thing. A place with an unusual measurement system places a material demand on visitors to grasp the sense of that system, to see the sense it makes in context. A global, uniform system of measurement is advantageous in too many respects to recount, but virtually all of them come down to efficiency. In its very effort to bring a universal, frictionless format to the proceedings, it deprives one of the need to learn how to translate a foreign measure into one's own system. Paradoxically, imposing a common "language" of measurement to free us from inefficiency at the same time releases us from the responsibility of having to be good at communicating with one another.
It's a common theme, in Nisbet and others, that the emergence of powerful, centralized superstructures and the emergence of powerless, windowless monads are mutually reinforcing. Global uniformity and global atomization both spell the disappearance of a dappled world of robust, local folkways that oblige you, if you want to deal with them, to work your way into understanding their more or less alien way of going on.
Maybe the response to Babel is not a renewed lingua franca, which might just reinforce the atomization. Indeed, perhaps we're already confronted with one, at least functionally. Algorithmic filtering and statistical prediction, imposed from above, is like a uniform measurement system in relieving us from the demand of understanding. Think of the proliferation of "aesthetics," such as Dark Academia, Light Academia, Coastal Grandmother, etc. They constitute a lingua franca not in the sense that we communicate something of ourselves with them but in the sense that they manifest communication's reduction into the universal circulation and recirculation of immediate consumables for likes.
It's okay, Erik. I'm not sure I'm making much sense with that comment. It just seems to me that the desire for a lingua franca to smooth communication and prevent fragmentation shares a lot with desires for other schemes with universal ambition, which end up eroding the conditions, structures, and incentives for greater understanding and thus conduce to the very fragmentation they intended to prevent. It's analogous to, or maybe continuous with, what Matthew Crawford collects under the epithet "deskilling."
As another example, many view the translation of non-English works of philosophy into English as an unalloyed good, not least because it allows English speakers to encounter the philosophical ideas of non-English speakers. I know I've benefitted from this. But I can also report that, now that I'm able to read philosophical German, I get much more understanding out of working to translate German into my home language of English. I feel closer to the author than otherwise, because I feel as though I'm inhabiting more of his perspective, because I really have to work to make the best sense of him I can. The imperative of charitable interpretation is built into the task in a way it is not when I'm reading something in my home language. From a certain perspective, the task looks glutted with inefficiencies. From a different perspective — the perspective I'm trying to give voice to — it looks like it channels you into better understanding.
And, look, mine is not an elitist argument that assumes we all have time to learn other languages and read things in those languages. (Besides, I know plenty of blue-collar types who've spent their off hours learning to read foreign languages, and what they get from translating one piece in another language is more than I get reading a hundred pieces in English. It's about quality of attention, not quantity of things attended to.) It's simply an example illustrating how, even with the best intentions, making things easier, smoother, more convenient, more accessible, etc. can end up making things worse because the good stuff came with the difficulty, the roughness, the inconvenience, the inaccessibility, etc.
The desire for a lingua franca is animated by the idea that we could finally be transparent to each other. But our condition is always to be partially unknown to each other — that's just what it is to encounter another person. To paraphrase Stanley Cavell, you know you've encountered the reality of another mind when you have to live with your misunderstanding of it.
I think Crawford is important, by the way. He’s one of the few people I know today who did not get sucked into the tech vortex but says things eminently relevant to the tech vortex…. A bit of a magic trick, that one!
Erik, let me offer a friendly amendment, which might offer some hope. The metric system replaced other systems of measurement, mostly. I still mostly think in English measurements. But the idea of a lingua franca does something else. Consider, in order: Greek for the Mediterranean world, then Latin until the early modern period, then English today. These languages allow far-flung peoples, and especially scholars, to converse in some fashion, which is great, but they do not replace local languages altogether.
As we struggle for a modus vivendi with our technologized world, maybe this isn't a bad model? But, to respond to where Erik started and then amended, that would probably require some agreement on "neutral" or "objective" language. This is why, imho, the decision of the NYT that it was NOT the paper of record, but was attempting to foster a better nation, was significant. One could make analogies in international law, btw.
Thanks for this insight, David. I'm being simplistic and dramatic, I realize. And romantic, in my desire for a world of proper subsidiarity.
I detest the way the current technological regime was imposed and has now insinuated itself into the details of life itself, all without so much as a public conversation about it. Only the NGO classes would ever want it to be the case that owning a mobile camera phone was required to order food at an establishment. The imposition was designed to be insulated from democratic contestation, and it was nourished and protected by misguided neoliberal policies. Since I think a public conversation about it would be desirable — even if it is belated — perhaps I should conclude that some kind of common format for talking about these things would be desirable.
But I'm really not so sure, because as much intellectual pleasure as I derive from exchanging thoughts online in a forum such as this, it seems to me that the only way shit gets done is grassroots organizing and "face-to-face" conversations that move gradually up or across spheres of authority. And unlike online conversations or viral congressional grandstanding videos or manifestos, face-to-face conversations don't require a lingua franca. For you already share a grammar of concerns and interests, so to speak, due to your sharing a form of life. Or, if you don't share such a grammar, you engage in good-faith negotiation and translation of terms, made possible because neither party is simply shouting demands in the quad to nobody in particular nor issuing diktat in speeches to his loyal subjects.
Mankind is obviously in a post-Babel condition, engaged in a constant struggle to control chaos and dissonance. We attempt to do that through creation of fragile connective tissue (democracy, diplomacy, human rights, economic interdependence, rule of law, ethics, institutions). That tissue is subject to recurrent infections and breakage, as we have seen time and time again. One would assume that the race to recreate a utopian, pre-Babel, chaos-free “New Tower” (one that provides tools that outperform humans, and is guaranteed to “benefit all humanity!”) would strengthen mankind’s connective tissue. Yet in that very race lies the potential for tragic consequences. You’ve highlighted one of the most pernicious — the harmful impact of selective algorithms. They populate the Coliseum in the shadow of the “New Tower,” reveling in “angered and polarized responses.” Panem et circenses…. Seneca foresaw this when he described how watching people slaughtered in that arena makes spectators “more greedy, more ambitious, more dissolute, and certainly more cruel and inhuman…to consort with the crowd is harmful; there is no person who does not make some vice attractive to us, or stamp it upon us, or taint us unconsciously therewith. Certainly, the greater the mob with which we mingle, the greater the danger.” The algorithmic affliction disguises itself as cruel entertainment while it relentlessly attacks our fragile connective tissue. What happens when that tissue breaks? “When the Vandals were hammering at the gates of Hippo, Saint Augustine’s city, the groans of the dying defenders on the wall mingled with the roar of the spectators in the circus, more concerned with their daily entertainment than with even their ultimate personal safety.” (Lewis Mumford, “The City in History,” 1961)
“How do you articulate a reasoned position with “political identity” as a starting point?” - I don’t think you can, and if I’ve read you correctly, I suspect that was the rhetorical point you were making there. Politics has poisoned public discourse and I think you’re right in saying that our accelerated discourse has fuelled the process exponentially. No time to think - just react!
I’m also impressed by the identification of a missing capacity to debate and arrive at a shared, accepted truth, contingent though that may well be. Harry Frankfurt’s excellent “bijou” tract “On Bullshit” sprang to mind: his thesis was that a true bullshitter just wants to win: objective truth is unimportant. So much of where we’re at chimes with that, though of course, the road travelled to get there was somewhat different than that envisioned in the book. In the end, though, here we are in the age of the bullshitter.
I liked "Here’s what happened. Politics just swallowed religion. It just swallowed science. And it just swallowed you." and would go further: politics provides the current religion, and technology (especially for communication) and AI help it in this, and are actually part of the new religion (as endless progress making some people rich and others powerful). Though IMO AI in the sense of targeting is a bit different from LLMs, which we may presume to know "all we know": in the first case it panders to our biases and emotions, and in the second it limits us to what was already known. And so we are trapped in what we know now and what we have degenerated to be, culturally, with the human ability to break out of those confines every so often, e.g. with a new religion, checkmated.
I don’t agree that technology is the root cause of our current day Tower of Babel situation. Who benefits from the Tower of Babel situation? There are people who are financially benefiting from it and there are people dying in the conflict zones. Technology and AI were misused for societally destructive objectives by a subset of people, that’s all. This subset of people created the smoke-and-mirrors communications strategy and the narrative that AI or technology is to blame, as a side show. But the main show, the daylight robbery, is progressing and even accelerating.
Hi Jana, I get it. Let me look later tonight. I’d like you to be part of this discussion and am happy that you are! It’s complicated and sometimes I just feel swamped by it all. I’ll be in touch!
Thank you Erik for the addendum. That set it straight 😊 LSD in the punch bowl, and we are all in it. I get the metaphor. In that respect, yes, it is the tech that got us “babelling”. I guess the desire to voice whatever comes to one’s mind is in all of us (it is refreshing, and liberating, I guess)🫣 However, that kind of negates all we have been trying to achieve and have achieved. I think it is up to us to start moderating ourselves. And we will. When? Not sure; we are sipping from the punch bowl… My freedom ends where your pain begins… we are all feeling the pain.
I’m in a very loud restaurant at the moment but I think you’re onto something here. It’s the paradox I’m trying to get at, which is why I bring in examples that don’t fit an obvious puzzle. We need the granularity to return, to put it crudely. I would be interested in getting these ideas out more! I’ll try to reply in more detail later.
Very nice, Erik, but "I disagree" isn't quite the right word, on a few fronts. Btw, I just sent that Wierda explanation of LLMs to my faculty.
First, the turn to politics is older, at least among the sorts of elites that comprise your world. I finished my first/biggest book, City of Gold: An Apology for Global Capitalism in a Time of Discontent, with this:
"for at least the two hundred and ten years since the French Revolution, intellectuals have sought religion in politics, including the politics of markets, and have failed to find it. Although we can and should hope to build a better polity, the City of Gold is not, and never can be, the City of God."
Second, we are seeing lots of resurgence of some sort of religiosity, and it's not just Peter Thiel, talk of the singularity, and the like. BLM and Gaza and Me Too and so forth are not "secular" phenomena. Will that lead to conflict? Probably. But the key point is that the notion that politics can be separated from religion is an oversimplification. Jefferson was wrong, if you will.
Third, I'm feeling if not entirely upbeat, at least a little better. I'm talking to scientists and others, including you, who are thinking both in a more embodied fashion (what is it like to be human?) and in more explicitly transcendent ways (what do we care about?). There's a huge amount of this on Substack. See Springtime, Weeds, Computer Science and Defense Policy.
Fourth, or maybe this is just more of two and three, we live in a time of great hype. Chess computers did not replace chess. And chess computers are good at what they do. As you've written at length, probabilistic AI isn't nearly as good at what it claims to do, and seems to be at some sort of plateau. None of the CS scientists I talk to seem to disagree fundamentally. Blockchain blockchain blockchain . . . plus ça change.
Fifth and finally (for now!), I'm reworking a years-long project I think you'll like, currently entitled "Quixote's Dinner Party." It's about finding meaning in Babel.
Anyway, you're not wrong, it's a question of voltage, imho. Keep up the good work!
I take your point about politics and religion being inseparable already, and I'm interested to have a look at your book. Let me try to shore up what I think is happening based on more praxis than theory.
We do see lots of "religiosity" today (was this Weber? What an obnoxious word), but in the straightforward "war against materialism" way, the new prize seems to go to Islam. Hamas seems pretty religious, to take what's in the news. And yes, my friends and colleagues who are Evangelicals or what have you still believe and take seriously core Judeo-Christian teachings and beliefs. But the LANGUAGE of Evangelicals is morphing toward the online experience, and into more explicitly political prose. As David Brooks put it, Evangelicals are quietly but somewhat rapidly becoming a political movement: politics first, politics is the discussion.

Muslims still seem more intent on speaking in a religious-first way, and though of course they're political, they're also, at least in the Middle East countries of which we could quickly name a few, less beaten down by a constant and perpetual online experience that's more interested in political battle than religious war (perhaps we need an edgy comedian here; they seem to get at the point more easily). If religious purity were the ideal, Evangelicals would have jumped ship from the MAGA crowd a long time ago, which of course is entirely political and ... well, I'll leave it there (by my own thesis, I'm walking into a war).

So in our case, the transformation of our world by technology has pushed still-True-Believers to use and engage in a more "Tower of Babel" language suitable for fighting Democrats online, or vice versa. I actually know this is more or less true, from a small but I think reliable sample anyway, because I have friends who simply abandoned having debates with Dan Dennett types and now use LLMs like everyone else. They even sometimes admonish me for thinking they won't get better in certain ways. Wow!
My view is that they've shifted not entirely consciously into a different type of language that's more suitable for political battles and are notably less literal about propositional claims about what happened thousands of years ago--though of course if you ask them this--prompt them in the right context, as it were--they will produce all of these points as always.
I can't write another Ph.D. thesis to get the point across, of course, but something like that HAS happened, in my view, and quite obviously. My comment here is an observation based on what you commented on; my overall thesis I'm throwing out there is that technology has played a HUGE role in how this has transpired, and to not see this would be by my lights somewhat amazing! We can argue about details, to be sure. I'm sure I have much to learn. Thanks, David.
And by the way, David, I take your point as I understand it that politics and religion have been in every era inherently tied up with one another. I was a bit young in the 1980s, but certainly recall how in the 1990s you had a decision procedure for deciding, given what someone believed about religion, who they'd vote for, what they believed about regulation and capitalism, and all the rest. What I'm trying to get at is how it's somehow DIFFERENT today, and the difference seems to me to be quite obviously the rapid seachanges in our digital world. The point about AI is handy, because up until the last few years it was a punching bag for godless materialism; now it's what we use to find information. I think that's the best I can do!
Perhaps I should start a new thread with this, but let me say one more thing here in hopes it will help. Let me switch examples in hopes my little idea will become clearer. In the mid-1990s (say), if you asked a traditional religious person (include here Christianity, Judaism, Islam, but I suppose not Buddhism) and a scientific materialist whether Darwinism were true, with rare exception you'd get a nice line in the sand and a debate going. The former would insist Darwinism is an undirected process and therefore must be false because of theological reasons A, B, C and empirical evidence X, Y, Z, and the latter would say... the opposite. It must be true. That's what would happen. Today, in the Tower of Babel world, some clever chap would chime in before the debate had a prayer (no pun), insisting that UFOs likely seeded the planet, something about panspermia, and when all that ran out of steam, would take up an impassioned explanation of a "well-known" conspiracy to cover up something pertinent to the issue under discussion. After a few hours of this nonsense, the original line-in-the-sand folks would walk away. What will they be likely, more and more, to do? Something political and complaining and angry. An "F-you." Eventually the entire intellectual landscape shifts. I see this happening over and over again. We can't fix this if we don't see it clearly! And, sadly, maybe it's just not fixable anymore. I dunno.
I do not see this as true, though. If you go to somewhere like mindmatters, you see traditional creationism and the like; or, if panspermia is advanced, it is as a way to provide a "third road out."
I don't doubt that people have felt the need to compromise for their terminal objectives, Trumpism being an example of it: you are much more likely to accept his sins if you feel drag queen hour is a bigger issue, and, e.g., I am willing to compromise on almost any issue for human survival.
But that seems only natural: the process of compromising for "terminal goals", which for many on the religious side may indeed be "being on Team Human."
I hear you about art long and life short! And I agree with many of your observations. A medium-sized response and a bigger one. Political Islam is really, really interesting, and something I did a fair amount with back in the day. The big mind here is Olivier Roy, who coined the phrase. Some of this turns up in my books on security, one of which is expressly addressed to US responses to Islamist violence (I wouldn't recommend reading it unless you want to go back down the security rabbit hole!)
But the real problem/opportunity you point to -- and have been pointing to -- is just how hard it is to talk about technology. It's amazing how much more philosophy of science we have than philosophy of technology. So AI just kind of happens, and people move on . . . it's hard to even ask the question, as I'm finding on my faculty. This will be done . . .
So without a theory, or something less, a kind of articulation, we have real difficulty talking about technology in any way much more sophisticated than "maybe it will cure cancer" or "maybe it will unemploy everyone". Some basic utilitarian speculation, some recycled alienation, that's about it . . .
Which, to be encouraging, is what I hope you are working on . . .
I have a different perspective on what you’re describing. I genuinely don’t feel that digital technology has razed the kinds of debates and big metaphysical ideas that were current in the 20th century. Rather, I think these ideas themselves have run out of steam, at least on the terms that we grew up with. They no longer give us a reasonably accurate picture of our impulses and the world we live in, and it is their inadequacies to this historical moment that have precipitated their failure. Digital technology may have exposed and hastened their demise, but it is not a primary cause. The obsession with AI reflects, but has not precipitated, a kind of intellectual and spiritual exhaustion.
People turn to politics because, without these frameworks, the cultural situation is genuinely confusing, and politics is very close to the surface as a cultural phenomenon, and therefore easy to access intellectually, and easy to participate in. Unfortunately, this also means that our politics constantly threatens to become purely destructive and reactionary. Trump is a synecdoche of this problem (as I’ve written about here: https://www.arcdigital.media/p/that-you-havent-been-told), and what he offers is an imaginary escape from responsibility. I think the “polarization” argument you’re suggesting here misses the grim outcomes that will be visited upon us if Trump and his MAGA cronies take control of our institutions -- instead of normie Democrats and the vestige of “Haley Republicans” that haven’t left the GOP. The end state of Trumpism is Putinism. Any other group in power will leave our institutions in their status quo – which can be messy, confusing and dissatisfying, but where individuals are basically free to go about their business. If everything in culture is political, your average citizen will have the political sophistication of a small child, with all the tantrums and tyrannical urges that come at that age.
In several essays, I’ve discussed my theory of why the old ideas are failing (reading this material might give more context to my perspective). We’re only a few hundred years into industrialization, and this change is not just an economic shift but a change in every aspect of culture and in the way the world presents itself to humans psychologically. Our main religious ideas are all agrarian in their structure and purpose, and therefore not well suited to the new forms of culture and human experience. They won’t survive prolonged contact with modernity.
If you accede to my framing, then a course of action presents itself. Come up with something different! Something that isn’t the same thing that everybody has been saying for a long time. If there’s not a gun pointed at your face, you’re not required to be spiritually exhausted by events in the world. You can choose not to be co-opted; all you have to do is wander off and try something new. Maybe you too will fail, but if you succeed, you can share what you discover with whomever is interested.
One of my favorite poems by William Carlos Williams is this passage from Paterson:
Without invention nothing is well spaced,
unless the mind change, unless
the stars are new measured, according
to their relative positions, the
line will not change, the necessity
will not matriculate: unless there is
a new mind there cannot be a new
line, the old will go on
repeating itself with recurring
deadliness: without invention
nothing lies under the witch-hazel
bush, the alder does not grow from among
the hummocks margining the all
but spent channel of the old swale,
the small foot-prints
of the mice under the overhanging
tufts of the bunch-grass will not
appear: without invention the line
will never again take on its ancient
divisions when the word, a supple word,
lived in it, crumbled now to chalk.
If you want a starting place, try this:
What is consciousness, and why is it a “problem”?
I didn’t write this in a public thread, but I emailed you some thoughts that I have on this subject that I haven’t seen in academic or public discourse. You or anybody else is free to take a stab at outlining your own ideas, and we can argue about it (in the sense of intellectual debate, not rancorous mudslinging) – it’s not like academic or public discourse has been particularly successful on this topic.
For those who feel that AI may indeed be an emergent threat to human thriving, existential or otherwise, and wish we had some way to do something about it, I would like to link PauseAI and the upcoming protests here.
Have you seen the EU Parliament building? This shows that the differences of "identity" are so great that the atheists will take a symbol from a story about "bad folk" in the Bible and think it's "good" marketing. Just like the Apple logo.
I think if a soulless machine has a mind, it is pretty much a death sentence for humanity. As someone on the side of humanity, I'm sure you can see why this is a bad thing.
And I felt your discussion was a bit confused, but isn't it obvious that the Tower of Babel, even before AI, is increasingly harming humans on net?
Should we see this as a good thing? I admit, as a theist and a traditional Christian, the entire idea of a soulless mind is deeply disturbing, and certainly the ideas of human extinction or "mind uploading" additionally so.
I'm no fan of b.s. like "mind uploading," and I tried to make a distinction between soul/consciousness and inference (how "smart" it can behave). I do not believe that computational systems can have minds, like we do. I see no reason to believe that, even though it seems the inferential question--how powerful they can behave--is getting gradually won by AI, because the systems are getting vastly more powerful (but not mindlike). Does it make sense? I'm not arguing for machines with minds.
But something mindlike enough can nonetheless be an existential risk, regardless of whether it is a "true mind." I mean, an adequately powerful virus is an existential risk, even if it has no mind but only mindlike planning and mutating skills.
This is where I think your argument about "I should build AGI if I can" falls flat. Because that can very well be building "death of all organic life" and I think that is not a good thing :)
That's a good point. So--and I'm not being coy here--if you were an AI scientist, what would you do? The typical response from folks in the field is to try to build in safeguards, and that whole process I find a bit silly. My point in the post was counterfactual: IF I somehow had the idea of the century, I would consider myself a successful AI scientist. You raise an interesting objection to my line of thought there! I'm not sure what to say!
I suppose I would say that if we scaled or invented a general intelligence, it still wouldn't have motivations or consciousness. So in that sense, the existential risk factor wouldn't have any sci-fi to it, so to speak. It would be a matter like nuclear weapons--dangerous but not motivated to wipe us out on its own. If an AGI actually had motivations and desires, we'd be in big trouble, but to me that's almost non-sensical. It's an adding machine, made superfast and engineered for cognitive outcomes. That's the best I can do for now!
1) If I were a scientist who discovered AGI, I would do everything possible to stop it from existing. My ego isn't worth human extinction.
2) I am not sure "motivation" as a complex entity is needed. I am glad to consider the soulless being mindless, but even something as surely mindless as a social engagement algorithm has a "motivation," which is to maximally promote engagement so that "it will survive," i.e. so that the company will not use another algorithm.
Likewise, AGI exists to replace humans, so even if it doesn't have the motivation, by the existence of its superior characteristics it will select humans out of existence.
3) I think, as with Hinton, that anything with a goal function that can set subgoals can be very deadly; see also instrumental convergence.
In short, if it ever can do everything humans can, we are probably boned.
Also, I like Yoshua's idea for AGI: a purely isolated science entity with no awareness of the real world. This means, of course, absolute non-deployment outside of CERN-like entities. Narrow AI released for use after testing would be vastly safer and innately capability-bound.
Hi Eric, can you email me? I
Will do, Erik.
I think it's at the bottom of his guest post, Erik.
Hey Eric! I have to think on this! My head currently hurts!
It's okay, Erik. I'm not sure I'm making much sense with that comment. It just seems to me that the desire for a lingua franca to smooth communication and prevent fragmentation shares a lot with desires for other schemes with universal ambition, which end up eroding the conditions, structures, and incentives for greater understanding and thus conduce to the very fragmentation they intended to prevent. It's analogous to, or maybe continuous with, what Matthew Crawford collects under the epithet "deskilling."
As another example, many view the translation of non-English works of philosophy into English as an unalloyed good, not least because it allows English speakers to encounter the philosophical ideas of non-English speakers. I know I've benefitted from this. But I can also report that, now that I'm able to read philosophical German, I get much more understanding out of working to translate German into my home language of English. I feel closer to the author than otherwise, because I feel as though I'm inhabiting more of his perspective, because I really have to work to make the best sense of him I can. The imperative of charitable interpretation is built into the task in a way it is not when I'm reading something in my home language. From a certain perspective, the task looks glutted with inefficiencies. From a different perspective — the perspective I'm trying to give voice to — it looks like it channels you into better understanding.
And, look, mine is not an elitist argument that assumes we all have time to learn other languages and read things in those languages. (Besides, I know plenty of blue-collar types who've spent their off hours learning to read foreign languages, and what they get from translating one piece in another language is more than I get reading a hundred pieces in English. It's about quality of attention, not quantity of things attended to.) It's simply an example illustrating how, even with the best intentions, making things easier, smoother, more convenient, more accessible, etc. can end up making things worse because the good stuff came with the difficulty, the roughness, the inconvenience, the inaccessibility, etc.
The desire for a lingua franca is animated by the idea that we could finally be transparent to each other. But our condition is always to be partially unknown to each other — that's just what it is to encounter another person. To paraphrase Stanley Cavell, you know you've encountered the reality of another mind when you have to live with your misunderstanding of it.
I think Crawford is important, by the way. He’s one of the few people I know today who did not get sucked into the tech vortex but says things eminently relevant to the tech vortex…. A bit of a magic trick, that one!
Eric, let me offer a friendly amendment, which might offer some hope. The metric system mostly replaced other systems of measurement. I still mostly think in English measurements. But if we consider the idea of a lingua franca, it does something else. Consider, in order, Greek in the Mediterranean world, then Latin until the early modern period, and English today. These languages allow far-flung peoples, and especially scholars, to converse in some fashion, which is great, but they do not replace local languages altogether.
As we struggle for a modus vivendi with our technologized world, maybe this isn't a bad model? But, to respond to where Erik started and then amended, that would probably require some agreement on "neutral" or "objective" language. This is why, imho, the decision of the NYT that it was NOT the paper of record, but was attempting to foster a better nation, was significant. One could make analogies in international law, btw.
Thanks for this insight, David. I'm being simplistic and dramatic, I realize. And romantic, in my desire for a world of proper subsidiarity.
I detest the way the current technological regime was imposed and has now insinuated itself into the details of life itself, all without so much as a public conversation about it. Only the NGO classes would ever want it to be the case that owning a mobile camera phone was required to order food at an establishment. The imposition was designed to be insulated from democratic contestation, and it was nourished and protected by misguided neoliberal policies. Since I think a public conversation about it would be desirable — even if it is belated — perhaps I should conclude that some kind of common format for talking about these things would be desirable.
But I'm really not so sure, because as much intellectual pleasure as I derive from exchanging thoughts online in a forum such as this, it seems to me that the only way shit gets done is grassroots organizing and "face-to-face" conversations that move gradually up or across spheres of authority. And unlike online conversations or viral congressional grandstanding videos or manifestos, face-to-face conversations don't require a lingua franca. For you already share a grammar of concerns and interests, so to speak, due to your sharing a form of life. Or, if you don't share such a grammar, you engage in good-faith negotiation and translation of terms, made possible because neither party is simply shouting demands in the quad to nobody in particular nor issuing diktat in speeches to his loyal subjects.
I dunno. Like I said, I'm just not sure.
Mankind is obviously in a post-Babel condition, engaged in a constant struggle to control chaos and dissonance. We attempt to do that through the creation of fragile connective tissue (democracy, diplomacy, human rights, economic interdependence, rule of law, ethics, institutions). That tissue is subject to recurrent infections and breakage, as we have seen time and time again. One would assume that the race to recreate a utopian, pre-Babel, chaos-free “New Tower” (one that provides tools that outperform humans, and is guaranteed to “benefit all humanity!”) would strengthen mankind’s connective tissue. Yet in that very race lies the potential for tragic consequences. You’ve highlighted one of the most pernicious — the harmful impact of selective algorithms. They populate the Coliseum in the shadow of the “New Tower” — reveling in “angered and polarized responses.” Panem et circenses…. Seneca foresaw this when he described how watching people slaughtered in that arena makes spectators “more greedy, more ambitious, more dissolute, and certainly more cruel and inhuman…to consort with the crowd is harmful; there is no person who does not make some vice attractive to us, or stamp it upon us, or taint us unconsciously therewith. Certainly, the greater the mob with which we mingle, the greater the danger.” The algorithmic affliction disguises itself as cruel entertainment while it relentlessly attacks our fragile connective tissue. What happens when that tissue breaks? “When the Vandals were hammering at the gates of Hippo, Saint Augustine’s city, the groans of the dying defenders on the wall mingled with the roar of the spectators in the circus, more concerned with their daily entertainment than with even their ultimate personal safety.” (Lewis Mumford, “The City in History,” 1961)
I'm a Mumford fan; thank you for this. Let me expound later! Your Rome comment was apt in my view.
“How do you articulate a reasoned position with “political identity” as a starting point?” - I don’t think you can, and if I’ve read you correctly, I suspect that was the rhetorical point you were making there. Politics has poisoned public discourse and I think you’re right in saying that our accelerated discourse has fuelled the process exponentially. No time to think - just react!
I’m also impressed by the identification of a missing capacity to debate and arrive at a shared, accepted truth, contingent though that may well be. Harry Frankfurt’s excellent “bijou” tract “On Bullshit” sprang to mind: his thesis was that a true bullshitter just wants to win: objective truth is unimportant. So much of where we’re at chimes with that, though of course, the road travelled to get there was somewhat different than that envisioned in the book. In the end, though, here we are in the age of the bullshitter.
I liked "Here’s what happened. Politics just swallowed religion. It just swallowed science. And it just swallowed you." and would go further: politics provides the current religion, and technology (especially for communication) and AI help it in this, and are actually part of the new religion (as endless progress making some people rich and others powerful). Though IMO AI in the sense of targeting is a bit different from LLMs, which we may presume to know "all we know": in the first case pandering to our biases and emotions, and in the second, limiting us to what was already known. And so we are trapped in what we know now and what we have degenerated to be, culturally, with the human ability to break out of those confines every so often, e.g. with a new religion, checkmated.
I don’t agree that technology is the root cause of our current-day Tower of Babel situation. Who benefits from the Tower of Babel situation? There are people who are financially benefiting from it, and there are people dying in the conflict zones. Technology and AI were misused for societally destructive objectives by a subset of people, that’s all. This subset of people created the smoke-and-mirrors communications strategy and the narrative that AI or technology is to blame, as a side show. But the main show, the daylight robbery, is progressing and even accelerating.
Hi Jana, I get it. Let me look later tonight and I’d like you to be part of this discussion and am happy that you are! It’s complicated and sometimes I just feel swamped by it all. I’ll be in touch!
Thank you Erik for the addendum. That set it straight 😊 LSD in the punch bowl, and we are all drinking it. I get the metaphor. In that respect, yes, it is the tech that got us “babelling”. I guess the desire to voice whatever comes to one’s mind is in all of us (it is refreshing, and liberating, I guess) 🫣 however, that kind of negates all we have been trying to achieve and have achieved. I think it is up to us to start moderating ourselves. And we will. When? Not sure; we are still sipping from the punch bowl… My freedom ends where your pain begins… we are all feeling the pain.
Erik I am grateful to you for starting these conversations. It is thought provoking. Thank you.
I’m in a very loud restaurant at the moment, but I think you’re onto something here. It’s the paradox I’m trying to get at, which is why I bring in examples that don’t fit an obvious puzzle. We need the granularity to return, to put it crudely. I would be interested in getting these ideas out more! I’ll try to reply in more detail later.
Very nice, Erik, but "I disagree" isn't quite the right word, on a few fronts. Btw, I just sent that Wierda explanation of LLMs to my faculty.
First, the turn to politics is older, at least among the sorts of elites that comprise your world. I finished my first/biggest book, City of Gold: An Apology for Global Capitalism in a Time of Discontent, with this:
"for at least the two hundred and ten years since the French Revolution, intellectuals have sought religion in politics, including the politics of markets, and have failed to find it. Although we can and should hope to build a better polity, the City of Gold is not, and never can be, the City of God."
https://www.davidawestbrook.com/city-of-gold.html
Second, we are seeing lots of resurgence of some sort of religiosity, and it's not just Peter Thiel, talk of the singularity, and the like. BLM and Gaza and Me Too and so forth are not "secular" phenomena. Will that lead to conflict? Probably. But the key point is that the notion that politics can be separated from religion is an oversimplification. Jefferson was wrong, if you will.
Third, I'm feeling if not entirely upbeat, at least a little better. I'm talking to scientists and others, including you, who are thinking both in more embodied fashion, what is it like to be human, and in more explicitly transcendent ways: what do we care about? There's a huge amount of this on substack. See Springtime, Weeds, Computer Science and Defense Policy,
https://davidawestbrook.substack.com/p/springtime-weeds-computers-and-defense
Fourth, or maybe this is just more of two and three, we live in a time of great hype. Chess computers did not replace chess. And chess computers are good at what they do. As you've written at length, probabilistic AI isn't nearly as good at what it claims to do, and seems to be at some sort of plateau. None of the CS scientists I talk to seem to disagree fundamentally. Blockchain blockchain blockchain . . . plus ça change.
Fifth and finally (for now!), I'm reworking a years-long project I think you'll like, currently entitled "Quixote's Dinner Party." It's about finding meaning in Babel.
Anyway, you're not wrong, it's a question of voltage, imho. Keep up the good work!
Hi David,
I take your point about politics and religion being inseparable already, and I'm interested to have a look at your book. Let me try to shore up what I think is happening based on more praxis than theory.
We do see lots of "religiosity" today (was this Weber? What an obnoxious word), but in the straightforward "war against materialism" way, the new prize seems like it goes to Islam. Hamas seems pretty religious, to take what's in the news. And yes, my friends and colleagues who are Evangelicals or what have you still believe and take seriously core Judeo-Christian teachings and beliefs. But the LANGUAGE of Evangelicals is morphing to match the online experience, and it's morphing into more explicitly political prose. As David Brooks put it, Evangelicals are quietly but somewhat rapidly becoming a political movement--politics first, politics is the discussion. Muslims still seem more intent on speaking in a religious-first way, and though of course they're political, they're also, at least in the Middle Eastern countries of which we could quickly name a few, less beaten down by a constant and perpetual online experience that's more interested in political battle than religious war (perhaps we need an edgy comedian here; they seem to get at the point more easily). If religious purity were the ideal, Evangelicals would have jumped ship from the MAGA crowd a long time ago, which of course is entirely political and ... well, I'll leave it there (by my own thesis, I'm walking into a war). So in our case, the transformation of our world by technology has pushed still-True-Believers to use and engage in a more "Tower of Babel" language suitable for fighting Democrats online, or vice versa. I actually know this is more or less true, from a small but I think reliable sample anyway, because I have friends who simply abandoned having debates with Dan Dennett types and now use LLMs like everyone else. They even sometimes admonish me for thinking they won't get better in certain ways. Wow!
My view is that they've shifted not entirely consciously into a different type of language that's more suitable for political battles and are notably less literal about propositional claims about what happened thousands of years ago--though of course if you ask them this--prompt them in the right context, as it were--they will produce all of these points as always.
I can't write another Ph.D. thesis to get the point across, of course, but something like that HAS happened, in my view, and quite obviously. My comment here is an observation based on what you commented on; my overall thesis I'm throwing out there is that technology has played a HUGE role in how this has transpired, and to not see this would be by my lights somewhat amazing! We can argue about details, to be sure. I'm sure I have much to learn. Thanks, David.
And by the way, David, I take your point as I understand it that politics and religion have been in every era inherently tied up with one another. I was a bit young in the 1980s, but certainly recall how in the 1990s you had a decision procedure for deciding, given what someone believed about religion, who they'd vote for, what they believed about regulation and capitalism, and all the rest. What I'm trying to get at is how it's somehow DIFFERENT today, and the difference seems to me to be quite obviously the rapid seachanges in our digital world. The point about AI is handy, because up until the last few years that was a punching bag for godless materialism; now it's what we use to find information. I think that's the best I can do!
Perhaps I should start a new thread with this, but let me say one more thing here in hopes it will help. Let me switch examples in hopes my little idea will become clearer. In the mid-1990s (say), if you asked a traditional religious person (include here Christianity, Judaism, Islam but I suppose not Buddhism) and a scientific materialist whether Darwinism were true, with rare exception you'd get a nice line in the sand and a debate going. The former would insist Darwinism is an undirected process and therefore must be false because of theological reasons A,B,C and empirical evidence X,Y,Z, and the latter would say.... the opposite. It must be true. That's what would happen. Today, in the Tower of Babel world, some clever chap would chime in before the debate had a prayer (no pun), insisting that UFOs likely seeded the planet, something about panspermia, and when all that ran out of steam, take up an impassioned explanation of a "well-known" conspiracy to cover up something pertinent to the issue. After a few hours of this nonsense, the original line-in-the-sand folks would walk away. What will they be likely, more and more, to do? Something political and complaining and angry. An "F-you." Eventually the entire intellectual landscape shifts. I see this happening over and over again. We can't fix this if we don't see this clearly! And, sadly, maybe it's just not fixable anymore. I dunno.
I do not see this as true, though: if you go somewhere like mindmatters, you see traditional creationism and the like; or, if panspermia is advanced, it is as a way to provide a "third road out."
I don't doubt that people have felt the need to compromise for their terminal objectives, Trumpism being an example of it: you are much more likely to accept his sins if you feel drag queen hour is a bigger issue. E.g., I am willing to compromise on almost any issue for human survival.
But that seems only natural: the process of compromising for "terminal goals", which for many on the religious side may indeed be "being on Team Human."
I hear you about art long and life short! And I agree with many of your observations. A medium sized response and a bigger one. Political Islam is really, really interesting, and something I did a fair amount with back in the day. The big mind here is Olivier Roy, who coined the phrase. Some of this turns up in my books on security, one of which is expressly addressed to US responses to Islamist violence (I wouldn't recommend reading unless you want to go back down the security rabbit hole!)
But the real problem/opportunity you point to -- and have been pointing to -- is just how hard it is to talk about technology. It's amazing how much more philosophy of science we have than philosophy of technology. So AI just kind of happens, and people move on . . . it's hard to even ask the question, as I'm finding on my faculty. This will be done . . .
So without a theory, or something less, a kind of articulation, we have real difficulty talking about technology in any way much more sophisticated than "maybe it will cure cancer" or "maybe it will unemploy everyone". Some basic utilitarian speculation, some recycled alienation, that's about it . . .
Which, to be encouraging, is what I hope you are working on . . .
Well, you can count me as a semi-traditional theistic type that remains literally anti-AI. I will die and go to God, I do hope :)
Hmmm, I liked the new preface or whatever you call that! But don't think I get to "like" again.
Erik—
I have a different perspective on what you’re describing. I genuinely don’t feel that digital technology has razed the kinds of debates and big metaphysical ideas that were current in the 20th century. Rather, I think these ideas themselves have run out of steam, at least on the terms that we grew up with. They no longer give us a reasonably accurate picture of our impulses and the world we live in, and it is their inadequacies to this historical moment that have precipitated their failure. Digital technology may have exposed and hastened their demise, but it is not a primary cause. The obsession with AI reflects, but has not precipitated, a kind of intellectual and spiritual exhaustion.
People turn to politics because, without these frameworks, the cultural situation is genuinely confusing, and politics is very close to the surface as a cultural phenomenon, and therefore easy to access intellectually, and easy to participate in. Unfortunately, this also means that our politics constantly threatens to become purely destructive and reactionary. Trump is a synecdoche of this problem (as I’ve written about here: https://www.arcdigital.media/p/that-you-havent-been-told), and what he offers is an imaginary escape from responsibility. I think the “polarization” argument you’re suggesting here misses the grim outcomes that will be visited upon us if Trump and his MAGA cronies take control of our institutions -- instead of normie Democrats and the vestige of “Haley Republicans” that haven’t left the GOP. The end state of Trumpism is Putinism. Any other group in power will leave our institutions in their status quo – which can be messy, confusing and dissatisfying, but where individuals are basically free to go about their business. If everything in culture is political, your average citizen will have the political sophistication of a small child, with all the tantrums and tyrannical urges that come at that age.
In several essays, I’ve discussed my theory of why the old ideas are failing (reading this material might give more context to my perspective). We’re only a few hundred years into industrialization, and this change is not just an economic shift but a change in every aspect of culture and in the way the world presents itself to humans psychologically. Our main religious ideas are all agrarian in their structure and purpose, and therefore not well suited to the new forms of culture and human experience. They won’t survive prolonged contact with modernity.
If you accede to my framing, then a course of action presents itself. Come up with something different! Something that isn’t the same thing that everybody has been saying for a long time. If there’s not a gun pointed at your face, you’re not required to be spiritually exhausted by events in the world. You can choose not to be co-opted; all you have to do is wander off and try something new. Maybe you too will fail, but if you succeed, you can share what you discover with whomever is interested.
One of my favorite poems by William Carlos Williams comes from a piece out of Paterson:
Without invention nothing is well spaced,
unless the mind change, unless
the stars are new measured, according
to their relative positions, the
line will not change, the necessity
will not matriculate: unless there is
a new mind there cannot be a new
line, the old will go on
repeating itself with recurring
deadliness: without invention
nothing lies under the witch-hazel
bush, the alder does not grow from among
the hummocks margining the all
but spent channel of the old swale,
the small foot-prints
of the mice under the overhanging
tufts of the bunch-grass will not
appear: without invention the line
will never again take on its ancient
divisions when the word, a supple word,
lived in it, crumbled now to chalk.
If you want a starting place, try this:
What is consciousness, and why is it a “problem”?
I didn’t write this in a public thread, but I emailed you some thoughts that I have on this subject that I haven’t seen in academic or public discourse. You or anybody else is free to take a stab at outlining your own ideas, and we can argue about it (in the sense of intellectual debate, not rancorous mudslinging) – it’s not like academic or public discourse has been particularly successful on this topic.
For those who feel that AI may indeed be an emergent threat to human thriving, existential or otherwise, and wish we had some way to do something about it, I would like to link to PauseAI and the upcoming protests here.
https://pauseai.info/2024-may
We also coordinate and may have more protests going in more cities if you wish to help:
https://discord.com/invite/dUAxJfRB
We currently have around a thousand members but have been increasing rapidly.
Have you seen the EU Parliament building? It shows that the differences of "identity" are so great that atheists will take a symbol from a story about "bad folk" in the Bible and think it's "good" marketing. Just like the Apple logo.
I think if a soulless machine has a mind, it is pretty much a death sentence to humanity. As someone on the side of humanity, I'm sure you could see why this is a bad thing.
And I felt your discussion was a bit confused, but isn't it obvious that the Tower of Babel, even before AI, is increasingly harming humans on net?
Should we see this as a good thing? I admit, as a theist and a traditional Christian, the entire soulless mind is deeply disturbing and certainly the ideas of human extinction or "mind uploading" additionally so.
Hi shon pan,
I'm no fan of b.s. like "mind uploading," and I tried to make a distinction between soul/consciousness and inference (how "smart" it can behave). I do not believe that computational systems can have minds, like we do. I see no reason to believe that, even though it seems the inferential question--how powerful they can behave--is getting gradually won by AI, because the systems are getting vastly more powerful (but not mindlike). Does it make sense? I'm not arguing for machines with minds.
But something mindlike enough can nonetheless be an existential risk, regardless of whether it is a "true mind." I mean, an adequately powerful virus is an existential risk, even if it has no mind but only mindlike planning and mutating skills.
This is where I think your argument about "I should build AGI if I can" falls flat. Because that can very well be building "death of all organic life" and I think that is not a good thing :)
That's a good point. So--and I'm not being coy here--if you were an AI scientist, what would you do? The typical response from folks in the field is to try to build in safeguards, and that whole process I find a bit silly. My point in the post was counterfactual: IF I somehow had the idea of the century, I would consider myself a successful AI scientist. You raise an interesting objection to my line of thought there! I'm not sure what to say!
I suppose I would say that if we scaled or invented a general intelligence, it still wouldn't have motivations or consciousness. So in that sense, the existential risk factor wouldn't have any sci-fi to it, so to speak. It would be a matter like nuclear weapons--dangerous but not motivated to wipe us out on its own. If an AGI actually had motivations and desires, we'd be in big trouble, but to me that's almost nonsensical. It's an adding machine, made superfast and engineered for cognitive outcomes. That's the best I can do for now!
To answer your questions:
1) If I were a scientist who discovered AGI, I would do everything possible to stop it from existing. My ego isn't worth human extinction.
2) I am not sure "motivation" as a complex entity is needed. I am glad to consider the soulless being mindless, but even something as surely mindless as a social-engagement algorithm has a "motivation," which is to maximally promote engagement so that "it will survive," so that the company will not replace it with another algorithm.
Likewise, AGI exists to replace humans, so even if it doesn't have the motivation, by the existence of its superior characteristics it will select humans out of existence.
3) I think as with Hinton that anything with a goal function that can set subgoals can be very deadly, see also instrumental convergence.
In short, if it ever can do everything humans can, we are probably boned.
Also I would like Yoshua's idea for AGI: a purely isolated science entity with no awareness of the real world. This means, of course, absolute non-deployment outside of CERN-like entities. Narrow AI released for use after testing would be vastly safer and innately capability bound.