30 Comments
Feb 2 · Liked by Erik J Larson

Please use correct spelling: Cheat-GPT…

Feb 1 · Liked by Erik J Larson

There are likely several reasons why you found academic philosophers to be thusly derelict. Here are a few.

(1) Academic philosophers who are interested in philosophizing about current issues (such as LLMs) are pretty much all thinking about current issues in terms of race and gender. (Why they're doing so is another story.) Hence the focus on bias.

(2) Academic philosophers, along with others in the academy who teach writing-intensive courses, are scrambling to figure out how to deal with two stubborn facts: that LLMs exist and that if they exist, students will use them. As far as I can tell, resigned acceptance is the most common attitude. Hence the felt futility of ethico-cultural arguments against LLMs.

(3) Since at least the early '70s, there's been an entire industry in academic philosophy that has been churning out papers whose arguments have their dialectical basis in the arguments and counterarguments of Thomas Nagel, David Chalmers, John Searle, Frank Jackson, Daniel Dennett, and the like. The topic has been, very broadly, the metaphysics of mindedness vs. unmindedness. These industries have a way of just . . . petering out. Academic philosophers might just be tired of talking about it. Hence the lack of enthusiasm about the big questions you gesture toward.

Author · Feb 3 · edited Feb 3

Hi Eric,

I appreciate these points. Let me take them up here:

(1) That's a pretty wacky reductionism for philosophers. Of course we all know this is going on, and it no doubt does affect things. We might be better served if we were shoehorned into counting the number of angels on the head of a pin with the Scholastics--you didn't know it was X?!!!! For shame! Yes, I don't know what to say other than that Marx's great contribution was not to economics but to the intellectually bankrupt view of history as a saga of victims and oppressors.

(2) I agree. Philosophers have been resigning themselves to science, in a mostly grab-your-ankles fashion, for most of the 20th century. One could see logical positivism this way, the Vienna Circle and all that business about functionalism in Phil Mind as so many attempts to stay relevant with the center of gravity at empirical science. I studied under David Chalmers for a year (visiting scholar) at the University of Arizona in Tucson (Go Wildcats!), in large part because I wanted to see Tucson and... wait, in large part because he offered a distinctively philosophical argument that scientists felt obliged to take seriously, more or less. Dennett dismissed it tout court, of course, but Dennett had to field questions about it everywhere he and his beard went. (By the way, I once asked Dennett if modern data-driven AI solved the "frame problem," and his response was no, but perhaps "Bayes Nets showed some promise.") He was never one to worry about sounding dismissive.

(3) I understand this well enough, but note how the implication is rather risible: they're tired of talking about the big questions, which are the point of their discipline. Imagine if 1980s philosophers grew tired of Searle's Chinese Room and Grue and all the rest, and started a vigorous departmental debate about Betamax versus VHS for home movie viewing. Whether the mind is material or immaterial, whether monism or dualism or something else is the really real, certainly does sound tedious after a few thousand years, but the answer seems to be to find more FUNDAMENTAL questions to ask, rather than jump into gender and race politics or silly relevance-seeking forays into large language models. I take your point, of course. It's just an out-of-the-frying-pan-into-the-fire sort of problem for them, at least by my lights.

Thanks as always, Eric.

Erik

Feb 3 · edited Feb 3 · Liked by Erik J Larson

Despite the current or former jobs of some of my friends and family, I have serious doubts that philosophy as such belongs in the university. (As opposed to sufficient command of a discipline to be able to discuss its significance outside the discipline, meta-statements, the original meaning of "Doctor of Philosophy in _______.") One might say similar things about writing, the arts, and so forth. But one of the tendencies of the modern world is to turn everything into a credential within a bureaucratized university.

Feb 10 · Liked by Erik J Larson

> Marx's great contribution was not to economics but to the intellectually bankrupt view of history as a saga of victims and oppressors.

I'm by no means a Marxist, and there is a lot of valid criticism that can be and has been directed at Marxism, but this is as shallow and uninteresting a dismissal as they get. A lot of what you bemoan stems from (sometimes valid, sometimes not) critiques of the economic determinism of orthodox Marxism: for instance, the promotion of class and class struggle as the motors of historical change, the reduction of gender relations to relations of production versus reproduction, and so on. Again, there is a lot of valid criticism that can be and has been directed at those modes of thinking, but just waving it all away like that is a bit disappointing, especially since I think that what we both feel is lacking - in your words, a "challenging [of] the dominant Silicon Valley/Big Tech narrative proclaiming SkyNet futures or techno-utopias" - could profit greatly by borrowing concepts from the vast body of thinking that developed out of, in conflict with, and in relation to Marxist theory.

Author

Hi Christian,

Is the beginning of your excellent comment hyperbole, or could you really not find a more shallow and uninteresting dismissal? I'm sure I could dig one up....

Okay, I'm not one to social media-type argue, as it were, but I'm a little unsure how Marxist theory -- whether out of it or in conflict with it -- should be a focal point in talking about AI. We might note, by the way, that one of the "famous" and early critics of AI, Jaron Lanier, as early as 2010 labeled the web (just before "Big Tech" became apparent, but the latter sort of co-opted the language anyway) "Digital Maoism."

"Lanier used the term "Digital Maoism" to criticize the collective intelligence or "hive mind" approach promoted by some web philosophies and platforms, especially in the early days of Web 2.0. He expressed concerns about the idea that the collective output of large groups of anonymous users (such as crowd-sourced information, wikis, and other forms of aggregate content creation) is inherently superior to the contributions of individual experts or creators."

To get started on a Marxist-centered approach, there might be some traction there, though keep in mind Lanier meant it entirely pejoratively. I suspect you'd have a different and perhaps more layered view given your comment.

We share the concern, as you correctly point out, that much of the Silicon Valley narrative is wrongheaded. It'd be nice to use "AI" tech where it augments human intelligence and so is a net positive, without signing on to, say, Bostrom's superintelligence worries or the typical panic that "no one will have jobs, what will we do?" Declaring that tech will ruin the job market is an old saw, and as the decades--centuries--go on, it seems it's not the strongest card to play if you're worried about tech overreach. The US is creating jobs. I'm more worried about jobs in underserved nations, not high-tech ones like ours. My concerns are also focused on deskilling and dehumanizing consequences from tech and in particular AI, and (of course) also the imposition of shallow worldviews as somehow "deep" because they're tech-centered. Now THAT is silly.

Thanks for the comment. I'd love to hear more Marxist-focused critique/discussion directed at the techno-optimists, utopias, and doomsayers.

Best,

Erik


I agree that the debate whether AI will create or destroy more jobs is not very interesting by itself. I've read some papers on the topic, and found most of them highly speculative and sloppy in their methodology - "exposure" is just not a great metric in my opinion. But I find it very interesting to talk about how the current paradigm will shape labor. You mentioned deskilling and dehumanization, and those are my main concerns too. It is also much less speculative, as we can already see that happening and can draw from industrial history to get an idea how this came to be. To tie it back to Marxism, I think the concepts of "formal" versus "real subsumption" of labor (I am not a native English speaker and freely translated the German terms) can be useful here.

"Formal subsumtion" means that labor is formaly integrated in the production of capital, but capital has very little control over it. An example is the english and german cottage industry of weavers. Contractors acted as holders of capital, buying raw materials and commissioning products, which they then sold on european markets for a profit and turned into capital again by buying more raw materials and commissioning more products. This created some problems though (from the perspective of the contractors): weavers had a monopoly on production knowledge, and they would only work until they had enough to live of it. So when markets expanded contractors could try to pay less for the products to expand production, but only so much or they risked unrest in the communities they were a part of. And since weaving was a skilled job and weavers controlled access to it they could not expand the labor pool. Weavers then we're basicaly autonomous producers like Adam Smith had imagined them.

"Real subsumtion" on the other hand means the process by which autonomous producers are transformed into dependent workers by way of organizational and technological Innovation (and sometimes simple violence). Division of labor is one, as described by Smith and Charles Babbage (both highly influental for Marx, btw). Another is the use of machinery that allows the substitution of skilled labor by unskilled labor, such as, to stick with the weavers, the Waterframe and the Power Loom.

Harry Braverman uses this conceptual framework to analyze the impact of Taylorism. The core tenet of Taylorism is the extraction of production knowledge from the workers by means of precisely measured movement studies, and its transfer to - mostly, at the time - engineers, who could then derive from it the "one best way" to do the work. In practice that meant breaking complex tasks down into individual movements that could then be split up among several workers. Skilled work would thus be transformed into unskilled, repetitive tasks, and a lot of those could then be substituted by machines, thereby attempting to remove workers' subjectivity from the production process as much as possible.

Braverman's approach has a lot of flaws, but I think the core principles - extraction of knowledge from labor, its embodiment in machines that then allow increased control over workers through deskilling and integration into a more tightly coupled production process - could be applied to the current AI paradigm when updated with care. This is, interestingly enough, already a big part of how the current models are created - "Ghost Work" (Gray/Suri) like data labeling, for instance, is unskilled, highly repetitive work.

One thing that is different, though, is that Ghost Work is not as tightly controlled as Taylorized industrial work, and it does not need to be. First, since the individual tasks are so simple and you only need a computer with Internet access, there is no shortage of labor supply. Second, since Ghost Workers are not employees but "independent contractors," the platforms can offload all the risk to them, including withholding work or payment when speed or quality are lacking. Third, since individual tasks pay almost nothing, it is no problem to have a task done by multiple people and eliminate outliers. And finally, Ghost Workers compete for tasks, which are often on a time limit, so they are incentivized to intensify their work, at least until their earning goals are met. So this shares some elements with the formal subsumption of the weavers' cottage industry, and some with the real subsumption of Taylorized production, and it might result in a technology that will (if successful) create even more of that type of labor.

Sorry for the long, rambling post. I hope you find something of interest in it, though. Happy to hear your thoughts.

Best,

Christian


Hi Erik,

Thank you very much for your reply. I have to admit that there was a bit of hyperbole involved in my opening statement, and yes, I can also easily think of more shallow dismissals - there is no lack of examples online, as you know. I normally do not engage with those, as I think they mostly reflect an unwillingness to engage with what is being criticized, and there is not a lot to gain from entering a conversation. I've been following you here and have also started to read your book (I'm still at the beginning, though), and decided to start a conversation because I had the impression you are not that kind of person.

I am not familiar with Lanier, so I cannot say anything of substance about how solid "Digital Maoism" is as a critique. My gut feeling is that the link is made through a superficial application of the concept of collectivism to both Wikipedia and Maoist China? The latter collectivized the farms, yes, but it was also a heavily centralized command economy with a dysfunctional incentive structure that led to millions dying from starvation alone, and many more through state violence. I don't see how that is comparable to what the Wikipedia community does. But maybe I'm completely misjudging here. I will put Lanier on my reading list and find out.

Feb 3 · Liked by Erik J Larson

Thanks for taking the time to respond to my comment, Erik. Always good to encounter more of your mind.

Re your 3, especially this: "[. . .] but note how the implication is rather risible: they're tired of talking about the big questions which is the point of their discipline. [. . .]":

About this, I share David's sentiment, in some measure. I'm inclined to believe that philosophy-as-currently-instituted-in-the-academy is a deformation of philosophy-the-discipline. If I start explaining myself, I won't be able to stop. Let me just say: I think the former arrangement makes it difficult to conduct the sort of explorations you're calling for. Which is unfortunate, since the big questions originate in the very real human need to feel at home in the world.

Feb 1 · Liked by Erik J Larson

Public services aren't free. We pay for them. We are given bad, harmful, or inadequate service. It is self-evident that automating this will only make things worse. So we need to stop this.

Feb 1 · Liked by Erik J Larson

Thank you for this article. OK, so what to do? 😊 The battle of narrative was lost. The first stage of institutionalising the AI religion has started. AI is being embedded into public sector decision making. On Wednesday I went to retrieve a record/report of my son's Sunday evening accident from the paediatric A&E ward funded by my taxes and other taxpayers. I called in to check how to get the report. They said I have to come in. I drove, paid for parking, and happily asked for Axel's report at the front reception. What ensued could be summed up as a waste of taxpayers' resources, an abuse of civilisation's great governance achievement - the legal system - and a grotesque, Kafkaesque scene. I am proud there were no casualties … suffice to say, I was given no report. Is this what I am paying for? Is this what people are paying for? No. And this is just a report. The automation of decision making (what AI really is) is badly executed, counterproductive, and expensive. Question zero - should we be automating this decision making, or this section of decision making, at all - is never asked. Process re-engineering is never done - too expensive, too time intensive, and impossible because the promised AI benefits were oversold. The deal was done. We are just executing a bad deal badly. How to stop this?

Author · Feb 3 · edited Feb 3

Hi Jana,

So they told you to come in for the report, then when you arrived and requested it, they couldn't produce it? This IS bureaucracy! Such is the way of things, to be sure. There's a connection between computation and bureaucracy which--once you think about it--is pretty obvious: bureaucracy is about procedure and forms and records. Computers keep the records, perform the calculations whose results go into more records, to produce and print or PDF more forms, ad infinitum. The world can only be so bureaucratic when someone has to write everything down, and anyway there's no point to keeping records unless someone can get into trouble with the authorities. A few hundred years ago--better, a few thousand--the power of authority reached pretty much as far as the king could see and whatever his errand boys and scouts could report back. They'd be on foot or on horseback, with incomplete maps or none at all for many areas, and were just as likely as anyone else to be fallen upon by starvation, malaria, or thieves. Why keep records when there's nothing to enforce? Or at any rate, the record keeping would be more rational and less sweeping and Kafkaesque, as you aptly put it.

I would love to do more on this topic of bureaucracy, because as with Big Tech, bureaucracy seems to have been accepted without much of a fight. We don't write books about it anymore (compare to the 1960s and '70s, when books like The Peter Principle, Bureaucrats: How to Annoy Them, and the highfalutin A General Theory of Bureaucracy were popular and taken seriously as addressing a social ill). Today, we've a much bigger dose of that social ill, and much less resistance to it, if any. Ours is an age of conformity, if not cowardice (pretty harsh, I know). The problem is: the world seems so finished, as it were, so complete and unapproachable, so unmodifiable and unreformable, that the mood of defeatism and conformism is almost rational. To paraphrase that wonderful chain-smoking singer John Mellencamp: "I fight bureaucracy, bureaucracy always wins."

Thanks for your comment, Jana.

Erik


I agree with you. It is a bigger dose, and also, because of the state of the economy, people use technology as another form of escape. Just like alcohol or any other addictive and harmful substance or activity.


Yes, Erik. I called in - a painful call: first the automated voice message of Chelsea and Westminster hospital, then the switchboard, then A&E, and then paediatric A&E. The lady at paediatric A&E (foolishly I didn't take her name) told me that they do give out reports, but only if I come in person; they can't email it to me. So I asked naively, do I need a passport or any other ID document so you can identify me, potentially even a birth certificate (my son has a different surname)? She happily said no. So I made my way to the hospital. Paediatric A&E was surprisingly half empty just after lunch … Now, I think my "misfortune" was that a tall doctor sitting at the reception desk became overactive. Upon my stating my business, he triumphantly declared, "No, this can't be done. You have to email 'PAWS'; they are the only ones who can get you the report." He handed me a leaflet on acute burns (my son had a head injury, as he fell by the pool) and circled the email address for the feedback service. 😳😳😳 At this point, I am still hopeful. "But I called, and was told to come in." Now he gets combative and says that it is a legal requirement. OK… so I ask, what legal requirement? He remains vague on this point until the bitter end ("just a legal requirement"). He promptly prints my son's report and reads it. Then one of the nurses or doctors, a woman, comes to the reception, and as I lose my cool and he keeps repeating "legal requirement," she nonchalantly says, "Yes, we can give out shortened reports." (I guess the woman I spoke with on the phone.) Now he springs into action and runs to the medical records office - leaves paediatric A&E and disappears for a few minutes. No one else wants to get involved. He comes back with another leaflet - this time the "medical records department" contact details - they have a legal obligation to provide records within 30 days.

I needed it for a further specialist referral. Now… he has the report in his hand, doesn't want to give it to me, and apologises for the confusion. I leave. I think - this man - whatever his problem - at one point, throughout all this, he walks over to my side of the reception desk, leans against it, stands too close to me, and starts staring into my face as I am talking to his colleagues; I just moved away two steps to the right … I am not sure this is bureaucracy … I think I bring out the best in people and their faith in legal requirements. 😂😊 Is this why we need paediatric doctors on duty? I think what started as a bureaucracy problem has now added a people problem, too. Why was he so passionate about not giving me the report? Shouldn't he rather be attending to the patients in the waiting room? I think this was more than just a nonsensical walk down the Alice in Wonderland logical tree of bureaucracy… this is a reflection of how some people have become fanatical. No one bothered to verify my identity. All I needed was my son's name and date of birth… which I dictated to them.


They say a fanatic always harbours a hidden fear… fanatics with hidden fears, in positions of power over others across all layers of the public services, together with a bureaucracy problem - a total disaster. I think we should not discount the importance of those who defend bureaucracy. Bureaucracy is not a living object … it is only made alive by people, or by machines controlled by people. It is the people.


I left thinking… poor patients being treated by that doctor! I would say there was clearly something badly wrong with him!


I would regard technology, bureaucracy, finance, and military force as intertwined aspects of the contemporary that simultaneously seem inescapable, sometimes beneficial, yet often deeply inhumane. So the broader question (you want big questions for philosophers, Erik!) is: how do we humanize the contemporary? Or at least make our peace with our worlds? Most of my books are efforts (extended essays?) at these problems. With regard to bureaucracy in particular, you might like: Mark Maguire (an Irish anthropologist) and Westbrook, Getting Through Security: Counterterrorism, Bureaucracy, and a Sense of the Modern.

https://www.davidawestbrook.com/getting-through-security.html

Even if you don't like the philosophy, the stories are fantastic. We reacquired the audio rights, and I'm currently trying to turn the text into an inexpensive audiobook, in an effort to get outside the academy. Hope this helps.

Author

Hi David,

Refreshing that you and your co-author are addressing bureaucracy, the ignored thorn in everyone's side. Looks really interesting and I'd encourage readers to have a look: https://www.davidawestbrook.com/getting-through-security.html


Erik, very nicely done. You touch on a lot. I hope you continue to develop this line of reasoning. This is very good, and has long puzzled me: "Unlike discussion about, say, nuclear energy or weapons, no one at the table—academics mostly, many from my alma mater The University of Texas at Austin—came even close to suggesting we shouldn't use LLMs, or that they might represent some sort of event horizon where AI becomes a permanent ubiquitous feature of society. The acceptance of AI is at an all-time high." One would have to add the transformation of entire economies (the US, Germany) in order to address climate change. But when it comes to AI, we are asked to adapt, conform, submit to what will be done to us. Why? I suspect, in part, because "humanism" isn't a very good counterpart to "technology." Humans have always had technologies. But we don't really have good ways of thinking (I mean something broader than "theory") about technology as such. We commonly say a few silly things about "tools" and maybe "Luddites," but it's not very deep, not very round. Progress is assumed but not thought. This is a lot of what Matt Crawford is getting at, in my reading.

When a new technology emerges - gene splicing, social media, LLMs - we tend to speculate about what this technology might do to the status quo, and then run an imaginary cost/benefit analysis, utilitarianism in dreamland. It never works, and the technology is imposed. The world changes. (There is something deeply authoritarian here.) Those who stand to make money hype the social benefits of their enterprises. Technocapitalism requires transformation; transformation is "good" for systemic reasons. Tautology. Meanwhile, as Ted Gioia noted the other day on his Substack, the Tech Lords all have doomsday plans: they are building a future they don't like.

But in light of substantive uncertainty and political division, it is difficult to imagine a society sufficiently cohesive to decide on what a humane technological base would look like.

Author · Feb 3 · edited Feb 3

This is great, David. In my response to Jeffrey above, I was looking for a term to describe what a "counterculture" thinks it's "countering." It's invariably what you've called technocapitalism. Technocapitalism is the panacea that is always also the problem. A bit trite, but: technology makes money, and money makes technology. When this gets institutionalized, as in our financial system today and certainly in Big Tech, society simply MUST find a countercultural voice as ballast, or everyone's a jock in high school and the nerds and the stoners get stuffed in lockers. (Four years of this?!!! How about indefinitely.)

I think the 1960s and, again, Stewart Brand (he helped start Wired magazine and was an early visionary of what we now call the World Wide Web, and much else) are instructive. Brand was happy to co-opt the futuristic tech coming out of the academic-industrial-military complex. His Whole Earth Catalog is still delightful: leather moccasins and organic farming on one page, walkie-talkies and electronic synthesizers on the other. The Catalog announced that tech ought to empower, not centralize and surveil and manipulate. And it showed how. It laid out a new vision of the future.

How this looks today, I don't know. There's something like a complete capture today by technocapitalism (this term is better than "technoscience," since science--true science--is increasingly a casualty of technology and finance). No one really goes without a smartphone in these enlightened times, much less a computer and Google and social media, and now LLMs. There's a white-flag thing going on here, a surrender to a way of life that is altogether easier and consumerist and in many ways better. I used to write on pad and paper; now I don't. It's better to write on a laptop. I think better on a laptop these days (though I still take handwritten notes). I spend a lot of time in my house in Texas by myself, working or reading or watching Netflix. How convenient that I have a smartphone--with free long distance! (Remember those days?) So what would the Laniers and the Carrs and the Keens offer us today? What would we really give up? There's a sense in which the world we inhabit is the world we chose, though I'm acutely aware of having no viable options. At times I commiserate with an old friend up in Seattle about getting some acres in Montana or Alaska and solar-powering everything and living off the land. It's childish, really, but not impossible, I suppose. Maybe I'll really do it someday, but not before I ensure there's cell coverage and fast internet. Can't have a counterculture statement without that.

Really, there's no off-the-grid unless you're truly radical about it (in which case you just disappear--hence "off the grid"). "Counterculture" books questioning our bureaucratic corporate cultural monolith, our modern Leviathan, get read in Starbucks on a lazy Sunday and disappear from sight and mind by Monday morning, when there's real work to do. We sort of "lost," in other words. Jeffrey's right: critical discourse does feel a bit like the grouch. All I can say here is make the critical discourse have more bite and less whine, I suppose, for if we abandon it altogether, what resources do we draw on to effect real change? Hence the philosophers talking about bias in LLMs. (Not to pick on philosophers: I have an MA in Philosophy, back when it meant something, and my Ph.D. was a hybrid that included philosophy as well as computer science and linguistics.)

Thanks, David.


Stewart Brand! I grew up with the Whole Earth Catalog! So yeah, that ethos was different, wasn't it? And maybe that's an example with which one might start thinking.

Feb 3 · Liked by Erik J Larson

"[. . .] not very deep, not very round. [. . .]"

Love that.


As I wrote to you previously, there isn't a competing research agenda that has something new and vital to say about the world. What kind of results would come from a bold humanistic intellectual revival? What would be the focus of study? What new methods would be deployed? What new technology or cultural development could we potentially look forward to? If the research doesn't have a business model consuming billions in capital every year, how would it capture popular attention? What is coming out of humanities departments these days, as far as I can see, doesn't meet this challenge.

"Critique discourse" only gets you so far, and then it just makes you seem grouchy and disagreeable -- even where the critique lands. And it's a shame, because there's so much to discover.

Author · Feb 3 · edited Feb 3

Hi Jeffrey,

Of course I agree, and I understand you've articulated this before, and quite well. What I'm pointing out in this piece isn't particularly practicable; it's something that sort of happens in cultures from time to time, typically as a corrective to "progress," and it's more likely when there's a rich intellectual tradition and a respect for ideas. This jumps to mind: Jane Jacobs in "The Death and Life of Great American Cities" vs. former head of Google Eric Schmidt's vision of "smart cities." One is human-centered; the other optimizes something, so to speak, "above" humans: the "whole," a utility function, profit.

I've been reading Fiona MacCarthy's mostly enjoyable biography of Byron, and to prepare for writing about AI, a few years ago I took up Richard Holmes's The Pursuit, his bio of Percy Shelley, and some of his bio of Coleridge. Here's a "counterculture," or a recognizable movement not aimed at governance and profit and "progress."

What we now refer to as "the Romantic tradition" was really an amalgam of voices, and we can still say, if a bit simplistically, that it was reacting to the dual juggernauts of that era: the Industrial Revolution (just barely underway--but note that Byron actually gave a somewhat forgettable speech in the House of Lords on the Luddite movement. What clearer signpost of an "Industrial" problem already?) and, of course, the shift from Aristotelian to Newtonian physics, and more generally the move from natural philosophy to what we now call empirical science.

The "Romantics" were reacting to the quickening of the world, the arrival of new powers and new ways of organizing power, and what they saw as a creeping ugliness. Britain and Western Europe at this time were turning toward what Hannah Arendt would later call "homo faber," or "man the maker," and re-orienting the world from life as it's lived to life as it might be, a stance we now take as entirely normal (mindfulness apps notwithstanding). The ragtag poets and radicals we now call Romantics were immersed in the natural present, in nature, in timeless issues, in questioning authority and the received view of technoscientific progress, and for some like Shelley in endless obsessions with radicalizing norms to expand human possibilities. Government and scientists and industry dismissed or threatened to imprison "romantics," which is why we see them now as a counterculture, or at least as a recognizable intellectual movement, with philosophical precursors in German thinkers like Herder and Hamann and that Italian professor of rhetoric, Vico. Should we measure this, their impact?

We're still reading them, and our view of nature, when we're not bent on turning it into a--what did Heidegger call it?--a standing reserve, has been passed along into modernity from romantics like Wordsworth, Coleridge, and others. THESE movements do in fact happen, whether there's a quantifiable result or not. We might argue that atheistic existentialism or the French existentialists were another counterculture to begin with, though they became mainstream in philosophy departments until they petered out in the late 20th century. I've never been a Sartrean, though my understanding of the great wars and the ensuing Cold War culture has been greatly influenced by reading this reactive genre.

I think it's healthy for some percentage of "the creative class" (ugh) to constantly be challenging the standing reserve folks, if not for outcomes like lifespan or curing disease or fuel efficiency, then for the more elusive and probably more important and at any rate implacable question of human happiness and worth. Hence when Carr wrote his "critique du discours" and ended it with poetry (in The Glass Cage), I doubt he was much worried about starting a research program (and he must have even known that it would affect his sales--aren't there more people into AI than poetry?).

What I'm trying to say here is that a healthy culture should produce Byrons and Shelleys and Coleridges and even Beatniks and even (swallow hard) some hippies--especially if they make cultural contributions in books or art or music or what have you. Certainly, we could do with the equivalent of a Stewart Brand today and some new Whole Earth Catalog with CB radios instead of smartphones that collect our data and track our movements. How many of today's technologies actually make you more independent? This felt sense that we're getting screwed by our plunge into modernity is what gives rise to the counterculture. A pity--I say again--that ours seems to have evaporated in so many tweets and disparate musings. How un-American of me to suggest that perhaps our intellectuals could do with a larger dose of courage, and even whispers of revolutionary change, rather than fussing about bias in LLMs.

Hey, thanks, Jeffrey. I know you agree with a lot of this, and I sort of used your comment as a sounding board!

All my best,

Erik


It’s interesting that you gravitate towards the Romantics. They may have been radical, but they were not exactly “rag-tag.” Byron, after all, was a lord, and possibly the biggest celebrity of his generation! One has to be a member of the elite, either by birth or by circumstance, to genuinely rebel against elite mores. They produced some great poetry, but the famous poets of that era were far more immersed in the arc of European history than we are (or should be). I take my cues from the American modernists – Stevens, Williams, Cummings, Bishop, Frost, Eliot, etc. For the modernist, Old World history has been blasted apart, and the artist is tasked with finding a way forward in the New World with his aesthetic judgement and by repurposing whatever materials he finds lying around. If we’re just talking about poetry, this program was left unfinished; most poetry written by anyone born after WWI is minor and confessional or just plain bad (and confessional), and, in my view, that’s because there should have been a turn towards epic in the following generations, and no one with sufficient talent took up the flag. Rap has been the most innovative genre in the last 50 years or so, and, as a form, it has real expressive limitations. (By the way, I’ve posted a few of my poems on Substack – if you’re interested in poetry, this might interest you too.)

I don’t think anyone has an idea what to react against in the form of a counterculture because we’re not fully conscious of the paradigm we’re living in. We have long passed the point where “modernity” can be contested. Instead, since WWII, all of the sciences have oriented themselves around the idea that “information” is at the heart of what structures the universe. Where this is explicit, the fact that this is a new, radical, post-modern concept of the world isn’t much interrogated, and where it is implicit, everyone acts like the idea is self-evident. The problem with this concept of “information” is that it is hollow at its core. A cipher is nonsense unless it can be mapped onto the world, and the process of “mapping” isn’t addressed substantively by “information” alone; moreover, there’s scant evidence that representations in general are something other than projections of our own cognition. Artificial intelligence is the true apotheosis of this idea: I love your book because it blows a hole in the center of the construct’s appeal. All that seems to be left over from the hegemony of “information” is politics, and we can all see what that has done to us.

The next intellectual revolution will be a spiritual revolution. But it MUST have technical consequences and institutional ramifications, if it’s going to mean anything and help guide us. I’ve become fascinated with the electrical grid, not just because of my profession, but because it’s a synecdoche for the big conundrum facing our civilization. The grid is a physical system overlaid by an information system, but it is the qualities of the physical system, as they manifest in our everyday lives, that really matter. Specifically, we want reliable, accessible, high-quality power at a low cost that generates minimal negative externalities. Producing better qualities in the grid is a massive, instantaneous, never-ending coordination problem married to a long-term, partly aesthetic investment problem, and this is exactly what we’re up against in general if a global industrial civilization is going to sustain itself.

I’m not sure if we’re quite on the same page about countercultures. I’m less concerned about what is mainstream and what isn’t, than I am about the qualities of the world that I encounter; and I recognize that this is largely a matter of taste. Here’s a good example: the obsession with Taylor Swift, who is not a great musical talent, is annoying, but it’s a symptom, not a cause of contemporary culture’s shallowness. So, sure, let’s be more vocal about interesting subcultures and artists who are obscure and vastly more talented, and be confident in cultivating a taste for material that is spiritually broader, deeper, richer. But this can happen in any case, irrespective of Taylor Swift and the mediocre taste of the people who enjoy her music.

Author

You're a wealth of knowledge, Jeffrey; I'd love it if you'd consider guest posting sometime. Thanks for your insight here: "I don’t think anyone has an idea what to react against in the form of a counterculture because we’re not fully conscious of the paradigm we’re living in." I completely agree. We're not hitting "paydirt" somehow, and of course every age has similar gripes, but it seems reasonable to call the information technology revolution--the "liberation" of bits rather than atoms--something of a sea change that we haven't fully come to grips with as a society, as public intellectuals, as anyone and no one in particular. I'm old enough to FEEL the sense of displacement. I'm sure Model Ts had this effect in the early 20th century, and electricity and steel in the 19th, but there's something particularly disorienting about "AI" and the "web," as it's in some sense not even real. It's digits. Computers are, after all, quite literally binary addition machines. And all THIS happened from binary addition?!!!

I'll check out your poetry, though I confess I'm terrible at telling what's good from what isn't. I read Shelley or Byron and think "huh?" and then read some sentimental hack and applaud. Perhaps your stuff can open a window.


I haven't posted a lot of poems, but before you read these, I'd recommend you read a few other things I've written. They might give you some tools to better understand why I think poetry is important and why I've been interested in writing poetry over the years.

I posted on Substack a short piece excerpted from a longer thing I wrote that mingles poetry and similar short-form writing. https://jeffreyquackenbush.substack.com/p/excerpt-from-mt-jasper

It's not really an essay -- in form it's more like something Williams might have written if he had had interests more like mine, or a flight of miniaturized disquisitive irony such as Nietzsche pioneered. I tried to nail down why I think AI is mostly hype. The irony takes a harder edge if you know anything about information theory.

Last spring I wrote a piece that lays out, in a condensed form, my view of what poetry is and how it fundamentally works:

https://www.arcdigital.media/p/attack-of-the-emotional-soccerball

It's a more technical underpinning of the first piece (although I don't make the connection explicit). As a student of Peirce, you would probably find my theory of the symbol the most interesting thing in this essay. I hypothesize that a "symbol" is a kind of complex sign where a known form and meaning is substituted for an unknown form and meaning, with the result that the known and unknown are blended cognitively. Poetry is an aestheticized use of linguistic symbols that can potentially exploit the full range of linguistic forms (from rhythm to paralanguage to grammar to semantics).

This expands the analysis I did the previous year on tropes, which explores a similar concept for tropes through an exploration of The Big Lebowski:

https://www.arcdigital.media/p/where-were-you-when-bobby-kennedy


If I wanted to become the next LeCun, Hinton, etc. today, I would go into non-digital technology as an AI paradigm (one that actually has a chance of growing into AGI), not current Generative AI. I know of a few researchers who are looking at that, at very early stages (a bit like the early deep learning pioneers who birthed the current GenAI wave).

One of the problems of there not being an effective counterculture, I guess, is that we ignore the 'technical' side of human intelligence when talking about the artificial kind, both as a form of intelligence to study and as a (limited) intelligence when discussing the intermittent AI hypes. The purely humanistic critiques are vulnerable to (current) AGI proponents ("we can do that with current (neural net) technology when we scale, you just wait"). The question is not *whether* the technique of the day can become AGI, but *why* it cannot.

I happen to think, by the way, that consciousness is not a real problem at all. It seems pretty clear to me that it is a brain function with its own set of neurons, its own behavioural role, and a particular evolutionary advantage (there are signs of this, and in a human-intelligence way I have, physically, with my neurons, formed a 'conviction' about it, which then steers my observations again). It also seems clear that it is not an on/off thing, but something gradual across different species, with variation in the ratio between conscious and non-conscious neurons. It is just that a 'detectable' consciousness requires computing power many orders of magnitude larger than what we can do with digital technology.

Author · Feb 3 · edited Feb 3

I completely agree with your point here, Gerben, that we should study the "technical" side of human intelligence rather than assume it's a weaker version of computational intelligence. The 21st century's contribution to the study of human intelligence is mostly a list of cognitive biases and books like Noise: A Flaw in Human Judgment by Kahneman, Sibony, and Sunstein. It's as if the culture is telling us that we're computers made of meat and the silicon variety is obviously better. Pointing to differences between human thinking and calculation would get us a long way into my whimsical "counterculture." There's a wonderful book by a German, Gerd Gigerenzer, Gut Feelings: The Intelligence of the Unconscious. Gigerenzer has done a wonderful job of explaining how MORE data is often unhelpful, and how what someone like Kahneman would call a case of "cognitive bias" is often just a response to a poor or prejudicial way of phrasing a question. We're not actually bad at logical reasoning, though we are bad at math, which is why we have calculators. But should we be using calculators for what we're good at? Should we be using them as a replacement? I think this is one promising path forward: to encourage and undertake such research.


There are long-standing gaps in theories of cognition, and none of the current research about the workings of the brain really resolves them -- at least none that I'm aware of. Crucially, representations correspond, in part, *in form* to the *form* of the things they represent. In the visual arts, the representational correspondence of visual forms is obvious. A painting of the Eiffel Tower *looks like* the Eiffel Tower as you would see it standing in front of it, even if the correspondence is abstracted or contorted. With language, correspondence depends on the parsing of time and action at different levels of linguistic organization, from rhythm to high-level discourse structures. Animal cognitive capabilities would suggest that neural activity is, in fact, far better and deeper at generating such formal correspondences than these more expressive systems devised by humans: animal subjectivity appears to generate its own complete virtual world. But what corresponds to what? AI researchers and neuroscientists alike address this with hand-waving, not with a theory about how a mechanism of formal correspondence could work and what that would mean more generally. The "form" part of "information" has been entirely neglected in information science and everything this intellectual movement has influenced.
