21 Comments
author

Colligo has the world's most thoughtful and interesting readers, no doubt. I had to blow through this post as I'm writing a proposal for a second book, and on some occasions I don't have all the notes and research and that kind of internal intensity about a post. Fortunately, I have Eric et al! Thanks for adding immeasurably to what I said. I'd like to return to this issue of utopia. All best, Erik

Apr 21 · Liked by Erik J Larson

These posts that you "blow through" without your typical thoroughness are blessings, because they give us lurkers and commenters something to add!

Apr 20 · edited Apr 21 · Liked by Erik J Larson

You make a good point when you observe that we will likely always chase positional goods, and that a "post-work" tech utopia will just contain redefined positional goods.

Some thoughts about the possible political-economic circumstances of a "post-work" tech utopia:

(1) Tech needn't have become absorbed into capitalistic structures, but it has. (Failing to foresee this might have misled Marx.) And this absorption won't likely be reversed.

(2) As it metastasizes, tech necessarily exudes a massive, material matrix around it. That matrix nourishes and sustains tech. It also therefore nourishes and sustains the illusion, promulgated by tech, that tech is immaterial. There will always be a need for human beings to tend to the material framework.

(3) Given 1 and 2, the work done to maintain the material framework will most likely be work that, by its very place in our political-economic arrangement, contributes to the production of goods valued by the market (be that production via financialization, via entrepreneurialization, etc.). That is, the work will (also) be labor.

(4) Given 3, it's worth wondering whether the same old stratifications and inequalities will reassert themselves. There will be the Haves, the Front Row, playing the market or exchanging AI art out on the veranda as hovering robots buzz around them emitting a salubrious vapor of vitamins and microdoses of psychotropics. And there will be the Have-nots, the Back Row, performing labor that is accorded no dignity by the overclasses and yet is necessary for the overclasses to do the things they tell themselves make them more important.


Eric, nice points. Not sure I buy the idea that tech would avoid capture by capitalism, though. If we think about tech as the material realization of an idea ("how could I do X?") and capital as the capacity to make time and other resources available to develop something that does not yet exist . . . Of course, the other real driver of tech is military ambition. How do we win/survive? Some huge percentage of CS research in the US comes through the military. I often tell my CS friends they are commodities, you know, like labor. They think they are solving problems. Which I suppose is Bostrom's biggest mistake.


I'm not sure I buy it either! There's conceptual possibility and material possibility, and I was distinguishing between a post-internet tech that's conceptually possible and one that's materially possible, given the political economy such tech was actually born in.

Beyond that, I should have been more direct and more specific: I think it should be acknowledged that critiques of tech, in its post-internet incarnation, are in fact critiques of a phenomenon formed by certain logics of capitalistic extraction, especially those of data extraction. To abstract too much from how these logics constitute the phenomenon is to miss the phenomenon.

Speaking of which, I appreciate your point about military interests having made tech the way it is. I suspect it would be worthwhile to examine how the extractive and military logics come together to give tech the shape it has and will have. And none of this is yet to mention what Eisenhower at first planned to call out in his farewell address but, in the version he delivered, declined to name: the "military-industrial-*congressional* complex."


Yes, yes. But I think that's what I was trying to talk about in Boeing, Complexity. Engineers mistake "the problem" as it appears to engineers for something like a math problem, floating in space. It never was that. They are hired to solve problems that people with capital of some sort want solved. So the distinction between conceptual and material possibility is kind of the distinction between angels and people working in a social/economic space. So this, as I said to my CS buddy, is the "delusion" that engineering, as an enterprise, seems to require.

author
Apr 25 · edited Apr 25

I have one eye open right now (as it were), so I'll say this quickly, but almost everything interesting about technology passes through warfare. Start with the computer. The Internet. And with a broader lens, computation seems to be a "bureaucratic technology" in the sense that it's like a super-fast abacus keeping track of a nuclear blast radius, or now.... Is it any wonder that we have autonomous vehicle navigation resulting in killer drones and deep learning networks resulting in 24-hour surveillance? Generative AI makes deepfakes. We always think it's a magic coincidence that our tech is form-fit to warfare, but of course it's not. If we don't see the basic human impulses behind tech--and I wonder whether the entire data-driven approach to AI can be explained this way--we miss what top-down hierarchies and leaders with deep pockets are thinking and doing. Full disclosure: I've founded or co-founded two companies funded by DARPA, of course part of the DoD! Final thought: I don't have, shall we say, an idea about the "intrinsic" connection between defense/warfare and tech worked out in some anthropological treatise or formula. Just as a practical matter, warfare drives tech and vice versa, from gunpowder on. The computer is most certainly not an exception. One of these days I'll connect all these dots all the way back to the idea of future tech utopias and other fantasies. Reading these thoughts and comments really helps, thank you.

Apr 25 · edited Apr 25

I take your point: I shouldn't have brought merely conceptual possibility into the discussion. I thought it was a way to illuminate the truth (as I see it) that, materially, we have never experienced "pure" tech, but I was mistaken.

Most importantly, though, this conversation has primed my mind to discern new saliences, so I will definitely go back and re-read your piece!

Apr 21 · Liked by Erik J Larson

In "Violent Land: Single Men and Social Disorder From the Frontier to the Inner City," David T. Courtwright highlights the impact that social marginality has as it "reinforces single men's sense of superfluity and contributes to the risk-taking and psychology of expendability found in bachelor groups." Imagine millions of dispossessed young men with no place in the world, yet biologically primed for deeds that will be unwelcome in Bostrom's utopia. Paul Virilio famously said: "When you invent the ship, you invent the shipwreck." The wreck will be more failed "male social control" and the inevitable challenge to the ordinary social restraints on aggression, violence and disorderly behavior. Marx spent too much time in the British Museum's Round Reading Room — there aren't enough fish and game in the world to scratch that itch. We hunt because we want to do hard things with unpredictable outcomes, not because we have idle time. His utopia is a prison, not a castle.

author

Corey, I second Eric! Love to keep this going and perhaps you'd expand your thoughts on utopia as a guest post someday.


Interesting idea about single men and bachelor groups 😂😇 and their nihilistic tendencies. Let's just marry them off and get on with things. Jokes aside, I think these segregation tendencies also point to a solution … that societally good outcomes stem from balanced lives and meaningful connections between women and men. Very few of those around. Women do represent circa half of the population, and these lonely men simply descend into writing nonsense … so much nonsense. And indeed, there is such a thing as spending too much time in the British Museum … knowing the measure of things … Delphic maxims. We don't need to reinvent the wheel or get lost in the labyrinth of sophisticated and complex sophistries constructed by these lost souls; all we need to do is call their bluff at the right time and then just ignore them. Focus on AI as a technology that can help us with weeding, decreasing the prevalent use of pesticides, and improving our food chain. That's where we should start with our national policies. The focus should be on the basics: feeding ourselves right, ensuring safe infrastructure to enable free movement, and ridding our general curricula of dangerous ideological nonsense. Schools should be the workshops of humanity. Used the right way, AI is a useful technology.


Point well taken (even if facetiously) about marriage. A balanced population and prosperous economy equals a sustained marriage boom. Marriage and families are a healthy form of social control, and positively affect two generations (father and son). But we're not on a trajectory towards a sustained marriage boom, are we? Fewer family ties, a failing education system, an opioid crisis, and millions of 22- to 55-year-old males disengaging from the workforce are not trivial and can't be ignored. The underside of history is populated by young, single men, and the patterns of asocial behavior are common across all societies. No purpose, no hope, and enforced idleness…what could possibly go wrong? I agree that if AI lives up to its marketing and can overcome its current disabilities (e.g., hallucinations, reward hacking, excessive resource consumption), the gains could be profound. Perhaps we could address both issues — AI could efficiently plant and weed lotuses, and we could feed them to our young men. As Homer wrote in the Odyssey: "Whoever tasted the lotus, its honey-sweet fruit, had no longer any wish to give news of himself, nor to return, but there he chose to abide with the lotus-eating men, ever feeding on the lotus and forgetting his homeward way." (Now I'm being facetious.)


AI is just a technology … no matter what Bostrom's utopian vision would like us to believe to distract us from the dire real trajectories you described. As far as I am concerned, it is nothing more than a distraction for a subset of the intellectually ambitious … the societally beneficial decisions about how to use the available technologies to change the unwanted trajectories (so we don't need to pin our hopes on Homer's sweet fruits…) rest upon us, humans … the same as always in our documented history. The responsibility lies with us. I am personally against being used as a free functional tester (user acceptance testing) for expensive iterations of immature AI technology … which perhaps isn't even at TRL 7-8 yet.


Interesting and well-put! This is, in my view, a most welcome perspective on this topic.

Apr 20 · Liked by Erik J Larson

My vote: Bostrom should have stuck with the apocalypse. If that's not avoided, none of this other stuff really matters. Bostrom's strong voice on existential threats is still needed, because we are still deep, Deep, DEEP in denial.

Apr 20 · Liked by Erik J Larson

I do think it is far more likely that AI just effectively kills us all, accelerating the process that social media and smartphones have begun.

Rat utopia led to extinction:

https://en.m.wikipedia.org/wiki/Behavioral_sink

I hope we still have the right behaviors left to resist.


So Bostrom is a simplistic fool. Agreed. There is not enough in human intelligence/society to stop certain fools from becoming professors, presidents, billionaires.

In the end it all comes back to the limits of *human* intelligence. We can be utterly convinced of the silliest things (from 'flat earth' to 'Jewish space lasers'); our intelligence is not about 'reason' at all, it is about speed and efficiency. The hard stuff we do rather painstakingly.

In the end, the key question is whether we are smart enough to accept that we are not smart and to act accordingly. It doesn't seem that way.


Hi!

I am new here. This seems very interesting. I have written a book about the times we live in. It is an attempt to do this scientifically, that is, according to the theories and methods of social science. I started with the largest possible questions and an anthropological perspective. By the latter I mean: what is Western culture, and exactly what has happened to it?

You will find considerable overlap with your thoughts, even if I come at it from a different perspective: for example, that we are at the end of technological expansion, not the beginning, or the effects of affluence, complexity and boredom.

However, the main topics are what has happened with the cultural values of individualism, equality and rationality; and a central part is about what has happened in the social sciences:

https://www.amazon.com/dp/B0CXJH4THT/

In the sequel I will be looking closer at individual topics identified in the first book.


You had me laughing out loud at the opening image.


I take the chance to peddle one of my favourite anecdotes from my professional days: my (favourite) counterpart in Mercedes engineering would say, "You can buy a crystal ball at the mall down the street." They come in various brands and types - it's a business, after all, and there are different kinds of customers - and so the utopias on offer reflect the different wishful dreams (or even nightmares) of their developers, and those that find customers gain market share.

Having said that, by definition no utopia can ever claim to be a scientific prediction based on a good model of the present and of the foreseeable influences that would allow simulating such a prediction - in the spirit of, say, the Club of Rome. Utopias of egalitarian/communitarian prosperity enabling a good life for everyone have been the popular entry-level product since Marx, Russell, etc. - and they have the benefit of distracting us from making a prediction based on current realities (especially class structure and geopolitics) that might scare us to death (more along the lines of Orwell or H.G. Wells): one where all new technology only serves a select new elite of inventors, owners and users to get a one-up over their peers, and anyone below average keeps getting poorer. But then there are well-known solutions for dispensing with the latter altogether when we no longer need them (which the whizz-kids at the WEF have ventilated about) - maybe brainwash them into not procreating, or accept "renewal" as in "Logan's Run," or make them infertile by various means.

What we should ACTUALLY be doing is come up with an acceptable vision of the future that can ALSO be reached by a feasible (THAT's the hard one) process from our current starting point - and drive that process.


Life isn't a maths problem to be solved. Life is to be lived. Anyone who suggests that we have the power to "solve" the world with the known AI … what does that even mean? Thanks, Erik, for drawing the similarities to Marx, and I would just like to reiterate that the Russians didn't choose to implement those ill-thought-out Marxist ideas voluntarily, and nor did the Czechoslovaks. Marx wasn't Russian, and Stalin wasn't Russian either. The terror with which those ideas were implemented - the torture, the deaths, the displacement of countless decent human beings - should have been a lesson to us all, but we haven't learnt.
