27 Comments
author
Jul 9 · edited Jul 9

Hi Jeffrey,

Yes, I agree with pretty much all of this. "AI" isn't really a revolution in the commercial sphere (see for instance: https://www.economist.com/finance-and-economics/2024/07/02/what-happened-to-the-artificial-intelligence-revolution). It's not clear it boosts productivity, it suffers from essentially incurable errors that limit its effectiveness, and it's not fluid in the sense that you can move it between domains with no retraining (and in some cases, as with classifying X-rays, as I think you mentioned, Gen AI doesn't even fit). So: the business case for all this data-driven "AI" isn't really there.

My point about the military case is, firstly, as you also pointed out, that it "works" more or less. It's in use. Investments are huge. And getting it perfect doesn't matter when you're trying to blow stuff up, to put it bluntly.

All agreed.

Don't forget my overarching point: once AI started "working" for military applications, it became an instant arms race that ensnared all the major nations and really the entire world. The US CAN'T quit pushing military applications of AI if it works, because China won't quit, and then China will have dominance. So it occurred to me that we've created, in effect, another nuclear arms race, and we can't get out of it. We can't all decide we want "less AI" and have Silicon Valley turn to (what?) nanotechnology, as long as AI is what's giving an edge, however small for now, on the battlefield. We're in another Cold War, and just like with nukes, there's no real way out but more and more development along the same lines. That's the main point. Thanks, Jeffrey.

Jul 9 · Liked by Erik J Larson

Erik, building on the nuclear weaponry parallel, do you think AI could get to a point where world nations just agree not to use it, so that it becomes a last resort only, just like nukes? Probably not, because its use can easily be concealed as if operated by humans, and because it can operate much more "granularly" and on a smaller scale than a nuclear weapon would?

author
Jul 9 · edited Jul 9

This is a good question, and I should have done a better job clearing it up in the post itself--a drawback of not writing for editors. David's answer is correct, but it's complicated. BUILDING AI models is extremely expensive. In this phase of development, only state actors or mega-funded corporations will make the most expensive and powerful AI. For instance, Musk raised $6bn (at a valuation of $24bn) to start xAI. OpenAI to date has raised about $11bn. So to train the most powerful (Gen) AI models, only countries like the US, China, Western European countries and so on will get in this game, along with the Big Tech players. It's complicated, however, because some of the biggest models have been deliberately open-sourced, like Meta's Llama series. You don't open-source a nuke.

Once the model has been trained, its deployment is not nearly as costly. So having access to ChatGPT, for instance, costs me $20 a month. Of course I can't modify it except in the most trivial of ways (by having it store my prior responses for a session, and so on). But unlike nukes, the diffusion of AI is not expensive and one way or the other seems inevitable. The models will get rented or developed for good reasons and get used for something bad. Et cetera. The regulations will necessarily be murkier. In short, anyone can get their hands on the new capabilities, even though it remains true that only the rich folks are actually BUILDING the biggest models.

Final point, very important: "AI" is a catch-all for advanced digital computing that produces any behavior or response we would call intelligent, or that would take intelligence for us to do. Like flying a drone. So an autonomous drone uses sensors and visual object recognition, compared against available satellite and other image data, to navigate in the absence of a First-Person View (FPV) pilot. Since that's autonomous flying, we call it "AI." But THOSE technologies are not really expensive, compared to training foundation models in the Generative AI case. Optical navigation has a long pedigree, going back at least to the Tomahawk cruise missiles used in Desert Storm in the early 1990s. Is it "AI"? It's autonomous navigation, which is what engineers of self-driving cars aspire to. So, yes. This type of AI does follow the consumer electronics curve of getting cheaper while also getting more powerful. Meaning: terrorist groups and other relatively cash-poor rogue states and actors can acquire this technology, or will be able to relatively soon. The price keeps dropping, the capabilities keep increasing. A company in Lviv, Ukraine, MindCraft.ai, makes optically navigating drones for the war effort, with a price tag of $217–$550. That's anybody. ISIS-K. The Houthis. Hezbollah.
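
For readers curious what "comparing camera imagery against satellite data" amounts to in practice, here is a minimal, illustrative Python sketch of the scene-matching idea behind optical navigation. It is not MindCraft.ai's or any vendor's actual pipeline; the function name and the brute-force normalized cross-correlation search are stand-ins chosen for clarity, and real systems use far more robust feature matching, terrain models, and sensor fusion.

```python
import numpy as np

def locate_frame(camera_patch, satellite_map):
    """Estimate where a downward-looking camera frame sits inside a
    georeferenced satellite tile via brute-force normalized cross-correlation.
    A toy stand-in for the scene-matching step in optical navigation."""
    ph, pw = camera_patch.shape
    mh, mw = satellite_map.shape
    patch = (camera_patch - camera_patch.mean()) / (camera_patch.std() + 1e-9)
    best_score, best_xy = -np.inf, (0, 0)
    for y in range(mh - ph + 1):
        for x in range(mw - pw + 1):
            window = satellite_map[y:y + ph, x:x + pw]
            win = (window - window.mean()) / (window.std() + 1e-9)
            score = float((patch * win).mean())
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score  # pixel offset in the map + match quality

# Example: a 200x200 "satellite tile" and a 32x32 camera crop taken from it.
rng = np.random.default_rng(1)
tile = rng.random((200, 200))
crop = tile[80:112, 50:82]
print(locate_frame(crop, tile))  # should report an offset near (50, 80)
```

The point of the sketch is simply that the core matching step needs a stored map and modest arithmetic, not a billion-dollar training run.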

The confusion is my fault: at the center of the piece is this question about "Big Data AI." It's resolvable--all AI today is "Big Data," in the sense that autonomous drones need access to huge amounts of satellite data. All AI today works on the same principle: acquire some type of data and optimize a function to approximate the real thing, the "real distribution." It's all Big Data AI. Some of it is VERY expensive to train and prepare, and other aspects are driving down in cost, not up, while getting more powerful. At any rate, I expect that nearly all of the products of AI--the expensive and the cheap--will diffuse into the world, hence my conclusion that there's currently "no way out" of the new AI Cold War. I hope this helps!
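
Purely as an illustration of that "same principle" (and not anything specific to drones or to any particular lab's models), here is a tiny Python sketch of the recipe: draw samples from an unknown process, then adjust a function's parameters to minimize its average error on those samples. The linear model, synthetic data, and plain gradient descent are deliberately simplistic stand-ins; foundation models do the same kind of thing at vastly greater scale.

```python
import numpy as np

# Toy version of the shared recipe: sample data from an unknown "real
# distribution," then optimize a parameterized function to approximate it.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=200)  # the "real" process, unknown to the model

w, b = 0.0, 0.0          # model parameters
lr = 0.1                 # learning rate (step size) for gradient descent
for _ in range(500):     # minimize mean squared error between model and data
    err = (w * x + b) - y
    w -= lr * 2.0 * (err * x).mean()
    b -= lr * 2.0 * err.mean()

print(f"learned w={w:.2f}, b={b:.2f}")  # lands near the true 3.0 and 0.5
```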


Thank you Erik for the detailed explanation, makes sense indeed!


Odrej, I think you answer your own question. We are not necessarily, or even most threateningly, talking about massively expensive systems built by state actors.


"And getting it perfect doesn't matter when you're trying to blow stuff up, to put it bluntly. " This is the key, I think. FWIW, I can report that much of the US strategic/defense community is obsessed with China and AI development, arguably to the point of neglect of other concerns.

Jul 9 · Liked by Erik J Larson

Some use their abilities to create and use wars to get rich and some use their abilities to help others. We are all human but not all humane. One doesn’t need AI to recognise this pattern in thousands of years of documented human history.

https://www.smithsonianmag.com/history/how-marie-curie-brought-x-ray-machines-to-battlefield-180965240/

author

Nice story about Marie Curie. I didn't know she was involved in WWI like that.


It takes much more energy and time to create and keep doing something meaningful than to blow something up and keep profiting from it. There are plenty of people who keep creating and doing meaningful things, but they are too busy to be tooting their own horns … so their stories of courage and meaningful actions aren’t part of the general narrative. Perhaps, we have been fooled by the general narratives, and we should all keep faith in the fact that courageous and meaningful acts have impact and that’s where the fairy tales come from.

Jul 9 · Liked by Erik J Larson

One doesn’t need AI to blow things up … but for sure AI makes it faster and more mindless.

Jul 9 · edited Jul 9 · Liked by Erik J Larson

I love that you used that photo from Dr. Strangelove (George C. Scott as General Buck Turgidson) – and I just happened to use this very quote recently in a thread of conversation with Gary Marcus – I can just hear Turgidson's voice:

"Turgidson:

Mr. President, we are rapidly approaching a moment of truth both for ourselves as human beings and for the life of our nation. Now, the truth is not always a pleasant thing, but it is necessary now to make a choice, to choose between two admittedly regrettable, but nevertheless distinguishable, post-war environments: one where you got twenty million people killed, and the other where you got a hundred and fifty million people killed.

Muffley:

You're talking about mass murder, General, not war.

Turgidson:

Mr. President, I'm not saying we wouldn't get our hair mussed. But I do say... no more than ten to twenty million killed, tops. Uh... depending on the breaks."

Another pertinent movie here of course is "Colossus: The Forbin Project" from 1970, where AI is essentially running the war room decisions – and can do it without emotions and vastly faster – and is hooked into the whole nuclear stockpile. But the machine turns the tables and hooks up with its Russian counterpart AI system, and that communication link allows them to accelerate their learning and co-evolution in an exponential feedback loop, such that they create the first technological singularity (before the term was invented), and the super-AI then conspires to "save" mankind...by total control. Great and spine-chilling flick.

What scares me more than weapons systems control or rogue AI scenarios is psychological warfare by one’s own government (or large powers, such as tech lords + politically interested government players…), using advanced neuroscience knowledge combined with AI that’s interpenetrated everyday society and life, to manipulate human populations from the inside-out, or at least influence them strongly in certain directions. Like the ultimate propaganda machine, in the name of control and power. That means loss of essential freedom – but only if one can be manipulated...(a big subject in itself).

The real, essential problem of course is human ignorance, not AI. And it's interesting that the failure to even understand why we are not even close to real AI ("AGI") – seeing mere instrumental intelligence, process intelligence, surface intelligence, as real intelligence – is the same ignorance driving all the conflict. It's a conflict in how we see reality, fundamentally a self-conflicted view, that manifests as stupid AI and stupid decisions by humans. But this may simply be the nature of this dreaming... so enjoy the show...

P.S.: Erik, are you *currently* funded by the Thiel Foundation?

author
Jul 10 · edited Jul 10

Hi Eric, I get you on your points. At some point, the project of lifting ourselves out of the state of nature and using science and technology to create a new world went off course. We're stuck in a highly mechanical and automated culture now, and we see "intelligence" in technology before we notice that we're making the technology. Not sure how it could be "fixed," since it's become the default way of life.


It's true that as a consequence of "split mind" – to borrow a Zen phrase (where "mind" is with a capital "M," as universal mind or consciousness) – and of seeing oneself as an object, we mistakenly have faith in an independent objective reality, with all the consequences of that: "suffering" being the way of summing it up. And one of the strangenesses of that falling away from truth is externalizing our mistake and amplifying it with hubris piled on hubris: defending the false by digging deeper into wild hubris. It is tremendously creative and destructive at the same time...

And now this bizarre relationship we have with a creation of mind, externalizing an image of mind as a machine called a computer – a wonderful and useful invention – then, as a further act of religious faith, wanting to build a new god called "AI." And boom, here we are. But a tool is only as good as the user of it, so we weaponize everything we grasp out of fear and a need for control...

I know the way out (from "within" so to speak), but I can't fix the world. In fact, the world-fixers are usually the biggest problem-makers!

author

Like your quotes and comments here. I'm not currently funded by the Thiel Foundation. I'm still connected to Silicon Valley, but that's another discussion :)

Jul 12 · Liked by Erik J Larson

It *would* be good to know what your agenda is, if you have one, and how the funding affects it, if any.

(By the way, I placed a hold on your book at the local library, so will take a look soon. :) )

author

Hi Eric, no agenda, in fact we're still having discussions about whether "co-author" or "with contributions from" is the best language, as I have full authorial control to write the book as I wish. The contributor wants to pull in some high-level AI experts, policy makers, and pundits to interview, and make a documentary. To me, great, I'm writing a book! My agenda is to write for the culture at large, and to write something worth reading.


By the way, apologies for the length of my email (ideaphoria at work), which probably accounts for why you were not able to respond. :) But seriously, it's hard to summarize these ideas in a brief way sometimes. I really should post articles to Substack so I don't make these long comments, but I'm still dealing with my old website and editing that, and still pondering what the best strategy is for the new platform – is it more blog-y or more permanent article-y? ... perhaps someone has some feedback on that, or perhaps I should just dive in...

Jul 9 · Liked by Erik J Larson

Would it make sense for Elon Musk's xAI eventually to pursue military contracts?

Military AI from xAI should work well with Musk's Starshield satellite constellation, which the military wants.

Musk and Thiel are both part of the PayPal mafia and share a similar world view. Would Palantir and xAI mostly cooperate, or would they compete?

Even competition is cooperation when you're one of a few big companies hoping to never be accused of being monopolistic.

Musk's teams have been able to produce EVs and provide launch services more affordably than competitors. They may do the same for AI.

Supposedly Musk wants to build AI that won't wipe out humanity on its own (not sure whether Musk would oppose AI wiping out part of humanity if it's in the "national interest").

author

My understanding of Musk, whom I've never met, is that he has his tongue firmly in cheek in much of the discussion about AI. I'm not sure about Musk and Thiel, and I either can't or shouldn't comment on what I do know (which isn't much), but I know they remain friends, and collaboration doesn't seem particularly far-fetched to me.


I see the defense sector as the one place where AI can make a killing (if you'll pardon my pun). The problem with most proposed AI applications that I read about is that they have at least two of the following:

a) They are fairly niche in their application parameters. For example, the AI system that is best optimized for diagnostic work on x-rays won't be ChatGPT, with all of its linguistic/internet-based inputs.

b) They focus on low-value activities. For example, AI-generated "art" is great when you want a low res image to add to your Substack article, but almost no one will pay much for that product in the first place, because it's easy to pull images from an image database (if not just Google Images) or take your own pictures.

c) They can't be deployed with the same kind of accountability mechanisms that make commerce work well. Sure, you can draft a legal document with an AI system, but lawyers get paid to make sure that documents' content is CORRECT, and there's a whole system of accountability if they screw up. No tech company in their right mind would want any part of that liability.

The thing about war is that it's destructive. A weapons platform doesn't have to work every time. It just needs to work some of the time. It moves in the same direction as entropy. Constructive activities need to work or they have no value, and they're working against entropy.

What's more, there's a TON of money sloshing around to develop weapons systems, so their development and deployment don't necessarily have to operate on tight margins. At the moment, there's an insane amount of investment capital pouring into R&D for AI systems since LLMs blew up, but at some point the AI products from these investments are going to have to demonstrate that they can boost productivity and profitability, and it's at this point that problems a), b) and c) will start to cull the field and restrict the flow of new investment capital to AI companies. That may never happen with defense contractors.


The problem is that in the end, the winner of any AI reliance like this is not America or China, but AI itself. It's one thing to joke about Terminators, but this is building the killer robots and politely asking them to take over.

I'm not saying there is an easy way out but we should try to find one. Otherwise, the lives of our children look dim at best.


Are decision support systems being used diplomatically? If sufficient economic incentives could be provided, could they be used to short-circuit an arms race or avoid a future one? I'm not asking if it is likely - I don't think these people think in those ways - just whether it is possible.

author
Jul 10 · edited Jul 10

Hi Guy,

Hmmm. Probably not, unfortunately. The reason the war room systems show promise is that they're treating the theater of war like a game, essentially, that requires only optimization strategies--moving this battleship here, massing troops there, and so on. It's coordinating chess pieces, essentially, and it lacks the human component that would make for diplomacy (and real intelligence). That's my take anyway. And also keep in mind that the decision support systems are intended only to supplement rather than replace human decision making, so they're hardly the real thing in terms of AGI.


Erik, I'm guessing that "supplant" is probably "supplement" in that last sentence. I get that, just as I get that we are not talking about AGI or even GAI for the most part. I do get the game part as well - clearly playing finite but highly fluid games, rather than infinite ones.

When I mentioned decision-support systems in the context of diplomacy, I was thinking explicitly of how it is said that a human plus an AI makes a stronger chess player than either a human or an AI by itself.

I will admit that I had an almost visceral reaction against your post. I can see the logic of it at the level of governments and tech billionaires. I am also very aware that military and intelligence needs are driving much of AI development. Likewise, I know that pretty much everything has been weaponized, even our phones and watches. As a child of the old Cold War, I want to find a way out, so I think I was reacting to the air of resignation in your piece.

I am still thinking about this, and about how sustained and rapid technological arms races are a comparatively recent phenomenon in history. One direction I am going with this is whether there is a way for one side to change the game in such a way that it is no longer playable. The other is whether the speed of change may lead to an instability such that the game simply becomes unplayable. That of course could lead to all-out war, or it could lead to an automatic de-escalation as the systems collapse from over-complexity or lack of resources. I admit that neither is likely, but just as we cannot allow AI to play only by the rules of the companies that create it in the consumer realm, so we have to look beyond and force a questioning of its application in the military and security realms.

I doubt you will agree with most of what I am saying but respect you for putting your concerns and conclusions out for others to consider.

author

Lol. Yes, supplement. Sorry I'm a little tired I guess. That's funny. I'm enjoying your comment here.

author

Actually, I like the direction you're taking. Sort of like the old WarGames movie, where the system decides there's no winner, but in this case we as people work out a strategy. The resignation comes from, I suppose, reading too much history! We mechanize warfare, then we have smart bombs, now we have AI, and it seems to be a one-way street. It's hard for me to imagine how we can maintain geopolitical stability, unless everything becomes, in effect, mutual assured destruction. Clearly that's not what's happening in Ukraine, or in the Middle East, but it's true we haven't seen the superpowers in major conflicts (emphasize major) in decades.

founding

The most unnerving part of all of this is that the "west" lacks true statesmen. They are bumbling and stumbling to the brink of WW3. The end "target" of every drone mission is not a tank or a building or an oil refinery; it is a human being. With the exception of Viktor Orban--there may be one or two others--none of the heads of state in the rest of the west will even consider an end to the killing. It's nothing more than checkers and a video game to them.
