We're Stuck with Big Data "AI"
Locked in a perpetual cold (and possibly hot) war with geopolitical rivals, we have no way out of a data-driven AI race, whether it's scientifically interesting or not
Hi everyone,
I wasn’t planning on posting today, but as the hurricane seems to have passed to the east of me in Bryan, Texas (Houston got walloped), I’d like to get some thoughts out. I’m in the research phase for a new book, incidentally, and so I spend (literally) hours reading books and articles and taking notes. So, this is therapy as much as anything! Too much reading and not enough writing is whatever’s the opposite of chicken soup for the soul. So. I’m calling this a Public Service Announcement (PSA) type of post. It’s open to all.
I read The Economist, which reports on AI, often with a more global focus. They write about startups in Europe. They write about China and AI. They write about the war in Ukraine. I read these pieces, and a dreadful feeling I’ve had for years now has crystallized into a terrible recognition. AI has become like nuclear weapons. We’re in an arms race. And we—a superpower I mean—can’t get out. Science has finally found a way to start destroying itself. Let’s turn to war, and AI.
Killer Drones
It’s 2024, we’re on the battlefield outside Kyiv in Ukraine, and there’s something flying overhead. It’s a first-person view (FPV) drone with a deadly payload, buzzing lazily toward its destination—a Russian tank. No one notices it. It barely makes a sound. It strikes at the base of the turret’s rear, detonating the tank’s own munitions. Several more tanks go up in flames. Everyone dies. Earlier, a Russian FPV drone, also lethally armed, buzzes toward Kyiv. The Russian drones like dugouts, trenches, groups of soldiers. They also like civilian cities. Today this one is heading into Kyiv to cause more destruction to infrastructure, supply lines, and residential buildings after a Russian missile strikes a children’s hospital. In the panic, the drone buzzes in and drops its payload. More people die. No one saw it coming. Several more drones will show up in the days that follow, spreading the chaos and adding still more to the wounded and dead.
The weapon that hit Kyiv’s Okhmatdyt hospital today, one of Ukraine’s major treatment centers for children, was a Kinzhal, a hypersonic missile. It uses “AI” for guidance and targeting. Drones, too, will soon have “AI inside.”
Both combatants in the war in Ukraine make extensive use of drones. Russia uses drones to spread chaos and fear. They’ve been implicated in the destruction of supply lines, infrastructure like bridges and power plants, and civilian targets like residential buildings and businesses.
Ukraine—as far as we know—uses drones in place of more expensive artillery for destroying legitimate military targets. They’re effective. In one week last year, FPV drones destroyed 27 Russian tanks and 101 big guns. They have drawbacks; they don’t perform well in inclement weather, for instance. But they’re cheap, and they improve at the pace of consumer electronics, not military planning. Every year they get better: longer range, larger payloads, improved targeting accuracy. And? They get more autonomous. Enter AI.
Leveling Up
Autonomous weapons are, of course, not new. High-end munitions and cruise missiles have been autonomous for decades. The new generation of autonomous weapons is made possible, largely, by data-driven AI, packed into increasingly small and cheap microchips. FPV drones—piloted by humans on the ground—are by far the most common and inexpensive drones used in the war, but they require a large supply of trained pilots and are vulnerable to “jamming,” which severs the connection between pilot and drone. Truly autonomous drones are, so far, rare and expensive, but the price keeps dropping, as AI capabilities are better understood and AI tasks like visual object recognition (that’s a tank) get better and better. Pilots can now lock a semi-autonomous drone onto a known target, and the drone will find its target regardless of whether the original radio-frequency connection gets jammed. Other AI in the works intelligently switches frequencies so the jamming is ineffective. Data-driven AI advances are leveling up a cheap consumer technology to a new phase in warfare. Where is this going?
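For readers who want a feel for the frequency-switching trick, here’s a minimal sketch of the classic frequency-hopping idea it builds on: both ends of the link derive the same pseudo-random channel sequence from a shared seed, so a jammer camped on one frequency only blanks the occasional hop. The “intelligent” versions in the works presumably layer learning on top (say, preferring channels that have recently been clean); every name and number below is invented for illustration.

```python
# Toy frequency-hopping sketch: transmitter and receiver derive the same
# pseudo-random channel sequence from a shared seed, so no coordination
# traffic is needed and a single-frequency jammer misses most hops.
# (Illustrative only; the channel plan and seed are made up.)
import random

CHANNELS_MHZ = [5725 + 5 * i for i in range(16)]  # hypothetical channel plan

def hop_sequence(shared_seed: int, n_hops: int) -> list[int]:
    """Both ends generate the identical sequence from the shared seed."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS_MHZ) for _ in range(n_hops)]

pilot_side = hop_sequence(shared_seed=0xC0FFEE, n_hops=8)
drone_side = hop_sequence(shared_seed=0xC0FFEE, n_hops=8)
assert pilot_side == drone_side  # same seed, same hops: the link survives

jammed = {5725}  # a jammer camped on one channel
usable = [f for f in pilot_side if f not in jammed]
print(f"{len(usable)}/{len(pilot_side)} hops get through")
```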
Eric Schmidt, the former CEO of Google, is a frequent visitor to Ukraine and invests heavily in drone technology to assist a smaller and poorer Ukraine in its existential fight with Mother Russia. Ukraine should have access to cheap, effective weapons. It was invaded. But as the cost of killer drone technology drops, and the brains and cash of Silicon Valley drive its improvement, who else will find the technology attractive and within reach? Who shouldn’t have the technology, but will? It’s not difficult to glimpse a world perpetually at war, like the fictitious Oceania, Eurasia, and Eastasia of Orwell’s classic. No one ever “wins,” but the fighting, the bloodshed, and the mayhem never stop.
The best autonomous drone today is probably—predictably—America’s Switchblade 300, the Tesla of drones, with a $50,000 price tag per unit. The Ukrainian Scalpel drone, though, costs $1,000. Ukraine supplants traditional munitions and artillery with drones, enabling it to conserve the more expensive weaponry. Ukrainian drone commanders are quick to insist that drone technology is not, by itself, enough. But as killer drones keep getting cheaper and more autonomous (I’m reminded perversely of former Wired editor Chris Anderson’s 2000s paean to the digital age, Free: The Future of a Radical Price), they’re destined to end up in the hands of terrorist groups, rogue nations, and other bad actors. It’s like letting arch-villain Blofeld of James Bond movie fame have a crack at ruining the real world.
And it gets worse.
AI in the War Room
I had an interesting conversation a couple of years ago with a well-known investor in Silicon Valley (can’t name drop—sorry), and at one point he quipped, in our conversation about data-driven AI and its scientific value, “so, it’s basically good for surveillance and killer drones?” I smiled and nodded (why smile?), but I should have added: “It’s also getting much better at decision support in war rooms, identifying thousands of targets at once and enabling nations to wage war on an unprecedented scale.” How is Big Data AI so good at waging war? Let’s turn to the war room, and to how the latest AI “brains” are infiltrating decision making on the battlefield.
Decision-Support Systems Are Getting Good
AI is infiltrating the cognitive space occupied by decision makers: civilian leaders, generals, commanders, and the like. The same technology that is struggling to show relevance in the civilian business world is pumping up military aspirations and capabilities big time. Potentially time-sensitive tasks, like figuring out which weapon would be most useful in a particular situation, can now be handled by AI. Significantly, “decision-support systems” are now available, thanks to data-driven AI. Their task? Find the best way to wage war. DeepMind’s AlphaFold 2 finds new protein shapes (the protein-folding problem). Decision-support systems find optimal ways to kill more people and destroy more targets, more quickly.
Storm Clouds
In 2021, the Royal Navy contracted Amazon Web Services and Microsoft to build them a decision-support system (DSS) to coordinate disparate operations in large theaters of combat: a strike force team in Ukraine, a frigate in the Black Sea, missile systems in Turkey. In 12 weeks StormCloud was operational. A “mesh” network of sensors tracks the movements of soldiers, vehicles, and weapons. The data—shocker—is fed to a neural network “brain” that decides when missiles should be fired and where, which drones should fly to what destinations, and where soldiers should move in response to real-time changes. It worked. Well. Really well. An officer participating in the experiment referred to it as the “world’s most advanced kill chain.” Ugh.
The key idea behind DSS is simple. Humans—even skilled humans with experience—can only pick out so many targets from satellite and other data in real time, and can only coordinate so many actions. DSS uses object recognition technology to find more targets more quickly, and the complexity of coordinating large operations can likewise be handled by related machine learning technology, like reinforcement learning. Entire systems are springing up to automate large-scale warfare.
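The coordination half of this is, at its core, a very old optimization problem dressed up in new data. As a toy illustration (emphatically not how any fielded DSS works), here’s the textbook assignment formulation: given a cost matrix over assets and detected objects, pick the pairing that minimizes total cost. All the numbers are made up; scipy’s standard linear_sum_assignment routine does the optimization.

```python
# A toy sketch of the coordination problem at the heart of a DSS:
# given n detections and m available assets, pick an assignment that
# minimizes total cost (distance, time, risk...). Real systems are
# vastly more complex; these numbers are invented for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: estimated cost of sending asset i to handle detection j
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.5, 5.0],
    [3.0, 2.0, 2.0],
])

assets, detections = linear_sum_assignment(cost)  # Hungarian algorithm
for a, d in zip(assets, detections):
    print(f"asset {a} -> detection {d} (cost {cost[a, d]})")
print("total cost:", cost[assets, detections].sum())
```

The point of the sketch is scale: a human staff officer saturates at a handful of simultaneous pairings, while the optimization above is indifferent to whether the matrix is 3x3 or 3,000x3,000.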
The Defense Advanced Research Projects Agency (DARPA) [Full disclosure: I’m a former civilian contractor with DARPA] has tried for years to make such “brains” for the war room, but the AI technology wasn’t up to snuff until the latest wave of deep neural networks, which emerged commercially around 2012. These systems now work—or at least, they can be run in realistic experiments with successful outcomes. Much of war is simply figuring out what to hit and where, and DSS automates that and enables, well, more successful killing and destruction of targets. “Who is using what” is of course classified, but the wars in Ukraine and the Middle East both have major AI footprints that will only increase.
The formula is simple:
(1) Much faster computers
(2) Fancier algorithms (the “deep” in deep neural networks, and other technologies)
(3) Orders of magnitude more, and more accurate, real-time data from sensors and other sources
Result? Data-driven AI runs the gamut from a single killer drone flying over the steppes of Ukraine to major theater-of-battle operations using DSS and other systems. True, ISIS is not likely to purchase some black-market version of StormCloud. But rogue states like North Korea might—indeed will, as it becomes more available—and ISIS (yes, ISIS still exists, and in fact we’re working with the Taliban against them), Hamas, Hezbollah, and other terrorist groups are sure to jump on autonomous killer drone technology as it becomes cheaper and more powerful.
New Cold War
We’re taking heavy fire from “war talk” at the moment, but I think the nuts and bolts of how data-driven AI is getting weaponized—how warfare is one of AI’s main successful applications—is a story that needs to be told. Old-timer AI researchers will recall endless taxpayer dollars spent chasing “AI” down blind alleys in pursuit of some thinly disguised objective from a DoD program (“try to coordinate how to, errr, deliver a card to a million mothers located on three separate continents on Mother’s Day, but let’s say ‘Mother’s Day’ is variable….”). The old “AI” rarely worked, and it was hardly a real threat in terms of decision support or any other major cognitive task for the warplanner and warfighter. Our problem today is that it does work. The science of “AI” is quotidian and boring, unless you like GPUs and sparse linear algebra. It’s big computers and big data and an old technology with some additional middle layers (deep neural networks). Transformers, yes, do handle long-range dependencies and so can be trained to generate impressive results on text and other types of data. But this “big everything,” data-crunching science of AI is form-fit for the military. Eisenhower’s warning about the military-industrial complex, delivered in his 1961 farewell address at the height of the Cold War, seems rather prescient. And here’s why it all matters: there’s no way out.
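To make the “sparse linear algebra” jab concrete: the core of a transformer’s middle layers is scaled dot-product attention, which really is just a few matrix multiplies and a softmax. A textbook sketch in plain numpy, with toy dimensions invented for illustration:

```python
# Scaled dot-product attention, the core "middle layer" of a transformer,
# in plain numpy. The "science" is big linear algebra run at big scale.
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- lets every token weigh every other."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # weighted mix of values

rng = np.random.default_rng(0)
tokens, d = 5, 8  # tiny made-up dimensions
Q, K, V = (rng.standard_normal((tokens, d)) for _ in range(3))
print(attention(Q, K, V).shape)  # (5, 8)
```

That’s the whole trick, scaled up by orders of magnitude in data, parameters, and GPUs; hence “quotidian and boring,” and hence so amenable to militaries with big budgets and big sensor feeds.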
There’s No Way Out
Sorry (no, I’m not depressed!). Because the boring data-driven AI works so well for killing folks, for military applications, not investing in it amounts to ceding military strength to one’s enemy. As long as China pursues data-driven AI advances to weaponry, the United States has to keep doing so as well. You don’t bring a knife to a gunfight. No one can back down. It’s like the second coming of nuclear weapons. If I can use a DSS to pick out a thousand targets in a few seconds, and you’ve got Yeoman Smith over there, who will get the edge on the battlefield? Who will win the conflict? And who will rule the world? We’re stuck. All I can hope for, as a civilian and a computer scientist, is that we also keep investing in different approaches, letting a thousand flowers bloom, so to speak.
We need lower-data methods for performing intelligent actions with computers. We need to think (apologies) out of the box. In another post I may talk about liquid neural networks and the possibility of decentralizing AI (this won’t solve our desire to wage war on each other, unfortunately). There is other research out there, but for the foreseeable future, I’m afraid, we’re in a big data AI arms race. That’s why you’ll often find Eric Schmidt on his Gulfstream jet en route to Ukraine. That’s why Elon Musk’s SpaceX contracts with the US Air Force. Peter Thiel founded Palantir, a major big data analysis platform for military and intelligence agencies. [Full disclosure: I’ve been funded by the Thiel Foundation.] I’m glad we have smart folks on “our team,” so to speak. But let’s face it: we’re in a new arms race. If you’re anxious about the future, you’re rational. Our world looks increasingly, and for good reason, damned scary.
Erik J. Larson
Hi Jeffrey,
Yes, I agree with pretty much all of this. "AI" isn't really a revolution in the commercial sphere (see for instance: https://www.economist.com/finance-and-economics/2024/07/02/what-happened-to-the-artificial-intelligence-revolution). It's not clear it boosts productivity, it suffers from essentially incurable errors that limit its effectiveness, and it's not fluid, in the sense that you can't move it between domains without retraining (and in some cases, as with classifying X-rays, as I think you mentioned, Gen AI doesn't even fit). So: the business case for all this data-driven "AI" isn't really there.
My point about the military case is, firstly, as you also pointed out, that it "works" more or less. It's in use. Investments are huge. And getting it perfect doesn't matter when you're trying to blow stuff up, to put it bluntly.
All agreed.
Don't forget my overarching point: once AI started "working" for military applications, it became an instant arms race that ensnared all the major nations and really the entire world. The US CAN'T quit pushing military applications of AI if it works, because China will keep pushing, and then China will have dominance. So it occurred to me that we created, in effect, another nuclear arms race, and we can't get out of it. We can't all decide we want "less AI" and have Silicon Valley turn to (what?) nanotechnology, as long as AI is what's giving an edge, however small for now, on the battlefield. We're in another Cold War, and just like with nukes, there's no real way out but more and more development along the same lines. That's the main point. Thanks, Jeffrey.
Some use their abilities to create and use wars to get rich and some use their abilities to help others. We are all human but not all humane. One doesn’t need AI to recognise this pattern in thousands of years of documented human history.
https://www.smithsonianmag.com/history/how-marie-curie-brought-x-ray-machines-to-battlefield-180965240/