AI isn’t about general intelligence—it’s about solving the right problems. Here’s my new approach to making autonomous drones smarter in warfare—and why AGI still isn’t the point.
I worked on a proposal for a fire-and-forget missile in the early 1980s. That development effort became the Javelin missile system, so effective today in Ukraine against the T-72 tank. The T-72 has both a unique IR signature and a distribution of armour and ordnance that makes a top-down missile impact most effective (https://en.wikipedia.org/wiki/T-72). Luckily for us, Russia continues to develop and manufacture this 53-year-old tank. However, this demonstrates how "hard-wired" and inflexible modern fire-and-forget missiles are.
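To make the "hard-wired" point concrete, here is a toy sketch. It is purely illustrative: the template values, function names, and threshold are all invented, and real seeker logic is far more sophisticated. The point is that everything the weapon "knows" about its target is frozen in at design time.

```python
import numpy as np

# Invented 8x8 "stored signature": a hot engine deck over a cooler hull.
STORED_SIGNATURE = np.zeros((8, 8))
STORED_SIGNATURE[1:3, 2:6] = 1.0   # hot exhaust/engine region
STORED_SIGNATURE[4:7, 1:7] = 0.4   # warm hull

def signature_score(ir_patch: np.ndarray) -> float:
    """Normalized cross-correlation between a sensed 8x8 IR patch and the
    one stored template. Returns a value in [-1, 1]."""
    a = ir_patch - ir_patch.mean()
    b = STORED_SIGNATURE - STORED_SIGNATURE.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def is_target(ir_patch: np.ndarray, threshold: float = 0.8) -> bool:
    # One template, one threshold: anything that doesn't resemble the
    # stored signature (a pickup truck, a decoy heater) is invisible.
    return signature_score(ir_patch) >= threshold
```

A newer tank with a different heat profile defeats this matcher entirely; nothing in it can generalize.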
I stand by my assessment of AI risk (https://tomrearick.substack.com/p/understanding-ai-risk):
1. AI is like a really dumb person.
2. Granting enormous power to a dumb or evil actor (human or machine) is a bad idea.
3. Computer networks and life-or-death decisions in medicine or the military are areas where AI should be limited.
Warfare has changed since the 1980s. In the Middle East, the military vehicle of choice is a Toyota pickup truck, and pickup trucks are the most popular vehicle in most of the world. It is getting much harder to tell the good guys from the bad ones. Heck, even humans can't avoid collateral damage or friendly fire. Do we really want something dumber than a human making these kinds of decisions?
I have similar concerns, Tom! Let me just follow up here again. The problem is, if signal jamming is effective in some percentage of cases and relatively cheap autonomous navigation equipment can prevent that, the rules of war, capital, and life will dictate that we're going to deploy those weapons. Guaranteed. So the question is how we do it better than our enemies. I always get in trouble when I say this, but that actually is the logic. Also, autonomous navigation tends to have huge commercial application, so the research and development itself has a certain value. It's complicated, I realize! I don't pretend to have the final answer.
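That logic can be put in back-of-the-envelope numbers. All figures below are invented, purely to show the shape of the incentive:

```python
# All numbers are hypothetical illustrations, not real-world estimates.
p_jam = 0.6             # fraction of sorties where the radio link is jammed
p_hit_linked = 0.8      # hit probability while an operator keeps the link
p_hit_autonomous = 0.7  # hit probability of onboard terminal guidance

# Remote-piloted: a jammed link means a wasted sortie.
eff_piloted = (1 - p_jam) * p_hit_linked   # 0.4 * 0.8 = 0.32
# Autonomous: jamming after launch is irrelevant.
eff_autonomous = p_hit_autonomous          # 0.70

print(f"piloted: {eff_piloted:.2f}, autonomous: {eff_autonomous:.2f}")
```

Once jamming is common enough that the autonomous line wins, the deployment decision makes itself; that is the dynamic being described.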
I totally agree. That is why I am trying to reverse-engineer the navigational faculties of the honey bee (https://tomrearick.substack.com/p/first-flight and https://tomrearick.substack.com/p/beyond-ai). Besides offering a level of autonomy that no existing robot or cruise missile has today, I believe the faculty of navigation has been exapted (re-purposed) for much of the human-level reasoning and problem solving we humans are so proud of.
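Tom's approach is his own, but for readers curious what reverse-engineering bee navigation can look like, one well-documented behavior is the optic-flow "centering response" from Srinivasan's corridor experiments: a bee steers so that image motion is balanced between its two eyes. A minimal sketch (function name and gain are invented):

```python
import numpy as np

def centering_steer(flow_left: np.ndarray, flow_right: np.ndarray,
                    gain: float = 0.5) -> float:
    """Bee-style centering: steer away from the side whose optic flow
    streams past faster, i.e., the nearer wall.
    Returns a yaw command; positive means turn right."""
    left = float(np.mean(np.abs(flow_left)))
    right = float(np.mean(np.abs(flow_right)))
    # Faster flow on the left eye means the left wall is closer: turn right.
    return gain * (left - right) / (left + right + 1e-9)
```

No map, no GPS, no stored model of the corridor: the control law reads the world directly, which is part of what makes such navigation so hard to jam.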
Thanks, Tom, for sharing this. I appreciate it.
Thank you for this, Erik! I loved this! The engineering problems faced by AI are real. The problems presented by regulators and policy makers are artificial. Basing policies that will affect the general population for the next decade or more on false premises such as "scientists are close to AGI" is like basing the production of wine on the Jesus formula. So why are they doing it? It doesn't take a genius to grasp the facts… AI isn't a new trick in town; it has been around since the '50s. "Autonomous" is a misnomer according to the ISO explanatory report and refers to a high degree of automation, but somehow, by saying "autonomous" rather than "highly automated", we imply that we are close to AGI or have already reached it. So why? The economy is in a dire state, the environment is in a catastrophic state, public health is not great… yet we are innovating and adding trillion-dollar value to the economy and society… and basing deficit-riddled budgets on these AGI assumptions. The race to the bottom is on. And one might have thought we reached it back in 2008 (just 19 years after the official end of the Cold War, the fall of the Berlin Wall…). Shame we can't really blame the Russians for the financial crisis of 2008.
I loved the way you said it: we never solve general problems. AGI, in the public domain of policy making and budgeting for generations, is a specific problem. To me, "AGI" is an abstract concept: the imaginary, equally brilliant friend of a lonely, persecuted scientist (Tesla, Turing, etc.). It is the abstract equal of a tortured mind, filling in the loneliness and alienation of an individual from his peer group.
Yet, some entrepreneurial scientists managed to sell the potential of this abstract concept to business. A Faustian deal.
Oh my, Jana, you bite into the whole issue! Much appreciated. I would love to get your perspective on regulation… thank you!
I agree with you 100%, by the way, on this point: when we say "autonomous", we're really saying "more automated". That's actually what it is.
To be honest, everything to do with autonomous killer robots turns my gut. I don't want these things to exist and my fondest hope is that neither this idea nor any other will make them viable.
But, that said, talking about *anything* other than LLMs in AI is great, so please, more of this. Of course video game AI works on a similar set of premises, though the video game AI usually has perfect information. I'd like to see the state of the art advanced enough that we don't have to give our enemy AI perfect information anymore.
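For what it's worth, the perfect-information point fits in a few lines. The names and numbers here are made up; the contrast is the point:

```python
import random
from dataclasses import dataclass

@dataclass
class World:
    player_pos: tuple[float, float]  # ground truth the engine always holds

def cheating_enemy(world: World) -> tuple[float, float]:
    # Classic game AI: read the true player position straight from the engine.
    return world.player_pos

def honest_enemy(world: World, detect_prob: float = 0.7,
                 noise: float = 3.0) -> tuple[float, float] | None:
    # Imperfect information: sometimes no detection, never an exact fix.
    if random.random() > detect_prob:
        return None  # lost track of the player entirely
    x, y = world.player_pos
    return (x + random.gauss(0, noise), y + random.gauss(0, noise))
```

Advancing the state of the art means making `honest_enemy` behave competently, estimating and predicting instead of peeking.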
Thank you, Erik, very interesting! (And btw, great that you did an audio episode again!)
I'm wondering: isn't it then misleading to call this "progress in AI"? Because, and maybe that's just my inference, it implies that "hey, we're one solution closer to AGI", in the sense of seeing AGI as a collection of narrow solutions so vast that there's a narrow solution for any given situation. (I believe even if we consider it possible to have a narrow AI for any given problem, just for the sake of argument, we'd still have the hard problem of choosing the right one for a situation, so a recipe for AGI is "a vast collection of narrow AIs and an AGI on top", which is kinda funny.) But back to my original point: if I follow what you said, that AI feels more like engineering than science, wouldn't we have to consider it "progress" whenever an architect designs a new building…? (Yes, that parallel is flawed, but I hope it illustrates what I mean.)
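The regress in that parenthesis can even be written down. Everything below is a hypothetical sketch: a "general" system assembled from narrow solvers plus a router, where the router is exactly where the generality was supposed to come from.

```python
# Hypothetical sketch of "a vast collection of narrow AIs and an AGI on top".
narrow_solvers = {
    "chess": lambda board: "best move for this board",
    "protein folding": lambda seq: "predicted structure",
    "drone navigation": lambda sensors: "steering command",
}

def route(problem_description: str, payload):
    # The hard part hides here: mapping an open-ended, novel situation
    # onto one of a fixed set of categories is itself the general-
    # intelligence problem the collection was supposed to dissolve.
    for task, solver in narrow_solvers.items():
        if task in problem_description.lower():
            return solver(payload)
    raise LookupError("no narrow solver fits; the regress bites here")
```

The keyword lookup stands in for the "AGI on top", and for any genuinely new problem it fails in exactly the way the comment predicts.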
Thanks Erik. Always a pleasure to hear your insights. You might be interested to know that Stephen Fry just published a post on his Substack about AI. I'd love to hear what you think about that... https://stephenfry.substack.com/p/ai-a-means-to-an-end-or-a-means-to
"There can be no question that Ai must be regulated and controlled just as powerfully as we control money. To return to my river metaphor: Ai must be canalised, channeled, sluiced, dredged, dammed and overseen.
Back to our letter C. Countries. In an age of rising populist nationalism, do we trust individual nations to use Ai honourably and safely? Think of Ai drone swarm technology for surveillance, assassinations, crowd control; think of automated weaponry of every kind. If one nation has any of it, all nations believe they have to also. As for corporations. Anything that can give them the edge that drives to more profit, more market share must be had — and nothing can offer more edge than Ai. Criminals. We shudder at what Ai can give them."
Thanks, Ondřej (I still need to find the character for the letter in your name on my keyboard),
One thing I keep stressing is that folks who like "dreamy" AI in consumer applications (like, what, TikTok?) but not "fearsome" AI like killer drones need to understand that AI is really just a set of computational tools to enable certain types of outcomes. The reason fearsome AI applications like autonomous drones keep showing up in otherwise fully commercial/civilian discussions is more complicated, but basically (a) it's a species of autonomous navigation, so work there helps self-driving car work, et cetera; (b) people think it's cool for vehicles to navigate without human involvement; and most importantly (c) we're in an arms race with other countries, unfortunately, which means if we don't press our best thinking in that direction, we'll be at a disadvantage militarily. The military angle to this stuff drives people crazy, I realize, but given that AI is an effective set of tools, it's really inescapable. On the question of AGI, I don't think we can add up narrow successes, as we have a conceptual distinction there. General intelligence (aka intelligence) isn't about these discrete engineering tasks. This is a good question/comment, thank you.
Curious to know what you think about this: https://newatlas.com/ai-humanoids/ai-rl-human-thinking/
Hi J-P, computers are calculators. For humans, that's a mode of thinking. It's not thinking itself. So the two will always be distinct, and we build technology for ourselves. AI is always going to fit something that we have in mind. With the military, it's fairly obvious that if you don't have to put a person in an aircraft and can have a high success rate at the target, you're going to do that. I'm not a military guy, I'm a civilian; I just point out that a lot of the research and development for artificial intelligence is in fact for military purposes. So we can stick our heads in the sand or we can acknowledge that. I think people sometimes wonder why I highlight military technologies: to call attention to them! So we can think broadly about the problem here. Thank you very much for your comment. I appreciate it.