9 Comments

I worked on a proposal for a fire-and-forget missile in the early 1980s. That development effort became the Javelin missile system so effective today in Ukraine against the T-72 tank. The T-72 has both a unique IR signature and a distribution of armour and ordnance that makes a top-down missile impact most effective (https://en.wikipedia.org/wiki/T-72). Lucky for us, Russia continues to develop and manufacture this 53-year-old tank. However, this also demonstrates how hard-wired and inflexible modern fire-and-forget missiles are.

I stand by my assessment of AI risk (https://tomrearick.substack.com/p/understanding-ai-risk):

1. AI is like a really dumb person.

2. Granting enormous power to a dumb or evil actor (human or machine) is a bad idea.

3. Computer networks and life-or-death decisions in medicine or the military are areas where AI should be limited.

Warfare has changed since the 1980s. In the Middle East, the military vehicle of choice is a Toyota pickup truck. Pickup trucks are the most popular vehicle in most of the world, so it is getting much harder to tell the good guys from the bad ones. Heck, humans can't avoid collateral damage or friendly fire. Do we really want something dumber than a human making these kinds of decisions?

author

I have similar concerns, Tom! Let me just follow up here again. The problem is, if signal jamming is effective in some percentage of cases and relatively cheap autonomous navigation equipment can prevent that, the rules of war, capital, and life will dictate that we’re going to deploy those weapons. Guaranteed. So the question is: how do we do it better than our enemies? I always get in trouble when I say this, but that actually is the logic. Also, autonomous navigation tends to have huge commercial applications, so the research and development itself has a certain value. It’s a complicated reality! I don’t pretend to have the final answer.


I totally agree. That is why I am trying to reverse-engineer the navigational faculties of the honey bee (https://tomrearick.substack.com/p/first-flight and https://tomrearick.substack.com/p/beyond-ai). Besides offering a level of autonomy that no existing robot or cruise missile has today, I believe the faculty of navigation has been exapted (re-purposed) for much of the human-level reasoning and problem solving we humans are so proud of.

author

Thanks, Tom, for sharing this. I appreciate it.

Sep 15

Thank you for this, Erik! I loved it! The engineering problems faced by AI are real; the problems presented by regulators and policy makers are artificial. Basing policies that affect the general population for the next decade or more on false premises such as “scientists are close to AGI” is like basing the production of wine on the Jesus formula. So why are they doing it? It doesn’t take a genius to grasp the facts… AI isn’t a new trick in town; it has been around since the 50s. “Autonomous” is a misnomer according to the ISO explanatory report and refers to a high degree of automation, but somehow, by saying “autonomous” rather than “highly automated,” we are close to, or have reached, AGI.

So why? The economy is in a dire state, the environment is in a catastrophic state, public health is not great… yet we are “innovating” and adding trillion-dollar value to the economy and society, and basing deficit-riddled budgets on these AGI assumptions. The race to the bottom is on. And one might have thought we reached it back in 2008 (just 19 years after the official end of the Cold War and the fall of the Berlin Wall…). Shame we can’t really blame the Russians for the financial crisis of 2008.

I loved the way you said it: we never solve general problems. AGI in the public domain of policy making and budgeting for generations is a specific problem. To me, “AGI” is an abstract concept of an imaginary, equally brilliant friend of a lonely, persecuted scientist (Tesla, Turing, etc.). It is the abstract equal of a tortured mind, filling in the loneliness and alienation of an individual from his peer group.

Yet some entrepreneurial scientists managed to sell the potential of this abstract concept to business. A Faustian deal.

author

Oh my, Jana, you bite into the whole issue! Much appreciated. I would love to get your perspective on regulation… thank you.

author

I agree with you 100%, by the way, on this point: when we say “autonomous,” we’re really saying “more automated.” That’s actually what it is.

Sep 15 · edited Sep 17

Thanks, Erik. Always a pleasure to hear your insights. You might be interested to know that Stephen Fry just published a post on his Substack about AI. I'd love to hear what you think about it: https://stephenfry.substack.com/p/ai-a-means-to-an-end-or-a-means-to

"There can be no question that Ai must be regulated and controlled just as powerfully as we control money. To return to my river metaphor: Ai must be canalised, channeled, sluiced, dredged, dammed and overseen.

Back to our letter C. Countries. In an age of rising populist nationalism, do we trust individual nations to use Ai honourably and safely? Think of Ai drone swarm technology for surveillance, assassinations, crowd control; think of automated weaponry of every kind. If one nation has any of it, all nations believe they have to also. As for corporations. Anything that can give them the edge that drives to more profit, more market share must be had — and nothing can offer more edge than Ai. Criminals. We shudder at what Ai can give them."


Thank you, Erik, very interesting! (And btw, great that you did an audio episode again!)

I’m wondering: isn’t it then misleading to call this “progress in AI”? Because (and maybe that’s just my inference) it implies that “hey, we’re one solution closer to AGI,” in the sense of seeing AGI as a collection of narrow solutions so vast that there’s a narrow solution for any given situation. (Even if we grant, for the sake of argument, that a narrow AI is possible for any given problem, we’d still have the hard problem of choosing the right one for a situation, so the recipe for AGI becomes “a vast collection of narrow AIs and an AGI on top,” which is kind of funny.) But back to my original point: if I follow what you said, that AI feels more like engineering than science, wouldn’t we then have to consider it “progress” whenever an architect designs a new building…? (Yes, that parallel is flawed, but I hope it illustrates what I mean.)
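To make that circular recipe concrete, here is a toy sketch (all the names and the Python framing are mine, purely hypothetical) of “a vast collection of narrow AIs and an AGI on top.” Notice where all the difficulty lands:

# Toy sketch of "AGI as a bag of narrow AIs". All names are hypothetical.

def chess_engine(situation): ...    # narrow solver: board positions only
def route_planner(situation): ...   # narrow solver: maps and waypoints only
def protein_folder(situation): ...  # narrow solver: amino-acid sequences only

NARROW_AIS = {
    "chess": chess_engine,
    "navigation": route_planner,
    "biochemistry": protein_folder,
}

def select_domain(situation):
    # Recognizing which narrow system a novel, open-ended situation
    # calls for is itself the general problem we set out to solve.
    raise NotImplementedError("this dispatcher is the 'AGI on top'")

def solve(situation):
    return NARROW_AIS[select_domain(situation)](situation)

Every answer to “how does select_domain work?” either smuggles in general intelligence or is just another narrow AI that needs its own selector on top.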
