5 Comments
Feb 24 · edited Feb 24 · Liked by Erik J Larson

I loved this one. I have a couple of thoughts. (Forgive me if I'm raising topics you've covered in your earlier work.)

In my demimonde — philosophy — the term "abduction" can refer to either of two things. There's the pattern of hypothesis generation that Charles Sanders Peirce identified and characterized (in a mind-boggling variety of ways), and there's the pattern of hypothesis-generation-cum-hypothesis-concluding, the latter nowadays called "inference to the best explanation." Though I've become fairly familiar with Peirce's semiology because of my pet research interests, I'm not, to my satisfaction, very knowledgeable about his views on abduction.

But I teach inference to the best explanation all the time: what it looks like when it goes well, its basic structure, the conditions under which it's possible for it to go well, and what to do when those conditions don't obtain. One thing that always strikes my students is how common and ordinary such reasoning is. That noise must be the skunk digging under the porch again. The children are suddenly quiet, so they must be doing something they know they shouldn't. That puddle in the laundry room must mean the washer's gone awry again.

Human beings are constantly, and virtually effortlessly, inferring the best explanation of everyday phenomena, implicitly entertaining rival explanations and ruling them out based both on our general understanding of things and on our understanding of the context, the latter often involving very subtle perceptions of relevance and salience. My students can very quickly be brought to see that brute computation, no matter how massive, simply cannot manifest such humble reasoning.

What you've crucially explained so well here is that AI is almost laughably impotent when it comes to reasoning about extraordinary things, such as plane crashes and outré scientific phenomena. What I'm suggesting — regarding inference to the best explanation, for, like I said, I'm not knowledgeable about Peircean abduction — is that AI is also almost laughably impotent when it comes to reasoning about utterly quotidian things.

author
Feb 24 · edited Feb 24 · Author

Hey Eric,

Yeah... Peircean abduction is sort of all over the map. His clearest exposition, I think, was based on what he called a "surprising" observation, leading to "but if such-and-such were true, it would be a matter of course." But as you point out, almost everything is in some sense "surprising" in the real physical world. We tend to see generalities, but it's more accurate to say that quotidian events like walking to the store are essentially sui generis. AI definitely fails at what we do ubiquitously, which is why I think AGI is still a chimera. I would not be surprised if real bona fide AGI were actually unachievable. Of course I don't have a crystal ball, so I can't say, other than to point out what we're discussing here--inference.

By the way, I studied philosophy for years, and still love the field! I wish we could roll it back to the era where it was a serious investigation of core issues, or roll it forward so that it can be again. Would love your take on that, by the way.

Thanks for the comment. IBE is super important for understanding inference. It's great that you're teaching it.


Thanks for the reply, and the kind words. It's clear you love philosophy, which is one reason I feel at home in the little nook you've carved out here. I love philosophy too, which is why I tease it, and why I'm exasperated by its current institutional deformation.

founding
Feb 23 · Liked by Erik J Larson

Erik, another great article. I, too, have always been fascinated by accidents, and during the lockdown I stumbled on a documentary series on airline accidents. (Wish I could remember the name of it.)

Long ago I came to the conclusion that there is no such thing as an accident. Somebody f’ed up. Of course, I applaud any effort to make the products we use safer, but our society has gone way overboard in trying to create a risk-free environment. It’s how the government got away with abandoning one hundred years of knowledge on managing a virus and scared people into compliance with the lockdown, masking, social distancing, plastic barriers at checkout counters and vaccines.

The TSA drill we all go through at the airport does exactly what you mentioned: it gives people a false sense of security. All too often, a TSA employee, an FBI agent, or a U.S. Marshal forgets her loaded firearm in some U.S. airport bathroom.

People screw up. We need to get over it.

author

Thanks for your comment, Alan. The f*ck-up can happen at any stage, from initial design to operation. Accidents often show how we fail to think about a problem correctly, so for that reason and many others I'm interested in researching and investigating them.
