Accidents, Abduction, and the Devil
Accidents are mysteries, not puzzles. Computational thinking can make them worse.
Hi Colligo readers,
I’m taking a week’s break from The Big Chill series (aka the “we’re f*cked” series). It’s a popular series, as evidenced by reader responses, and I assure you I’ll be writing the third installment soon. Here I’m interested in resuscitating another piece I did for The Atlantic, on accidents. I wrote it in a Starbucks in the spring of 2019 (if you read my lament about writing the Myth stuck in a soccer mom Starbucks—it’s the same Starbucks), when the fire that broke out in the roof space of Notre-Dame de Paris was front-page news, emblazoned (no pun intended) on the print editions of the New York Times and the Wall Street Journal. (I had formed the habit of plucking one of the papers—no one seemed to buy them, ever, and Starbucks soon stopped carrying them—and rifling through it like a magpie, I guess still nostalgic for newspapers and more than a little tired of being face-buried in my laptop.) I don’t know what others saw at the time, but I saw accidents.
Accident theory has always interested me. My father is a retired airline pilot, and when I was younger I would pepper him with questions about aviation catastrophes and why they happened. The subject is fascinating to me: how do accidents happen, how might we avoid them, and—more philosophically—are some accidents doomed to happen despite our best efforts at prevention?
Another story was in the news then, or had been recently—the crashes of two Boeing 737 Max 8 passenger jets, which killed everyone on board. It seemed like a good piece to write, and so I wrote it:
Accidents are part of life. So are catastrophes. Two of Boeing’s new 737 Max 8 jetliners, arguably the most modern of modern aircraft, crashed in the space of less than five months. A cathedral whose construction started in the 12th century burned before our eyes, despite explicit fire-safety procedures and the presence of an on-site firefighter and a security agent. If Notre-Dame stood for so many centuries, why did safeguards unavailable to prior generations fail? How did modernizing the venerable Boeing 737 result in two horrific crashes, even as, on average, air travel is safer than ever before?
Beyond the obvious—I saw a piece for The Atlantic jump right out at me—I got interested in the catastrophes because they both involved a high degree of technical scrutiny by experts, who had signed off on the modifications and upgrades to the venerable 737, and who had put in place the fire-prevention procedures and systems at Notre-Dame. Multimillion-dollar aircraft and centuries-old cathedrals alike are not supposed to go up in flames during normal operation or visiting hours. We had failed. But how?
Bring in technical expertise, millions of dollars of investment, high-visibility oversight, endless testing, and still end up in flames. Really? There’s the philosophical question of how much control we really have, and how much the gods—or the devil—do as they wish regardless of our best efforts and intentions. At root, accidents raise an age-old question about survival. And so we have accident theory, a field of study I stumbled upon years ago in an airport, reading Laurence Gonzales’s Deep Survival: Who Lives, Who Dies, and Why. It was Gonzales who introduced me to accident theory, and in particular to one of its luminaries, Charles Perrow. Perrow was the cornerstone of The Atlantic piece:
These are questions for investigators and committees. They are also fodder for accident theorists. Take Charles Perrow, a sociologist who published an account of accidents occurring in human-machine systems in 1984. Now something of a cult classic, Normal Accidents made a case for the obvious: Accidents happen. What he meant is that they must happen. Worse, according to Perrow, a humbling cautionary tale lurks in complicated systems: Our very attempts to stave off disaster by introducing safety systems ultimately increase the overall complexity of the systems, ensuring that some unpredictable outcome will rear its ugly head no matter what. Complicated human-machine systems might surprise us with outcomes more favorable than we have any reason to expect. They also might shock us with catastrophe.
When disaster strikes, past experience has conditioned the public to assume that hardware upgrades or software patches will solve the underlying problem. This indomitable faith in technology is hard to challenge—what else solves complicated problems? But sometimes our attempts to banish accidents make things worse.
Ay, there’s the rub. Why do the very plans we make sometimes cause the horrible outcomes they are put in place to prevent? This is a pull-the-string-and-unravel-the-sweater kind of question. The more you think about it, the deeper and more vexing it becomes. And here we should say it outright: accidents are typically not puzzles but mysteries, and both preventing them and understanding them when they do happen require cognitive resources beyond linear thinking or computational methods. People show up as Sherlock Holmes, as sleuths, and figure it out.
In The Techno-Human Condition, a 2013 book by Braden R. Allenby and Daniel Sarewitz, a “Level 1” technology is a tool or technique that doesn’t significantly change the environment or the “human condition.” Level 1 tech extends our powers without altering us, basically, and there’s a direct relationship between the tool and its effect. Using and understanding such technologies is largely a process of linear thinking, of stepping through their use and the implications of that use. A shovel is a good example. But a jetliner can be one too, if we think of it as a big thing locked into cause-and-effect relationships that can be broken down and understood (yes, it can, but keep reading). Linear thinking is sequential: this happens, and then this happens, and then this happens. It’s good for engineering things, and it’s good for thinking about things that have been engineered. Computers excel at linear thinking (in a very real sense, this is what a computer is) and at decomposing problems and cranking through the plausible effects of known causes. And it’s generally a boon. It also causes the worst kinds of accidents.
Perrow pointed out that some engineered systems are “tightly coupled,” where some event or other can trigger cascading effects that are hard to predict or foresee. Aircraft are tightly coupled, and in the early days of flying, planes would careen out of the sky for all sorts of reasons that weren’t anticipated or even dreamed of by the engineers and quality-assurance folks and (eventually) regulatory bodies. Commercial flying is, of course, much safer today, which proves that we can learn from mistakes and iteratively improve (the “iterating” means that many people first died). There is a sense, then, in which engineered artifacts like aircraft, though “tightly coupled” in Perrow’s sense, can still be tamed to acceptable levels of risk. Commercial aviation is extremely safe today—you have about a 1 in 11 million chance of dying on a commercial flight in the US. Walking down the street in an American city is more dangerous. Really.
I have only experienced one “oh, we’re gonna crash” moment in decades of flying, on a flight from Antalya, Turkey to Ankara. People were praying in the aisles, the flight attendants disappeared, the pilots said nothing, and my girlfriend at the time, Anya, had turned an almost perfect shade of white. I was singing a song in my head. This sounds John Wayne-like, I realize, but I recall saying, in response to her question about what was happening, something like: maybe if you believe in God, it’s a good time to start praying. We landed, of course. Departing the plane, no one said a word. The flight attendants didn’t say anything. The pilots stayed in the cockpit. From us passengers, not a whisper either. Good times.
Tightly coupled systems can be made arbitrarily safe, but they cannot be made completely safe, and linear thinking will eventually get one into trouble. There’s a particularly nasty class of failure points that involve the safety systems themselves. Here, thinking like a computer won’t cut it. In fact, it tends to get everyone killed.
My father would always say that human error accounts for most aircraft accidents—this is statistically true—and I was primed to receive the message, as I’d been imbibing literature over the years on “deskilling” effects from automation. In complex tightly coupled systems with pilots or drivers or operators, we humans are ourselves a big failure point. Pilots are highly trained in comparison to car drivers, but airlines have an incentive to automate flying so they don’t need to hire Amelia Earhart or Charles Lindbergh to get folks home safely for the holidays. Automation is generally safe, but it also erodes the manual skills of the pilots. We have tradeoffs now, and tradeoffs typically mean that sooner or later we have a problem.
The specter of airline pilots losing their manual flying skills—or being stripped of the ability to use them—brings to mind the tragedy of the Boeing 737 Max crashes. Investigators reviewing the crashes, which killed 189 people in Indonesia and 157 in Ethiopia, have zeroed in on a software problem in the maneuvering-characteristics augmentation system, or MCAS. MCAS is necessary for the Max, unlike its older brother, the 737-800, because the former sports a redesign that fits larger engines under the wings. The engines on the Max sit farther forward, creating a vulnerability to stalling from steeper climb rates on takeoff. MCAS simply pushes the nose down—and in the process, it transfers control away from the pilots. Pushing the nose down helps when averting a stall, but too much nose-down has fatal consequences.
The 737 Max crashes had other interconnected causes. In Boeing’s case, they reach all the way to financial and corporate zeal in competing with its rival Airbus, financial incentives to save money on expensive fuel, and so on. At any rate, a fix for MCAS is now under way. Everyone learned from the mistake, even as the human cost cannot be rolled back or fixed.
Level 1 thinking about accidents is limited, but in a bell curve world it’s extremely useful. We can treat many accidents as a sequence of causes and effects, and crank through the logic of how something bad happened and why. On July 25, 2000, the supersonic Concorde, departing Charles de Gaulle Airport in Paris bound for JFK in New York, ruptured a tire after hitting a metallic strip on the runway during takeoff. Tire debris struck a fuel tank, causing a fire. The fire damaged one of the Concorde’s wings, causing a loss of aerodynamic lift. This is bad. “Lift” is what gets the plane off the ground and keeps it in the air. Loss of lift means it’s not staying there.
Even so, the Concorde might still have been saved, but the fire caused a turbine (engine) failure, so the aircraft had compromised lift and not enough thrust to compensate. Everyone on board died. “AI” could have figured this one out, as it’s a linear sequence of cause and effect. But not all accidents are like this. Some are through the looking glass, where the very systems put in place to prevent catastrophes cause them.
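To see just how mechanical this kind of Level 1 reasoning is, here is a toy sketch in Python that writes the Concorde chain above down as ordered cause-and-effect links and simply walks them forward. The event names and the structure are my own illustration, not anything drawn from an investigation report.

```python
# A toy model of "linear thinking" about an accident: the Concorde chain
# above, written down as ordered cause -> effect links. Walking the chain
# forward is trivial to mechanize; the event names are illustrative only.

concorde_chain = [
    ("metal strip on runway", "tire rupture on takeoff"),
    ("tire rupture on takeoff", "debris strikes fuel tank"),
    ("debris strikes fuel tank", "fuel fire"),
    ("fuel fire", "wing damage, loss of lift"),
    ("fuel fire", "turbine (engine) failure"),
    ("turbine (engine) failure", "not enough thrust to recover"),
]

def trace(chain, start):
    """Walk forward from a starting event, printing each cause -> effect step."""
    frontier, seen = [start], {start}
    while frontier:
        event = frontier.pop(0)
        for cause, effect in chain:
            if cause == event and effect not in seen:
                print(f"{cause}  ->  {effect}")
                seen.add(effect)
                frontier.append(effect)

trace(concorde_chain, "metal strip on runway")
```

The catch, of course, is that the chain only contains links someone has already thought to write down, which is exactly the limitation the Germanwings case exposes.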
When Germanwings Flight 9525 flew directly into the side of a mountain in the French Alps, killing all on board, investigators discovered that one cause was the safety system itself, put in place in aircraft after the 9/11 attacks. The Germanwings captain, leaving the cockpit for the bathroom, was locked out by the co-pilot, Andreas Lubitz, who then set the autopilot to descend into a mountain, killing all 144 passengers and six crew on board. Like perhaps the Boeing 737 Max tragedy, and even Notre-Dame, the accident seems predictable in hindsight. It also shows the sad wisdom of Perrow’s decades-old warning. On Flight 9525, the cockpit door was reinforced with steel rods, preventing a terrorist break-in, but making it impossible for the captain to break in as well. When Lubitz failed to respond to the distraught captain’s pleas to open the door, the captain attempted to use his door code to reenter. Unfortunately, the code could be overridden from the cockpit (presumably as a further defense against forced entry), which is precisely what happened. Lubitz was alone in the cockpit—suicidal, as we now know—for the remainder of the tragic flight. It’s tempting to call this a case of human will (and it was), but the system put in place to prevent pernicious human will enabled it.
This example shows how even our most commonplace Level 1 technologies disguise maddening and sometimes horrific complexity. Linear thinking suggested that, to keep the bad guys out of the cockpit, we should reinforce the cockpit door. That those steel rods might someday keep the authorized pilot in charge from assuming control of the plane never came up, in all that cause-and-effect thinking intent on plugging some hole and fixing some glitch. It’s obvious, too, that “AI” would be no help—or people who think like computers, for that matter—because induction works from prior cases, and there were no prior cases of a reinforced cockpit door turning against the very pilot it was meant to protect. Deduction wouldn’t help either, as the logic of the situation points to a successful fix—hijackers really can’t get in—and only an “exogenous” hypothesis not contained in the deduction would introduce the weird case of the pilot banging on his own cockpit door, stopped dead in his tracks by the superlative safety of the system. It’s ironic, and tragic. And it’s not so uncommon.
The increasing complexity of modern human-machine systems means that, depressingly, unforeseen failures are typically large-scale and catastrophic. The collapse of the real-estate market in 2008 could not have happened without derivatives designed not to amplify financial risk, but to help traders control it. Boeing would never have put the 737 Max’s engines where it did, but for the possibility of anti-stall software making the design “safe.”
What I call abductive thinking comes to the rescue here, as what we need is to introduce an entirely new hypothesis that is not a straightforward consequence of the prior cause or causes: what if the “bad guy” is one of the pilots already in the cockpit? Doesn’t the Level 1 cockpit door then become a weapon against safety? Sadly, no one prior to the doomed flight entertained that hypothesis, and linear puzzle solving ruled the day. Nearly two decades after some half-wit wore sneakers with a bomb in them, we’re still taking our shoes off going through security at the airport. Besides being a pain, does it hurt anything? Maybe not. But maybe it distracts us and gives us a false sense of security. Maybe we should be thinking about something else.
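To make the contrast concrete, here is a second toy sketch (my own, not from the article) of “inference to the best explanation” run over a fixed list of candidate hypotheses. The scoring step is easy to mechanize; the hypothesis that isn’t on the list, the one abduction has to supply, can never win because it is never even considered. The observations, hypotheses, and overlap score are all invented for illustration.

```python
# A toy "inference to the best explanation" over a fixed hypothesis list.
# Ranking what's already on the list is mechanical; generating the missing
# hypothesis (the pilot inside is the threat) is the abductive, human step.
# All names and "evidence" here are invented for illustration.

observations = {"cockpit door locked", "no response from inside", "steady descent"}

# Only hypotheses someone already thought to write down can be scored.
hypotheses = {
    "hijacker forced entry": {"door forced open", "sounds of struggle"},
    "door mechanism failure": {"cockpit door locked"},
    # The true explanation is absent from this list, so no amount of
    # scoring over it will ever surface that hypothesis.
}

def best_explanation(obs, hyps):
    """Pick the hypothesis whose expected evidence overlaps most with what we observe."""
    return max(hyps, key=lambda h: len(obs & hyps[h]))

print(best_explanation(observations, hypotheses))  # -> "door mechanism failure"
```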
As in the title of this post, I want to say here that “all logic goes to the devil” sometimes, with complex tightly coupled systems, but that’s not technically true. The computational types of logic and our own fondness for linear thinking take us to “the devil,” and to catastrophe. [I am not talking here, literally, of the devil, in case that’s not clear.] But inferences that abduce explanations and hypothesize other states of affairs are uniquely human, and liberating our best thinking seems necessary for our complicated world. We might wrest ourselves loose from the curse of thinking like a puzzle-solving computer, and from pressing computers into service everywhere in an attempt to replace us feeble humans. The world seems intent on pooh-poohing the human mind and its powers, but any fool should know that it’s our minds that eclipse our failures and pierce the veil of our own ignorance. We might want to keep them around.
You can read the whole article here.
Erik J. Larson
I loved this one. I have a couple of thoughts. (Forgive me if I'm raising topics you've covered in your earlier work.)
In my demimonde — philosophy — the term "abduction" can refer to either of two things. There's the pattern of hypothesis generation that Charles Sanders Peirce identified and characterized (in a mind-boggling variety of ways), and there's the pattern of hypothesis-generation-cum-hypothesis-concluding, the latter nowadays called "inference to the best explanation." Though I've become fairly familiar with Peirce's semiotics because of my pet research interests, I'm not, to my satisfaction, very knowledgeable about his views on abduction.
But I teach inference to the best explanation all the time: what it looks like when it goes well, its basic structure, the conditions under which it's possible for it to go well, and what to do when those conditions don't obtain. One thing that always strikes my students is how common and ordinary such reasoning is. That noise must be the skunk digging under the porch again. The children are suddenly quiet, so they must be doing something they know they shouldn't. That puddle in the laundry room must mean the washer's gone awry again.
Human beings are constantly, and virtually effortlessly, inferring the best explanation of everyday phenomena, implicitly entertaining rival explanations and ruling them out based both on our general understanding of things and on our understanding of the context, the latter often including very subtle perceptions of relevance and salience. My students can very quickly be brought to see that brute computation, no matter how massive, simply cannot manifest such humble reasoning.
What you've crucially explained so well here is that AI is almost laughably impotent when it comes to reasoning about extraordinary things, such as plane crashes and outré scientific phenomena. What I'm suggesting — regarding inference to the best explanation, for, like I said, I'm not knowledgeable about Peircean abduction — is that AI is also almost laughably impotent when it comes to reasoning about utterly quotidian things.
Erik, another great article. I, too, have always been fascinated by accidents, and during the lockdown I stumbled on a documentary series on airline accidents. (Wish I could remember the name of it.)
Long ago I came to the conclusion that there is no such thing as an accident. Somebody f’ed up. Of course, I applaud any effort to make the products we use safer, but our society has gone way overboard in trying to create a risk-free environment. It’s how the government got away with abandoning one hundred years of knowledge on managing a virus and scared people into compliance with the lockdown, masking, social distancing, plastic barriers at checkout counters, and vaccines.
The TSA drill we all go through at the airport does exactly what you mentioned: it gives people a false sense of security. Almost daily, a TSA employee, an FBI agent, or a U.S. Marshal forgets her loaded firearm in some U.S. airport bathroom.
People screw up. We need to get over it.