When Making Things Better Only Makes Them Worse

Our very attempts to stave off disaster make unpredictable outcomes more likely.

Fire at Notre-Dame cathedral (Benoit Tessier / Reuters)

Accidents are part of life. So are catastrophes. Two of Boeing’s new 737 Max 8 jetliners, arguably the most modern of modern aircraft, crashed in the space of less than five months. A cathedral whose construction started in the 12th century burned before our eyes, despite explicit fire-safety procedures and the presence of an on-site firefighter and a security agent. If Notre-Dame stood for so many centuries, why did safeguards unavailable to prior generations fail? How did modernizing the venerable Boeing 737 result in two horrific crashes, even as, on average, air travel is safer than ever before?

These are questions for investigators and committees. They are also fodder for accident theorists. Take Charles Perrow, a sociologist who in 1984 published an account of accidents in human-machine systems. Now something of a cult classic, Normal Accidents made a case for the obvious: Accidents happen. What he meant is that they must happen. Worse, according to Perrow, a humbling cautionary tale lurks in complicated systems: Our very attempts to stave off disaster by introducing safety systems ultimately increase the overall complexity of the systems, ensuring that some unpredictable outcome will rear its ugly head no matter what. Complicated human-machine systems might surprise us with outcomes more favorable than we have any reason to expect. They also might shock us with catastrophe.

When disaster strikes, past experience has conditioned the public to assume that hardware upgrades or software patches will solve the underlying problem. This indomitable faith in technology is hard to challenge—what else solves complicated problems? But sometimes our attempts to banish accidents make things worse.

In his 2013 book, To Save Everything, Click Here, the author Evgeny Morozov argues that “technological solutionism”—leaving the answer up to Silicon Valley—causes us to neglect other ways of addressing problems. In The Glass Cage, published the following year, Nicholas Carr points warily to “deskilling,” the erosion of human operators’ skills as automation renders them unnecessary. On average, automation is safer than error-prone humans, so a typical response to deskilling is “So what?”

The specter of airline pilots losing their manual flying skills—or being stripped of the ability to use them—brings to mind the tragedy of the Boeing 737 Max crashes. Investigators reviewing the crashes, which killed 189 people in Indonesia and 157 in Ethiopia, have zeroed in on a software problem in the maneuvering-characteristics augmentation system, or MCAS. The Max needs MCAS, unlike its older sibling the 737-800, because its redesign fits larger engines under the wings. Those engines sit farther forward, giving the plane a tendency to pitch up in steep climbs on takeoff and leaving it vulnerable to stalling. MCAS simply pushes the nose down—and in the process, it transfers control away from the pilots. Pushing the nose down helps avert a stall, but too much nose-down has fatal consequences.
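The core logic is simple enough to caricature in a few lines. The sketch below is a hypothetical, heavily simplified illustration of an MCAS-style control step, not Boeing’s implementation; the threshold, trim step, and single-sensor input are assumptions made for the example. It shows how one bad sensor reading can translate directly into a nose-down command that ignores the pilot.

```python
# Hypothetical, heavily simplified sketch of an MCAS-style control step.
# Names, thresholds, and the single-sensor design are illustrative assumptions,
# not Boeing's actual implementation.

AOA_THRESHOLD_DEG = 15.0   # assumed angle-of-attack limit
NOSE_DOWN_TRIM_STEP = 0.6  # assumed trim units applied per activation


def mcas_like_command(angle_of_attack_deg: float, pilot_trim_input: float) -> float:
    """Return a stabilizer trim command for one control cycle.

    If the measured angle of attack exceeds the threshold, the system
    commands nose-down trim regardless of what the pilot asks for; this
    is the point where authority shifts from human to software.
    """
    if angle_of_attack_deg > AOA_THRESHOLD_DEG:
        return -NOSE_DOWN_TRIM_STEP  # automatic nose-down, pilot input ignored
    return pilot_trim_input          # otherwise, the pilot's command passes through


# A faulty sensor illustrates the failure mode: a spurious high reading
# triggers a nose-down command even in ordinary flight.
faulty_reading = 22.0  # degrees, from a single bad angle-of-attack vane
print(mcas_like_command(faulty_reading, pilot_trim_input=+0.4))  # prints -0.6
```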

The 737 Max crashes had other, interconnected causes. In Boeing’s case, they reach all the way to corporate zeal in competing with rival Airbus, financial incentives to save money on expensive fuel, and so on. At any rate, a fix for MCAS is now under way. Everyone learned from the mistake, even as the human cost cannot be undone.

What makes the Boeing disaster so frustrating is the relative obviousness of the problem in retrospect. Psychologists and economists have a term for this: “hindsight bias,” the tendency to see the causes of prior events as obvious and predictable, even when no one saw them coming at the time. Without the benefit of hindsight, the complex causal sequences leading to catastrophe are sometimes impossible to foresee. But in light of recent tragedy, theorists such as Perrow would have us try harder anyway. Trade-offs in engineering decisions demand eternal vigilance against the unforeseen. If some accidents are a tangle of unpredictability, we’d better spend more time thinking through our designs and decisions—and factoring in the risks that arise from complexity itself.

The centuries-old Notre-Dame may seem an unlikely example of complicated human-machine technology, but it too qualifies. The building was equipped with fire alarms, but, according to an account in a French newspaper picked up by English-language outlets, a computer bug located the fire in the wrong place. In deciding which precautions to incorporate into a fire-safety system, a building’s custodians take calculated risks: Automatic sprinklers, tripped accidentally or unnecessarily, could ruin paintings and other precious art.

Perrow argued in Normal Accidents that two conditions must hold before a technology’s design can turn its safety systems against itself: One, the system must be complex. Two, its parts or subsystems must be “tightly coupled”—that is, interdependent in such a way that a failure in one can cascade through the others into a global failure. Today most of our day-to-day life is spent interacting with such systems. They’re everywhere.
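Perrow’s point can be made concrete with a toy calculation. The sketch below is a back-of-the-envelope simulation with invented probabilities and an invented system size, not a model of any real system; it only illustrates how, once parts are tightly coupled, the chance of a system-wide failure tracks the chance that any single part fails, rather than requiring several things to go wrong at once.

```python
# Toy Monte Carlo sketch of Perrow's two conditions, using invented numbers.
# In the "tightly coupled" case, any single component failure cascades into
# a system-wide failure; in the loosely coupled case, the system absorbs
# isolated failures and only collapses when several coincide.
import random

N_COMPONENTS = 50          # assumed number of interacting parts
P_COMPONENT_FAIL = 0.002   # assumed per-component failure probability
TRIALS = 50_000


def system_fails(tightly_coupled: bool) -> bool:
    failures = sum(random.random() < P_COMPONENT_FAIL for _ in range(N_COMPONENTS))
    if tightly_coupled:
        return failures >= 1   # one local failure propagates everywhere
    return failures >= 3       # loosely coupled: takes several at once


for coupled in (False, True):
    rate = sum(system_fails(coupled) for _ in range(TRIALS)) / TRIALS
    label = "tightly coupled" if coupled else "loosely coupled"
    print(f"{label}: system failure rate ~ {rate:.4%}")
```

With these made-up numbers, the loosely coupled system almost never fails, while the tightly coupled one fails in roughly one run in ten: the same parts, the same reliability, a very different system.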

When Germanwings Flight 9525 flew directly into the side of a mountain in the French Alps, killing all on board, investigators discovered that one cause was the safety system itself, put in place in aircraft after the 9/11 attacks. The Germanwings captain, leaving the cockpit for the bathroom, was locked out by the co-pilot, Andreas Lubitz, who then set the autopilot to descend into a mountain, killing all 144 passengers and six crew members on board. Like the Boeing 737 Max tragedy, and perhaps even Notre-Dame, the accident seems predictable in hindsight. It also shows the sad wisdom of Perrow’s decades-old warning. On Flight 9525, the cockpit door was reinforced with steel rods, preventing a terrorist break-in but also making it impossible for the captain to break in. When Lubitz failed to respond to the distraught captain’s pleas to open the door, the captain attempted to use his door code to reenter. Unfortunately, the code could be overridden from the cockpit (presumably as a further defense against forced entry), which is precisely what happened. Lubitz—suicidal, as we now know—remained alone in the cockpit for the rest of the tragic flight. It’s tempting to call this a case of human will (and it was), but the system put in place to thwart pernicious human will enabled it.
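The paradox of the door is easy to see once its rules are written out. The following sketch is a hypothetical simplification of the logic described in accident reports, with invented class and method names; it is not real avionics code. It captures the design choice that mattered: the same override meant to defeat intruders also defeats a captain holding a valid code.

```python
# Hypothetical simplification of the reinforced-cockpit-door logic described
# above. States and method names are illustrative, not drawn from any real
# avionics specification.
from enum import Enum


class DoorMode(Enum):
    NORMAL = "normal"   # a valid keypad code opens the door
    LOCKED = "locked"   # the cockpit has overridden keypad entry entirely


class CockpitDoor:
    def __init__(self) -> None:
        self.mode = DoorMode.NORMAL

    def cockpit_lock_override(self) -> None:
        # Designed to keep intruders out, even if they know the code.
        self.mode = DoorMode.LOCKED

    def keypad_entry(self, code_is_valid: bool) -> bool:
        """Return True if the door opens for someone outside the cockpit."""
        if self.mode is DoorMode.LOCKED:
            return False  # the anti-hijacking feature also locks out the captain
        return code_is_valid


door = CockpitDoor()
door.cockpit_lock_override()    # the override, in this simplification
print(door.keypad_entry(True))  # False: a valid code no longer helps
```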

The increasing complexity of modern human-machine systems means that, depressingly, unforeseen failures are typically large-scale and catastrophic. The collapse of the real-estate market in 2008 could not have happened without derivatives designed not to amplify financial risk, but to help traders control it. Boeing would never have put the 737 Max’s engines where it did, but for the possibility of anti-stall software making the design “safe.”

In response to these risks, we play the averages. Overall, air travel is safer today than in, say, the 1980s. Centuries-old cathedrals don’t burn, on average, and planes don’t crash. Stock markets don’t, either. On average, things usually work. But our recent sadness is a reminder that heading off future catastrophes requires more attention to the bizarre and (paradoxically) to the unforeseen. Our thinking about accidents and tragedies has to evolve, like the systems we design. Perhaps we are capable of outsmarting complexity more often. Sometimes, though, our recognition of what we’ve done will still come too late.

Erik Larson is an entrepreneur and former research scientist at the University of Texas at Austin, where he specialized in machine learning and natural language processing. He is the author of The Myth of Artificial Intelligence, forthcoming from Harvard University Press.