If To Err Is Human, Should Technology Help Us Shed Some Humanity?

This 1947 photo shows an early airplane autopilot system. (George Stroud/Express/Getty Images)

By all accounts, technology has made us safer.

Cars are more maneuverable because of tire design changes. Jet engines are less likely to fail midflight thanks to better propulsion mechanics. Clinical diagnoses are more accurate thanks to improvements in medical imaging.

Over the past half-century, such advances have driven down deaths caused by technical lapses. Now technology is also being used to reduce fatalities caused by human error. But we need it to do more.

Here's an example. Because speeding is a significant contributor to road fatalities, the Canadian province of Ontario has mandated that so-called speed limiters, or governors, be installed on heavy trucks. Once the vehicle reaches a preset top speed (65 miles per hour in Ontario), a microchip restricts the flow of air and fuel to the engine, preventing the driver from accelerating further.

In aviation, a similar design principle underlies "fly-by-wire" flight control systems, computer automation that can prevent a pilot from maneuvering an airplane in ways predetermined as unsafe by the manufacturer.
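
Both approaches share one underlying pattern: the human's command is accepted only within a preset envelope, and anything beyond it is silently clamped. Here is a minimal sketch of that pattern; the function name and the numbers are illustrative assumptions, not drawn from any actual limiter firmware or flight control law.

```python
# Illustrative sketch of the "envelope clamping" pattern behind speed
# limiters and fly-by-wire protections. All names and numbers are
# hypothetical; real engine controls and flight control laws are far
# more complex.

def clamp_command(requested: float, lower: float, upper: float) -> float:
    """Accept the human's input only within a preset safe envelope."""
    return max(lower, min(requested, upper))

# A governed truck: the driver may request any speed, but the limiter
# never lets the commanded speed exceed the 65 mph ceiling.
print(clamp_command(requested=80.0, lower=0.0, upper=65.0))  # -> 65.0

# A fly-by-wire airplane: the pilot may pull hard on the stick, but the
# computer never commands more than a preset structural load limit
# (illustratively, 2.5 g).
print(clamp_command(requested=3.4, lower=-1.0, upper=2.5))   # -> 2.5

# Note what the clamp does not do: it never tells the human that the
# request was unsafe. The error is absorbed, not corrected.
```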

These technologies have had the intended effect. In Ontario, the number of fatalities caused by heavy truck accidents dropped 24 percent in the first year of the speed limiter mandate, and trucks themselves were half as likely to crash when equipped with the speed limiting technology.

And most aviation experts agree that fly-by-wire technology has helped reduce the risk of flight-related fatalities. In fact, writer and former pilot William Langewiesche has credited the 2009 "miracle" landing on the Hudson River by US Airways Flight 1549, in part, to fly-by-wire.

Yet for all their benefits, technologies such as speed limiters and fly-by-wire do not prevent humans from erring. The freedom to make mistakes still exists. Truckers in Ontario cannot drive faster than 65 mph, but they can still try. Pilots cannot fly an airplane beyond its structural limits, but they can keep trying.

The success of such technology lies in accommodating, rather than addressing, human error. As a result, these errors can persist indefinitely because their true potential for harm is obscured.

Such pitfalls of technology have long worried scientists and safety advocates. A landmark 1983 paper by Lisanne Bainbridge, "Ironies of Automation," explored how automation technology may do more harm than good. More recently, National Transportation Safety Board Chairman Chris Hart warned after the 2013 Asiana plane crash in San Francisco:

"In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid."

Or, as the late automation pioneer Raja Parasuraman put it, our reliance — and often, overreliance — on technology for safety may lead to "a spectacularly new type of accident."

It's a subject studied by numerous industry task forces, blue-ribbon panels and working groups. Many of them agree: Overreliance on automation leaves workers less able to do their jobs independently and more prone to making errors.

But one argument isn't made often, and it should be. By only guarding against human fallibility, technology implicitly rewards human errors.

This can prove fatal, as it did in a 2014 crash of a private jet making a seemingly routine flight from the Boston suburb of Bedford to Atlantic City.

Procedures required that the airplane's gust lock — the flying equivalent of a parking brake — be released before takeoff. The crew neglected to do so. Consequently, instead of getting airborne, the airplane overran the runway at more than 170 mph before crashing into a ditch and bursting into flames — killing all seven people on board.

Technology meant to prevent this was available: a mechanism designed to stop pilots from applying takeoff power while the gust lock is on. But a design flaw prevented it from working as intended, which meant there was nothing to protect the pilots from the consequences of not following procedures.
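
To make the intended design concrete, here is a minimal sketch of such an interlock. The names and the idle threshold are hypothetical, and it shows only the logic as designed, not the flawed mechanism actually installed on the accident airplane.

```python
# Illustrative sketch of a gust-lock/throttle interlock, as intended.
# Hypothetical names and threshold; not the actual Gulfstream mechanism,
# whose design flaw allowed takeoff power despite an engaged gust lock.

def permitted_throttle(gust_lock_engaged: bool, requested: float) -> float:
    """Return the throttle the interlock allows (0.0 = idle, 1.0 = takeoff power)."""
    NEAR_IDLE = 0.06  # with the gust lock on, hold the engines near idle
    if gust_lock_engaged:
        return min(requested, NEAR_IDLE)  # takeoff power request is refused
    return requested

print(permitted_throttle(True, 1.0))   # -> 0.06: the airplane cannot begin takeoff
print(permitted_throttle(False, 1.0))  # -> 1.0: normal takeoff once the lock is released
```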

Avoiding accidents like this requires a different approach, one that not only provides a technological fail-safe against human error but also reshapes the underlying behavior that causes those errors.

For example, the most common reason for driving errors is a lack of awareness. Given that video game play has been shown to improve focus and concentration, why not use game-based training to improve driver awareness?

Similarly, some 26 percent of pilot errors are attributed to carelessness (like forgetting to lower an airplane's landing gear), 23 percent to flawed decision-making (like attempting to land when one shouldn't) and 11 percent to poor interaction among crew members (like a first officer not passing along weather concerns to the captain). Why not design systems that continuously search for such errors and automatically prescribe remedial action?
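
As a rough sketch of what such a system might look like: the error categories below follow the figures above, while the detection events and prescribed remedies are invented for illustration.

```python
# Crude sketch of an error-monitoring system that watches for known error
# types and prescribes remedial action. Categories follow the article;
# the events and remedies are hypothetical.

REMEDIES = {
    "carelessness": "trigger a checklist reminder (e.g., landing gear still up)",
    "flawed_decision": "advise a go-around when landing criteria are not met",
    "poor_crew_interaction": "prompt the first officer to brief the captain",
}

def prescribe(observed_error: str) -> str:
    """Map a detected error type to a remedial action, or flag it for review."""
    return REMEDIES.get(observed_error, "log for human-factors review")

# A monitoring loop would call this continuously as errors are detected:
for event in ["carelessness", "poor_crew_interaction", "unknown_event"]:
    print(event, "->", prescribe(event))
```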

We count on technology to save us from ourselves. But what we need are systems that not only complement our abilities but also train those abilities to be better; devices that not only contain the fallout from our mistakes but also stop those errors from occurring. If to err is human, we need technology that helps us shed some of our humanity.

Ashley Nunes is an independent consultant who specializes in transportation safety, regulatory affairs and behavioral economics.

Copyright 2021 NPR. To see more, visit https://www.npr.org.
