How Safe Is Safe?

Tom Easton

One of the foundations of computer science, and thus of all the gadgets that enrich (or bedevil—take your pick!) our lives, is cybernetics. The root is the Greek word for “steersman” (as of a ship), and it is apt for the study and implementation of devices that produce appropriate outputs in response to changes in incoming data. A very simple example is a household thermostat that turns the heat on when the temperature drops and off when the temperature rises. A nonelectrical example is a toilet tank that turns the water on when the level in the tank drops (as after a flush) and turns it off again when the water reaches the full mark.
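In code, such a feedback loop is tiny. Here is a toy Python simulation of a thermostat (purely illustrative; the set point, the deadband that keeps the heater from fluttering on and off, and the temperature drift model are all invented for this sketch, not taken from any real device):

```python
import random

# Toy thermostat feedback loop (illustrative only; all values are invented).
SET_POINT = 20.0  # target temperature, degrees C
DEADBAND = 0.5    # hysteresis band so the heater doesn't rapidly cycle

def read_temperature(current: float, heater_on: bool) -> float:
    """Crude stand-in for a sensor: drift up when heating, down otherwise."""
    drift = 0.3 if heater_on else -0.3
    return current + drift + random.uniform(-0.1, 0.1)

temp, heater_on = 18.0, False
for _ in range(40):
    temp = read_temperature(temp, heater_on)
    if temp < SET_POINT - DEADBAND:
        heater_on = True   # too cold: turn the heat on
    elif temp > SET_POINT + DEADBAND:
        heater_on = False  # warm enough: turn the heat off
    print(f"temp={temp:5.1f} C  heater={'ON' if heater_on else 'off'}")
```

The output drifts up and down around the set point, which is the whole idea: changing input, appropriate output.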

You can easily think of other examples where changing inputs require changes in outputs. Driving a car would be one. We have to stay on the road when it curves, slow down when another car is in front, stop at red lights, start up again when the light changes, stop quickly when pedestrians step into the road, avoid idiots in other cars, and so on. It’s a complex task, but even teenagers can master it—at least well enough to get a driver’s license. Unfortunately, even people with years of experience make mistakes, have accidents, and kill people. Indeed, driving causes some 35,000 deaths per year in the United States alone (2015 data).

It should thus surprise no one that making cars safer has been on the cybernetic and computer science to-do list for a long time. The ultimate goal has been cars that drive themselves, with no drunken, cell-phone-using, newspaper-reading (I’ve actually seen that!) human klutz at the controls. They’re called autonomous cars or robocars.

In 2004, 2005, and 2007, the U.S. Defense Advanced Research Projects Agency (DARPA) held its “Grand Challenge” and “Urban Challenge” competitions for early versions of the technology. In 2014, “The DARPA Grand Challenge: Ten Years Later” (https://www.darpa.mil/news-events/2014-03-13) reported that the events “created a community of innovators, engineers, students, programmers, off-road racers, backyard mechanics, inventors and dreamers who came together to make history by trying to solve a tough technical problem.” The result was rapid progress toward driverless delivery bots, taxis, trucks, and cars by the likes of Uber, Waymo, Tesla, GM, Toyota, Volvo, and more. Performance has been encouraging, but the driverless cars produced so far have had accidents. As Nidhi Kalra of the RAND Corporation noted in her testimony before the U.S. House of Representatives Committee on Energy & Commerce, Subcommittee on Digital Commerce and Consumer Protection, on February 14, 2017, the technology will not, indeed cannot, eliminate all accidents. Bad weather, for instance, will always be a problem. And self-driving cars, being intensely computer-based systems, will be susceptible to problems all their own, such as (computer) crashes and hacking. Nevertheless, they hold immense potential to make driving safer. After all, they can’t get drunk and they don’t get distracted by cell phones.

Not all self-driving cars are equal. The ultimate goal is a car that can do its job in any weather, any lighting conditions, any place, any time. That capability is known as Level 5 autonomy. This is what you want when you stumble out of a bar too plotzed to drive, crawl into the back seat, and say “Home, James.” We have a long way to go.

Level 4 autonomy is limited in terms of speed, weather conditions, and location. This is the goal for the next few years. Level 3 is similar, except that the car must also recognize when the attention and skills of a human driver are needed and sound an alert. Level 2 requires a human driver on deck at all times, ready to take over at a moment’s notice. Some near-Level 2 cars are already available, and they have a serious problem: human drivers think their cars are more autonomous than they really are. This has already led to a few “Oops” moments.

Level 1 we’ve had for a while. Level 1 cars have at least cruise control; some can also stay in their lane or help with parking. They’re not self-driving, but they’re helpful.

Testing how well a car of any level works means racking up road miles. Yet it may not be possible to rack up enough test miles to satisfy lawmakers, insurance companies, and customers without actually putting the cars on the market. That might work, for fewer consumers now say they’d be afraid to ride in a Level 5 car (63 percent in 2017, down from 78 percent the year before) (Nick Kurczewski, “Consumer Confidence in Self-Driving Technology Is Increasing, AAA Study Finds,” Consumer Reports, January 24, 2018).

Why would they be afraid? Well, duh. Is it safe? How safe? How safe is safe? “When It Comes to Self-Driving Cars, What’s Safe Enough?” asks Maria Temming in Science News (November 21, 2017). Human drivers cause 1.1 deaths per 100 million miles of driving. A roughly 10 percent improvement in safety (to one death per 100 million miles) would save some 500,000 lives over 30 years.
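To see the scale of those numbers, here is a back-of-envelope sketch in Python. It assumes roughly 3.2 trillion vehicle miles driven per year in the United States (an approximate outside figure, not from the articles cited here). Note that the 500,000-lives estimate comes from detailed modeling of fleet adoption over time, not from this flat arithmetic:

```python
# Back-of-envelope check of the fatality-rate arithmetic above.
# ASSUMPTION: ~3.2 trillion vehicle miles traveled per year in the U.S.
# (a rough outside figure, not taken from the articles cited in this essay).
US_MILES_PER_YEAR = 3.2e12

HUMAN_RATE = 1.1 / 100e6  # deaths per mile for human drivers
ROBO_RATE = 1.0 / 100e6   # deaths per mile for a ~10 percent safer fleet

human_deaths = US_MILES_PER_YEAR * HUMAN_RATE  # ~35,200/year, matching the ~35,000 above
robo_deaths = US_MILES_PER_YEAR * ROBO_RATE    # ~32,000/year

print(f"Human-driven fleet: {human_deaths:,.0f} deaths/year")
print(f"10% safer fleet:    {robo_deaths:,.0f} deaths/year")
print(f"Difference:         {human_deaths - robo_deaths:,.0f} lives/year")
# A flat difference of ~3,200 lives/year totals ~96,000 over 30 years;
# the 500,000 figure reflects richer modeling (gradual fleet adoption and
# comparison against waiting for much safer technology), not this arithmetic.
```

Reassuringly, the first line reproduces the 35,000 annual deaths cited earlier, so the two statistics in this essay hang together.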

Is it enough that a self-driving car be as safe a driver as a teenager? One of the DARPA Urban Challenge contestants was reportedly good enough to qualify for a California driver’s license. Or would you rather have a self-driving car as safe as the average driver?

Want something even safer? Is Temming’s 10 percent improvement over the average driver good enough? Perhaps not, for Temming does note that “people may be less inclined to accept mistakes made by machines than humans, and research has shown that people are” more likely to accept risks they can control. Or think they can control. None of us are as good behind the wheel as we think we are. And then there are all the other idiots on the road.

It would be interesting to run a poll that asked people how they would feel about putting those other idiots in self-driving cars. I suspect they would agree that would make the roads much safer. However, I recall one of my father’s basic instructions: “Drive like everyone else on the road is an idiot. And don’t be too sure about yourself.”

The question then becomes how to persuade everyone to adopt the new technology. I suspect that once enough self-driving cars are in use and the insurance industry has accumulated enough statistics, rates will go down for self-driving cars and/or up for manual cars. If the safety statistics are convincing enough, lawmakers may well decide to ban manual cars in cities. Eventually, the ban may be total.

That thought already has some people feeling alarmed. Charles C. W. Cooke (“The War on Driving to Come,” National Review, December 18, 2017) thinks the technology will mean the loss, and even the banning, of human-driven cars and thereby infringe on his right to drive himself around. People are even saying “They will have to pry my steering wheel from my cold, dead hands,” quite as if the Second Amendment were not just about guns. And both truckers and taxi drivers fear the competition will put them out of work (see Matt McFarland, “The Backlash against Self-Driving Cars Officially Begins,” CNN Money, January 10, 2017; http://money.cnn.com/2017/01/10/technology/new-york-self-driving-cars-ridesharing/index.html).

Will self-driving cars happen anyway? Of course they will. The only real question is when. And the answer to that very much depends on what we decide about how safe is safe enough.
