Are autonomous vehicles safer than cars operated by humans?

This is one of the most fundamental questions in the technology world, and it's a tough one to answer. The long and short of it is that self-driving cars, or autonomous cars, could be safer than vehicles operated by humans, but they need a lot of additional engineering to address specific kinds of cognitive blind spots that can easily lead to accidents and even fatalities.

Put simply, self-driving cars have the potential to be safer, but they are not consistently safer in practice yet. (Also read: Hacking Autonomous Vehicles: Is This Why We Don't Have Self-Driving Cars Yet?)

Tesla's Autopilot technology is a good example: fatalities have occurred when drivers entrusted the vehicle entirely to this technology (to be fair, Tesla has always been explicit that Autopilot is only a partial self-driving technology, and has asked users not to entrust the vehicle to it fully).

On the other hand, efforts like the one undertaken by Waymo suggest that, statistically, autonomous cars can be extremely safe. Waymo reports that after 10 million miles driven, it has had zero fatalities and very few accidents.
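To put that figure in perspective, here is a rough back-of-the-envelope comparison (a minimal sketch: the human-driver baseline of roughly 1.1 fatalities per 100 million vehicle miles is an approximate U.S. figure, and the calculation is illustrative rather than an official analysis):

```python
# Rough comparison of Waymo's reported record against a human baseline.
# Assumption: ~1.1 fatalities per 100 million vehicle miles, an approximate
# U.S. human-driver figure; all numbers here are illustrative.

HUMAN_FATALITIES_PER_MILE = 1.1 / 100_000_000
WAYMO_MILES = 10_000_000       # miles Waymo reports having driven
WAYMO_FATALITIES = 0           # fatalities Waymo reports

# Expected fatalities if human drivers had covered the same mileage.
expected_human = HUMAN_FATALITIES_PER_MILE * WAYMO_MILES
print(f"Expected human-driver fatalities over {WAYMO_MILES:,} miles: "
      f"{expected_human:.2f}")   # ~0.11
print(f"Fatalities reported by Waymo: {WAYMO_FATALITIES}")
```

Zero fatalities is encouraging, but even human drivers would be expected to cause fewer than one fatality over 10 million miles, so the sample is still too small to settle the question on its own.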

The reasons for this discrepancy have to do with how self-driving cars operate. Autonomous vehicles are dramatically safer when it comes to preventing the many types of accidents that stem from human driver error; for instance, rear-ending another vehicle because the driver wasn't paying attention.

These types of accidents will practically never happen with autonomous cars.
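To illustrate the mechanism, here is a toy sketch of automated emergency braking logic (the function names and the 2-second threshold are made up for illustration, not taken from any production system): the vehicle continuously computes time-to-collision from sensor data and brakes whenever it falls below a safety threshold, with no possibility of distraction.

```python
# Toy sketch of automated emergency braking (AEB) logic.
# Assumptions: distances in meters, speeds in m/s; the 2-second
# threshold is illustrative, not from any real system.

TTC_BRAKE_THRESHOLD_S = 2.0  # brake if collision is under 2 seconds away

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact with the vehicle ahead; infinity if not closing."""
    if closing_speed_mps <= 0:       # pulling away or holding distance
        return float("inf")
    return gap_m / closing_speed_mps

def should_brake(gap_m: float, own_speed: float, lead_speed: float) -> bool:
    ttc = time_to_collision(gap_m, own_speed - lead_speed)
    return ttc < TTC_BRAKE_THRESHOLD_S

# The check runs every sensor cycle, so unlike a human driver it is never
# distracted: a 25 m gap closing at 15 m/s (TTC ~1.7 s) triggers braking.
print(should_brake(gap_m=25.0, own_speed=30.0, lead_speed=15.0))  # True
print(should_brake(gap_m=80.0, own_speed=30.0, lead_speed=28.0))  # False (TTC 40 s)
```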

However, one glaring issue in autonomous vehicle design is called the value learning problem.

The value learning problem refers to the difficulty of getting machines to identify abstract risks or abstract goals in the ways that humans do. Experts explain that human goals and objectives are complex and built on a number of different abstractions.

Some can be programmed for; others resist practical programming solutions.

As an example, one prominent fatality involving Tesla's Autopilot occurred when the car struck an unusual physical barrier at a point where a freeway lane diverged. The software failed to detect the unfamiliar obstacle, and this caused the tragic accident.
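One way to see why this kind of failure happens (a toy sketch with hypothetical classes, scores and threshold, not any real perception stack): a perception model trained on a fixed set of known obstacle types can only map what it sees onto those types, and when an unfamiliar object spreads the model's confidence thinly across them, the detection may be discarded as noise.

```python
# Toy sketch of the out-of-distribution perception failure described above.
# The classes, scores and threshold are hypothetical, for illustration only.

KNOWN_CLASSES = ["car", "truck", "pedestrian", "cyclist"]  # all the model knows
CONFIDENCE_THRESHOLD = 0.6  # detections below this are treated as noise

def classify(scores: dict[str, float]) -> str | None:
    """Return the best known class, or None if nothing is confident enough."""
    best_class = max(scores, key=scores.get)
    if scores[best_class] < CONFIDENCE_THRESHOLD:
        return None  # the planner ignores low-confidence detections
    return best_class

# A familiar object: confidence concentrates on one known class.
print(classify({"car": 0.92, "truck": 0.05, "pedestrian": 0.02, "cyclist": 0.01}))
# -> "car"

# An unusual obstacle (e.g. a damaged freeway barrier): the model has no
# matching class, so its confidence is spread thin and the detection is
# discarded, even though a human would instantly recognize the danger.
print(classify({"car": 0.3, "truck": 0.25, "pedestrian": 0.2, "cyclist": 0.25}))
# -> None
```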

The clear distinction, then, is that self-driving cars are extremely safe against some kinds of accident risks, and still very unsafe against others.

Written by Justin Stoltzfus

Justin Stoltzfus is a freelance writer for various Web and print publications. His work has appeared in online magazines including Preservation Online, a project of the National Historic Trust, and many other venues.
