As one of the most fundamental questions in the technology world, this one is a little tough to answer. The long and short of it is that self-driving cars, or autonomous cars, could be safer than vehicles operated by humans, but they need a lot of additional engineering to address specific kinds of cognitive blind spots that can easily lead to accidents and even fatalities.
The easy way to say it is that self-driving cars have the potential to be safer, but are not safer right now in practice. (Also read: Hacking Autonomous Vehicles: Is This Why We Don't Have Self-Driving Cars Yet?)
Tesla's Autopilot technology is a good example: fatalities have occurred when drivers handed control of the vehicle over to it (to be fair, Tesla has always been explicit that Autopilot is only a partial self-driving technology and has asked drivers not to entrust the vehicle to it fully).
However, new reports like this one from PolicyAdvisor show that self-driving cars are generally considered safer than those operated by human drivers.
The reasons for this discrepancy have to do with how self-driving cars operate. Autonomous vehicles are far better at preventing many types of accidents that stem from human driver error, such as simply rear-ending another vehicle because the driver wasn't paying attention.
These types of accidents will practically never happen with autonomous cars.
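To make that concrete, here is a minimal sketch of the kind of time-to-collision check an automated emergency braking system evaluates continuously. The function names and the two-second threshold are illustrative assumptions, not any manufacturer's actual code.

```python
# Minimal, illustrative time-to-collision (TTC) check, evaluated many
# times per second by an automated braking system. Names and thresholds
# are hypothetical.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:       # not closing in on the lead vehicle
        return float("inf")
    return gap_m / closing_speed_mps

def should_brake(gap_m: float, own_speed: float, lead_speed: float,
                 threshold_s: float = 2.0) -> bool:
    """Brake whenever the projected time to collision drops below a threshold."""
    ttc = time_to_collision(gap_m, own_speed - lead_speed)
    return ttc < threshold_s

# Example: 30 m behind a stopped car while travelling 20 m/s (~72 km/h)
print(should_brake(gap_m=30.0, own_speed=20.0, lead_speed=0.0))  # True -> brake now
```

Unlike a distracted human, this check never stops running, which is why inattention-driven rear-end collisions are the kind of error automation is best at eliminating.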
With that in mind, one way to assess the overall safety of autonomous vehicles is this: in general, there will be fewer accidents, but the rare ones that do occur will tend to be more serious than the average fender-bender between human-driven vehicles.
To put this another way, computers may be less likely to make errors in the first place, but also less capable than humans at self-correcting if something does go wrong.
In fact, as innate problem solvers, humans outperform computers in many ways.
One glaring issue in autonomous vehicle design, related to the inability to fully simulate human response, is called the value learning problem.
The value learning problem refers to the inability of these technologies to identify abstract risks or abstract goals the way humans do. Experts explain that human goals and objectives are complex and rest on a number of different abstractions.
Some can be programmed for; others resist practical programming solutions.
As an example, a prominent fatality involving Tesla's Autopilot occurred when the vehicle struck an unusual wedge-shaped barrier at a point where the freeway diverged. The Autopilot software failed to detect the unusual obstacle, and that failure caused the tragic accident.
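As a toy illustration of the value learning problem, consider a cost function that only penalizes the hazard classes its designers enumerated. This is purely hypothetical and not how any production planner is written, but it shows how an unfamiliar obstacle can fall outside everything the system was taught to value avoiding.

```python
# Toy illustration of the value learning problem: the system only "values"
# avoiding hazards its designers enumerated. Classes and scores are made up.

KNOWN_HAZARDS = {"vehicle", "pedestrian", "cyclist", "traffic_cone"}

def hazard_penalty(detected_class: str, confidence: float) -> float:
    """Assign a cost only to obstacles the model was built to recognize."""
    if detected_class in KNOWN_HAZARDS and confidence > 0.5:
        return 100.0          # strongly avoid
    return 0.0                # everything else is effectively invisible to planning

# An unusual obstacle that the perception stack labels as a low-confidence
# "unknown" contributes no cost, so the planner has no reason to steer
# or brake around it.
print(hazard_penalty("unknown", confidence=0.3))   # 0.0 -> no avoidance behavior
```

A human driver does not need the barrier to belong to a known category to recognize it as something worth avoiding; that gap is exactly what the value learning problem describes.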
A clear distinction, then, is that self-driving cars are extremely safe against some kinds of accident risks and very unsafe against others, although they are constantly improving.
New innovations improve the safety of autonomous vehicles, too. For instance, the use of lidar sensors and assistive driving technologies improves the capability of self-driving systems. New assessment programs by agencies like the National Highway Traffic Safety Administration are also helping to make these vehicles safer on the road. By distinguishing between passenger vehicles and freight vehicles, regulators can focus on top-tier safety solutions for the self-driving vehicles that will carry people inside them.
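For a rough sense of how lidar contributes, here is a simplified, hypothetical sketch of screening a lidar point cloud for returns in the vehicle's path. Real perception pipelines cluster, track, and classify points; the geometry and thresholds here are assumptions for illustration only.

```python
# Simplified sketch of screening lidar returns for obstacles directly ahead.
# Real pipelines cluster, track, and classify points; numbers are illustrative.

from typing import List, Tuple

Point = Tuple[float, float, float]  # (x forward, y left, z up), in metres

def points_in_path(cloud: List[Point], lane_half_width: float = 1.5,
                   max_range: float = 60.0, min_height: float = 0.3) -> List[Point]:
    """Keep returns inside the lane corridor, above road level, within range."""
    return [
        (x, y, z) for (x, y, z) in cloud
        if 0.0 < x < max_range and abs(y) < lane_half_width and z > min_height
    ]

cloud = [(25.0, 0.2, 0.8), (40.0, 5.0, 1.0), (10.0, -0.5, 0.05)]
print(points_in_path(cloud))   # [(25.0, 0.2, 0.8)] -> something solid 25 m ahead, in lane
```

Because lidar measures distance directly rather than inferring it from camera images, it gives the perception stack another way to notice solid objects ahead, including ones that don't match any learned category.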