Back in July 2015, security researchers demonstrated to a Wired journalist how easily a Jeep Cherokee could be hacked and driven remotely. The public was flabbergasted by this – oh dear! – unexpected discovery, and everybody started murmuring about the alleged lack of safety of autonomous vehicles. This fear is now so widespread and intense that some have already labeled the hacker threat the reason why self-driving cars will never become a reality. Even a handful of accidents could prevent this technology from reaching full maturity. But is this fear really justified? Are non-autonomous cars truly more secure, or is it the other way around?

Why Are People So Scared of Hacking?

All technologies seem 100 percent safe when they’re new. But as we learned with email and operating systems back in the ’90s and early 2000s, nothing stays safe once it is released to the public. This is especially true of self-driving cars, since some of the AI that controls them is still not fully understood, even by its creators. The mathematical model that powers the AI of Nvidia’s drive systems doesn’t rely on instructions provided by programmers or engineers. It is a fully autonomous deep-learning-based intelligence that slowly “learns” how to drive by watching humans do it. In a report released in October 2018, the graphics card manufacturer explained how its Drive IX system can track a driver’s head and eye movements, further enhancing the integration between humans and machines. Nonetheless, the less we know about a system, the harder it is to protect it from unwanted intrusions.

The Consequences of Self-driving Car Hacking

When hacking occurs in a data center, the worst that can happen is a loss of data. When a self-driving car is hacked, what can happen is a loss of life. Carmakers, however, are used to engineering fixes for problems as they are discovered, an approach that is not acceptable when so much is at stake. On the other hand, self-driving vehicles are designed to eliminate most of the more than one million global road deaths a year, which are a very present and real threat. Do the dangers of being hacked by a cybercriminal outweigh the dangers attached to human driving? Crunching some data will provide the answer.

The first consideration is that people are not going to accept self-driving cars if their level of safety is merely equal to that of human driving. According to a study published by the Society for Risk Analysis, the current global traffic fatality risk associated with human error is already 350 times greater than the frequency the public accepts. In other words, for autonomous cars to be tolerated, they must improve road safety by at least two orders of magnitude. This may be due to a certain perception bias against the safety of machines, though. It is, in fact, interesting to note what General Motors Co. told California regulators in its accident reports in September 2018: in all six crashes involving its self-driving vehicles, the parties responsible for the accidents were human drivers.
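The “two orders of magnitude” figure follows directly from the study’s 350x risk ratio, as this minimal sketch shows (the ratio is the only input taken from the study; everything else is plain arithmetic):

```python
import math

# Risk ratio reported by the Society for Risk Analysis study:
# human-driving fatality risk is ~350x the level the public accepts.
risk_ratio = 350

# How many powers of ten separate current risk from accepted risk.
orders_of_magnitude = math.log10(risk_ratio)

print(f"{orders_of_magnitude:.2f}")  # ~2.54, i.e. at least two orders of magnitude
```

Since log10(350) is about 2.54, closing the gap means cutting fatality risk by a factor of a few hundred, not merely a few percent.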

Another key argument against self-driving cars’ safety comes from the fact that most statistics about car crashes focus on actual collisions. In other words, we collect data and discuss it only after the tragedy has already occurred. But what about the billions of accidents that have been avoided? We cannot measure the number of non-collisions, so how can we determine how well an AI, compared with a human, avoids crashing when things go sour, such as when the weather is bad, when the car must handle a steep slope or dirt road, or when a pedestrian unexpectedly steps into the street? Right now, we can’t – at least, not in a reliable way. And the situation may get worse if hacking attempts (even failed ones) can tamper with the delicate controls of autonomous vehicles. (To learn more about self-driving cars, see The 5 Most Amazing AI Advances in Autonomous Driving.)

Are Self-driving Cars More Vulnerable to Hacking?

Who says that self-driving vehicles are more vulnerable to hacking than traditional cars? The idea of a hacker taking the wheel of the car we’re driving definitely sounds terrifying, yet this is already possible with non-autonomous cars because of the many vulnerabilities in their internet-enabled software. The Jeep Cherokee “experiment” described above involved a normal, internet-connected car rather than a self-driving one: a security hole in FCA’s Uconnect system allowed the hackers to take control of the vehicle, ultimately forcing the manufacturer to recall more than 1.4 million vehicles.

In theory, the inherent interconnectivity between the multiple sensors and communication layers of autonomous vehicles could make them more exposed to cyberattacks, since they offer more “entry points.” However, hacking a connected self-driving car is also much more difficult for the same reason. Having to gain access to a multi-layered system that integrates information from several sensors as well as real-time traffic and pedestrian data may constitute a serious obstacle for hackers. IoT-related solutions, such as encryption systems based on quantum mechanics, can also be applied to substantially enhance their security.

Once again, though, hackers can use these same IoT connections to their advantage, breaching the autonomous vehicle’s cyber defenses before they are even in place. Attackers can exploit production line and supply chain vulnerabilities to infiltrate a self-driving car before it ever reaches the road. This stage is extremely delicate, and former leading smartphone manufacturer BlackBerry has announced its commitment to closing such loopholes with its upcoming autonomous vehicle security software, Jarvis.

What Are the Plans to Address the Problem?

Which countermeasures are best? Cyber resilience must be built in from the start, meaning that cybersecurity risk mitigation plans should be part of the design and manufacturing process itself. Experts have already warned against carmakers’ current propensity to retrofit non-autonomous vehicles with a few additional sensor pods. This may be acceptable now, while engineers are still working with prototypes and need to test the various functionalities of these vehicles, but later on this approach will be largely insufficient to guarantee any degree of safety.

Other cybersecurity measures can be employed beyond the vehicle itself and may work on all those additional technologies that constitute the “environment” in which self-driving cars operate (smart poles, sensors, roads and other infrastructure). For example, a stolen, hacked vehicle can be stopped as soon as GPS shows it is somewhere it shouldn’t be. Eventually, as self-driving vehicles replace non-autonomous ones on a large scale, the entire infrastructure of smart cities will change, and security will become an integral part of the network.
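The GPS idea above is essentially a geofence: the infrastructure flags the vehicle for a controlled stop once its position falls outside a permitted zone. A minimal sketch, with all function names, coordinates and the 5 km radius invented for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_trigger_safe_stop(fix, geofence_center, radius_km):
    """Flag the vehicle for a controlled stop once it leaves its permitted zone."""
    lat, lon = fix
    clat, clon = geofence_center
    return haversine_km(lat, lon, clat, clon) > radius_km

# A vehicle permitted to operate within 5 km of a (hypothetical) depot:
depot = (40.7128, -74.0060)
print(should_trigger_safe_stop((40.7130, -74.0055), depot, 5.0))  # False: inside zone
print(should_trigger_safe_stop((40.9000, -74.3000), depot, 5.0))  # True: well outside
```

In a real deployment the check would run on infrastructure the attacker cannot reach from inside the car, which is exactly why moving such logic into the smart-city network helps.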

Since no hostile hacker has actually targeted self-driving vehicles so far, no real cybersecurity tests have been run against the self-driving software in a realistic setting. Adversarial machine learning needs real “foes” to train against; otherwise manufacturers are simply exposing their flanks to threats nobody is ready for. As Craig Smith, research director at cyber analytics firm Rapid7, explained in an interview: “Google has been a target of cyber attacks for years, whereas the auto industry hasn’t, so they have some catching up to do.” In this regard, carmakers seem particularly weaker than other companies, as they are less accustomed to preventing problems, especially ones entirely outside their traditional field.

Curiously enough, though, the solution may come from other industries whose engineers already possess a significant body of knowledge about protecting vehicles from malicious attacks. One such example is GuardKnox, a company that can protect entire fleets of cars, buses and other vehicles by deploying a security technology originally used to protect Israeli fighter jets. Yes, the F-35I and F-16I fighter jets, to be specific. Seriously. Jet. Fracking. Fighters. Deal with that, hackers!

GuardKnox’s protection solution has also been used for quite a while in other high-level security systems, such as the Iron Dome and the Arrow III missile defense systems. It enforces a formally verified, deterministic configuration of communication among the vehicle’s various networks, blocking any unverified communication. Any external communication that tries to reach the vehicle’s central gateway ECU must be verified, effectively locking down the entire system no matter how many vulnerable access points are present. This centralization is critical: it prevents hackers from reaching the autonomous car’s core systems, such as the brakes or wheels, through its communication network. (For more on ECUs, see Your Car, Your Computer: ECUs and the Controller Area Network.)
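The “deterministic configuration” idea can be sketched as a whitelist enforced at the central gateway: only message routes that appear in a verified configuration are forwarded to internal ECUs, and everything else is dropped. The names, message IDs and format below are invented for illustration; this is a sketch of the general approach, not GuardKnox’s actual implementation:

```python
# Hypothetical whitelist of (source, message_id) routes permitted by the
# verified configuration. Anything not listed here never reaches the
# internal vehicle bus.
ALLOWED_ROUTES = {
    ("infotainment", 0x1A0),   # e.g. volume / display messages
    ("telematics",   0x2B4),   # e.g. diagnostics requests
}

def gateway_filter(source: str, message_id: int, payload: bytes) -> bool:
    """Return True if the message may pass to the internal vehicle bus.

    A real gateway would also validate the payload itself; here the
    deterministic route check alone decides.
    """
    if (source, message_id) not in ALLOWED_ROUTES:
        return False  # unverified communication is blocked outright
    return True

print(gateway_filter("infotainment", 0x1A0, b"\x01"))  # True: whitelisted route
print(gateway_filter("telematics", 0x7DF, b"\x02"))    # False: unknown message id
```

Because the policy is a fixed, verifiable data structure rather than ad-hoc logic scattered across ECUs, it can be formally checked, which is the property the lockdown approach relies on.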

What the Future Holds

Every new generation of automobile technology comes with its own hazards and security risks. Self-driving cars are no exception, and right now we can safely assume that the cybersecurity risks associated with them are somewhat understudied. However, they’re not underestimated at all. In fact, all the attention currently given to these perceived risks is helping to encourage the more in-depth research required to manufacture the upcoming generation of autonomous vehicles in the safest way possible. As Moshe Shlisel, GuardKnox CEO and co-founder, pointed out, “manufacturers are now adopting a multi-layered approach to vehicle security, implementing state-of-the-art hardware and software changes, in order to enhance their ability to withstand malicious attacks.”