To address software challenges, engineers must rethink the technology to provide the highest levels of safety and functionality to end-users. This article discusses the main safety concerns and the current state of self-driving software.
What Is Autonomous Driving?
The (nuanced) terms ‘autonomous’ and ‘self-driving’ are not one-size-fits-all. By definition, there are six levels of driving automation (Levels 0–5), each of which is described below.
Level 0 (no automation)
Level 0 applies to conventional cars and trucks, where a human driver is responsible for starting, steering, and braking the vehicle.
Level 1 (aka ‘hands on’)
Level 1 vehicles contain driver-assistance features for intuitive control. However, a human remains fully responsible for the essential driving operations and for monitoring the environment.
Level 2 (aka ‘hands-off’)
Level 2 vehicles are semi-automated: computer software provides steering or acceleration, although a human is responsible for all safety-critical controls as well as the monitoring of their surroundings.
Level 3 (aka ‘eyes off’)
The autonomous vehicle (AV) software in Level 3 vehicles performs all the steering, acceleration, and braking functions, as well as the monitoring of the environment. However, the driver must remain ready to take over when prompted; current Level 3 systems, for example, hand control back to the human at road speeds above roughly 37 miles per hour (60 km/h).
Level 4 (aka ‘mind off’)
Level 4 autonomous driving not only performs all the functions of Level 3 (the steering, acceleration, braking, and environment monitoring) but also relieves the driver of safety-critical operations.
Level 5 (aka ‘steering wheel optional’)
Braking, steering, acceleration, and environment monitoring are fully automated in Level 5 AVs. Such vehicles do not require any form of human intervention to operate: they depend entirely on a combination of sensory and computing hardware and software for driving and environment monitoring.
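The six levels above map naturally onto a small enumeration. Below is a minimal sketch in Python; the class and helper names are illustrative, not from any standard library or specification:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The six driving-automation levels summarised above."""
    NO_AUTOMATION = 0      # human performs all driving tasks
    DRIVER_ASSISTANCE = 1  # 'hands on': assistance features only
    PARTIAL = 2            # 'hands off': software steers or accelerates
    CONDITIONAL = 3        # 'eyes off': system drives and monitors, human is fallback
    HIGH = 4               # 'mind off': driver relieved of safety-critical operations
    FULL = 5               # no human intervention required at all

def human_must_monitor(level: AutomationLevel) -> bool:
    """Below Level 3, the human is responsible for monitoring the environment."""
    return level < AutomationLevel.CONDITIONAL
```

For example, `human_must_monitor(AutomationLevel.PARTIAL)` returns `True`: in a Level 2 car the software may steer, but the person behind the wheel still watches the road.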
With the current state of the driverless vehicle industry, leading automakers have only managed to achieve Level 4 autonomous driving. Software limitations remain a significant obstacle to achieving Level 5 (full self-driving).
Autonomous vehicle software concept. Pictured: A behind-the-steering-wheel view of a self-driving car, whose machine vision system takes stock of the vehicles in front of it and displays the result on the car's infotainment screen.
How Do Self-Driving Cars Perceive the World Around Them?
Autonomous driving software is powered by discrete AI-focused chips embedded in the vehicle itself. These chips provide the capability to run neural networks onboard.
The primary function of such self-driving chips is to handle the large volumes of data coming in from the ultrasonic sensors, radar, light detection and ranging (LIDAR) sensors, and visual-spectrum cameras integrated throughout the vehicle. Tesla and NVIDIA are two manufacturers leading innovation in AV chip design and production.
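As a rough illustration of what handling that sensor data can involve, the sketch below fuses independent range estimates (say, from a radar return and a camera depth pipeline) using an inverse-variance weighted average. This is a simplified textbook technique, not any manufacturer's actual fusion algorithm:

```python
def fuse_range_estimates(measurements):
    """Inverse-variance weighted average of independent range estimates.

    measurements: list of (distance_m, variance_m2) tuples, e.g. one per
    sensor modality. Lower-variance (more trusted) sensors get more weight.
    """
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * d for (d, _), w in zip(measurements, weights)) / sum(weights)
    return fused

# Radar range is precise; camera-derived depth is noisier (values assumed).
radar = (42.0, 0.04)   # 42 m with small variance
camera = (45.0, 1.0)   # 45 m with large variance
print(round(fuse_range_estimates([radar, camera]), 2))  # → 42.12
```

The fused estimate lands much closer to the radar reading because the radar's variance is 25 times smaller, which is exactly the behaviour a perception stack wants from complementary sensors.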
Tesla’s most advanced autonomous driving chip is its HW3 FSD computer: a 260 mm² silicon chip that contains 6 billion transistors. It comprises two neural network arrays for redundancy, each delivering 36 trillion operations per second (TOPS).
NVIDIA’s Drive AGX Pegasus comprises an embedded supercomputing platform that processes input from LIDAR, camera, and radar sensors, with a theoretical 320 TOPS of computing performance.
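Treating these published numbers as theoretical peaks (peak TOPS figures are not directly comparable across architectures), a quick back-of-the-envelope comparison:

```python
# Headline compute figures quoted above (theoretical peaks, not sustained rates).
tesla_hw3_per_array_tops = 36   # each of HW3's two redundant arrays
nvidia_pegasus_tops = 320       # Drive AGX Pegasus platform total

tesla_hw3_total = 2 * tesla_hw3_per_array_tops
print(tesla_hw3_total)                         # 72 TOPS across both arrays
print(nvidia_pegasus_tops / tesla_hw3_total)   # ≈ 4.4x on paper
```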
The Need for Software Improvements in Autonomous Vehicles
The main goal of self-driving cars is to make road transport inherently safer by eliminating the chief cause of accidents: human error. For vehicles to be fully autonomous, they must be able to understand the world around them at the same level as—or ideally even better than—humans.
Achieving such a feat demands software capable of taking the correct action in every scenario a car may encounter. The following are some of the most critical areas in which current AV software systems must improve.
Obstacle Identification and Collision Avoidance
In one unfortunate accident in 2018, an Uber self-driving test car fatally struck a woman in Arizona. According to U.S. safety investigators, the tragic crash was due to “safety flaws” in the vehicle’s software: it failed to identify her as a pedestrian.
Further into the report, investigators state that the AV’s onboard computers detected the woman “5.6 seconds before impact, but didn’t correctly identify her as a person”. The public uproar following the accident raised concerns about the safety of AVs in public spaces.
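To appreciate what 5.6 seconds means at road speed, here is a small back-of-the-envelope calculation. The 40 mph figure is an assumption for illustration, not the vehicle's reported speed:

```python
def distance_travelled_m(speed_mph: float, seconds: float) -> float:
    """Distance covered at a constant speed; 1 mph = 0.44704 m/s exactly."""
    return speed_mph * 0.44704 * seconds

# At an assumed 40 mph, the 5.6 s between detection and impact corresponds
# to roughly 100 m of travel -- ample room to brake had the object been
# classified as a pedestrian in time.
print(round(distance_travelled_m(40, 5.6), 1))  # → 100.1
```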
A close-up of a GPS navigation system that is embedded in the front-side interior of an automated vehicle.
Navigation and Location Mapping
Autonomous vehicles depend both on onboard sensors to detect obstacles in their surroundings and on detailed maps of streets, signs, and public infrastructure to navigate safely.
However, current navigation technology is tailored to conventional automobiles rather than AVs. Moreover, mapping applications such as Apple Maps and Google Maps are highly inaccurate in sparsely mapped areas, such as remote towns or rural regions.
In conventional vehicles, this challenge isn’t much of a problem: a driver can easily rely on memory to find directions. Unfortunately, AVs depend solely on established GPS-based maps for navigation and, unlike humans, the software is poor at improvising.
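One way a planner might detect that it has strayed into a sparsely mapped area is to check the distance from the current GPS fix to the nearest mapped waypoint. Below is a minimal sketch using the standard haversine formula; the 50 m threshold and the function names are illustrative, not from any production navigation stack:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def map_coverage_ok(fix, mapped_waypoints, max_gap_m=50.0):
    """True if some mapped waypoint lies within max_gap_m of the GPS fix.

    In a sparsely mapped rural area the nearest waypoint may be far away,
    and a planner that relies solely on the map must flag the gap rather
    than improvise.
    """
    return any(haversine_m(*fix, *wp) <= max_gap_m for wp in mapped_waypoints)
```

A human driver in the same situation simply keeps going from memory; this check only tells the software that its map has run out, which is precisely the weakness described above.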
Cybersecurity Vulnerabilities
Another potential threat, owing to the cyber-physical nature of AVs, is the possibility that hackers could exploit security vulnerabilities in self-driving software. These failure points could allow attackers to take control of several vehicle systems, including the dashboard, the in-vehicle infotainment system, and the steering, transmission, and braking systems. Hackers could also steal sensitive information or cause the software to make driving errors.
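A common class of mitigation is to authenticate safety-critical commands so that forged messages are rejected. Below is a minimal sketch using HMAC-SHA256; the key, payload format, and helper names are illustrative, not any automaker's actual protocol:

```python
import hashlib
import hmac

def sign_command(key: bytes, payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so receivers can detect forged commands."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_command(key: bytes, message: bytes):
    """Return the payload if the 32-byte tag checks out, else None."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

key = b"per-vehicle secret"  # illustrative; real ECUs use securely provisioned keys
msg = sign_command(key, b"brake:0.4")
print(verify_command(key, msg))  # b'brake:0.4' -- authentic command accepted

tampered = bytes([msg[0] ^ 1]) + msg[1:]  # attacker flips a payload bit
print(verify_command(key, tampered))      # None -- tag no longer matches
```

Authentication alone does not close every attack path (keys can leak, and replayed messages need separate handling), but it illustrates why in-vehicle networks need cryptographic checks rather than trusting every message on the bus.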
The Current State of Driverless Software: Challenges Remain
Tesla’s Autopilot and Google’s Waymo are two of the most prominent self-driving software stacks in the AV industry. Both use AI algorithms for decision-making based on input from the sensors and autonomous-driving chips embedded in their vehicles.
Every day, Tesla gathers large volumes of data from thousands of its vehicles in order to analyse the performance of Autopilot and identify ways in which the software can be optimised.
The Autopilot software, however, grapples with several safety and performance issues. According to a recent review by a Tesla Model 3 owner, Autopilot incorrectly detected the road’s speed limit and drove straight through potholes.
Google’s Waymo software also faces its fair share of technical challenges. According to reports, Waymo can become confused when faced with unusual circumstances (such as overcrowded areas), and it performs poorly in inclement weather.
Add to this the unpredictable behaviour of the pedestrians and other drivers who share the road with AVs, and it is clear that autonomous driving software will require significant improvements in the coming years to fully address safety and reliability concerns before Level 5 autonomy can become a reality.