However, public perception can be hard to change, especially in the age of social media ‘fake news’. It is therefore important that a distinction is first drawn between semi-autonomous and truly autonomous systems.
There are five levels of autonomous driving systems. Levels one through three are considered semi-autonomous, whereas levels four and five are considered fully autonomous. A good example of a semi-autonomous vehicle is a Tesla running the Autopilot system, whereas Alphabet's (Google's) Waymo vehicle is a level-four fully autonomous system.
A graphic displaying five levels of autonomous driving systems. Image Credit: Synopsys.
Proving the safety and superiority of these autonomous systems is difficult, however, because manufacturers have not yet accumulated enough driving hours to make meaningful comparisons between non-autonomous, semi-autonomous, and fully autonomous vehicles.
In a traditional human-operated vehicle, for example, there are 1.18 deaths per 100,000,000 miles driven. The Waymo level-four autonomous vehicle mentioned above has not yet driven even 10 million autonomous miles, and it has required human intervention every 13 to 5,600 miles on average. Testing autonomous vehicles also means putting them in real traffic, which is problematic and has already resulted in one pedestrian fatality, that of Elaine Herzberg in March 2018. The National Transportation Safety Board concluded that Herzberg was crossing the road at night, was not paying attention at the time of the accident, and did not use a crosswalk, but these findings have done little to improve consumer perceptions.
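A quick back-of-the-envelope calculation shows why this mileage is statistically insufficient. The figures below are simply the ones quoted above, and the script is purely illustrative:

```python
# Back-of-the-envelope check using the numbers cited in the article:
# at the human-driver rate of 1.18 deaths per 100 million miles, how many
# fatalities would we *expect* over roughly 10 million autonomous miles?

HUMAN_DEATHS_PER_MILE = 1.18 / 100_000_000  # human-driver fatality rate
WAYMO_MILES = 10_000_000                    # upper bound cited above

expected_fatalities = HUMAN_DEATHS_PER_MILE * WAYMO_MILES
print(f"Expected fatalities at the human rate: {expected_fatalities:.3f}")
# → 0.118
```

Because the expected count is well below one, even a spotless autonomous record over this mileage cannot statistically demonstrate superiority over human drivers.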
Becky (R.L.) Peterson, an associate professor of electrical engineering at the University of Michigan who was involved with the study. Image Credit: University of Michigan.
Using Augmented Reality (AR) to Aid Testing and Simulation of Autonomous Vehicles
It is this lack of long-term road-testing data that has brought a group of University of Michigan researchers together to work to change the public perception that autonomous vehicles are inherently dangerous.
The research team hopes to boost the simulation of so-called ‘edge cases’: situations where something unexpected happens and a human driver, with far more preparation and experience, must step in and take control.
Edge cases are very hard to test for, however, so the researchers decided to create edge case simulations for autonomous vehicles using AR. Thus far, the researchers have designed and implemented two testing scenarios using their own simulation environment.
The first sees the test car perceiving a virtual train, projected into its real field of view through AR, as the train approaches a rail crossing in Mcity, the researchers' mock-city test facility, with the goal of seeing whether the car will stop in time and wait for the train to pass.
The second sees the car reacting to changing traffic lights and to vehicles in its environment that unexpectedly run red lights. The test car must identify the signal's current state and decide whether to stop or proceed. If a nearby virtual vehicle then runs a red light, the test car should be able to calculate its relative position and avoid a collision.
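In both scenarios, the stop-or-go decision ultimately comes down to comparing arrival times at the conflict point. A minimal sketch of that kind of logic follows; the names, numbers, and two-second margin are all illustrative assumptions, not the researchers' actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    position: float  # metres from the intersection along its lane
    speed: float     # metres per second toward the intersection

def time_to_intersection(v: Vehicle) -> float:
    """Seconds until the vehicle reaches the intersection (inf if stopped)."""
    return v.position / v.speed if v.speed > 0 else float("inf")

def should_yield(ego: Vehicle, intruder: Vehicle, margin: float = 2.0) -> bool:
    """Brake if the ego car and a red-light runner (or train) would occupy
    the conflict point within `margin` seconds of each other."""
    return abs(time_to_intersection(ego) - time_to_intersection(intruder)) < margin

# Ego car 30 m out at 10 m/s (arrives in 3 s); a virtual vehicle running the
# red light is 40 m out at 15 m/s (arrives in ~2.7 s) -> conflict, so brake.
print(should_yield(Vehicle(30, 10), Vehicle(40, 15)))  # → True
```

The same check covers the virtual train: only the intruder's position and speed change.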
The goal of this research is to compile an entire library of edge cases that can then be used to test autonomous driving systems in AR-based simulation. This will allow tests to be run repeatedly, improving the performance of autonomous systems with each repetition. The library is currently being built from data collected from real-world collisions and from drivers operating sensor-laden vehicles, capturing how drivers actually behave on the road.
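Such a library lends itself naturally to regression-style replay. A hypothetical sketch of how stored edge cases might drive repeated simulation runs (the data layout and pass criterion here are invented for illustration, not part of the researchers' tooling):

```python
from typing import Callable, Dict, List

# A hypothetical edge case: a flat dictionary of scenario parameters.
EdgeCase = Dict[str, float]

def regression_suite(
    library: List[EdgeCase],
    run_scenario: Callable[[EdgeCase], bool],
) -> Dict[str, int]:
    """Replay every edge case through a simulator callback and tally results."""
    results = {"passed": 0, "failed": 0}
    for case in library:
        if run_scenario(case):
            results["passed"] += 1
        else:
            results["failed"] += 1
    return results

# Toy stand-in for a simulator: 'pass' if the required braking distance
# fits within the available gap.
library = [
    {"gap_m": 50.0, "braking_m": 35.0},  # comfortable stop
    {"gap_m": 20.0, "braking_m": 35.0},  # too close -- should fail
]
print(regression_suite(library, lambda c: c["braking_m"] <= c["gap_m"]))
# → {'passed': 1, 'failed': 1}
```

Replaying the same cases after every software change is what turns a one-off AR test into a repeatable benchmark.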