https://mcity.umich.edu/new-way-test-self-driving-cars-cut-99-9-validation-costs/
In essence, the new accelerated evaluation process breaks down difficult real-world driving situations into components that can be tested or simulated repeatedly, exposing automated vehicles to a condensed set of the most challenging scenarios. In this way, just 1,000 miles of testing can yield the equivalent of 300,000 to 100 million miles of real-world driving.
While 100 million miles may sound like overkill, it's nowhere near enough data for researchers to certify the safety of a driverless vehicle. That's because the difficult scenarios they need to zero in on are rare: a crash that results in a fatality occurs only about once in every 100 million miles of driving.
Yet for consumers to accept driverless vehicles, the researchers say, tests will need to prove with 80 percent confidence that the vehicles are 90 percent safer than human drivers. To reach that confidence level, test vehicles would need to be driven 11 billion miles in simulated or real-world settings. But it would take nearly a decade of round-the-clock testing to log just 2 million miles in typical urban conditions.
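To see why the mileage requirement explodes for rare events, consider a simplified back-of-the-envelope bound. This is not the white paper's actual derivation (which compares against human drivers and arrives at the 11 billion figure); it assumes fatal crashes follow a Poisson process and that a test fleet logs miles without a single fatality, using only the rates quoted in this article:

```python
import math

# Back-of-the-envelope only -- a simplified zero-failure bound, not the
# white paper's full derivation. Rates are taken from the article's figures.
HUMAN_FATALITY_RATE = 1 / 100e6          # ~1 fatal crash per 100 million miles
TARGET_RATE = 0.1 * HUMAN_FATALITY_RATE  # "90 percent safer" than human drivers
CONFIDENCE = 0.80

# If fatal crashes are Poisson and a fleet drives m miles with zero
# fatalities, the confidence that the true rate is below TARGET_RATE is
# 1 - exp(-TARGET_RATE * m). Solving for m:
miles = -math.log(1 - CONFIDENCE) / TARGET_RATE
print(f"{miles / 1e9:.1f} billion failure-free miles")  # ~1.6 billion
```

Even this optimistic lower bound lands in the billions of miles; the fuller statistical comparison against human drivers is what pushes the requirement to roughly 11 billion.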
Beyond that, fully automated, driverless vehicles will require a very different type of validation than the dummies on crash sleds used for today's cars. Even the questions researchers have to ask are more complicated. Instead of asking "What happens in a crash?" they'll need to measure how well a vehicle can prevent one from happening.
“Test methods for traditionally driven cars are something like having a doctor take a patient’s blood pressure or heart rate, while testing for automated vehicles is more like giving someone an IQ test,” said Ding Zhao, assistant research scientist in the U-M Department of Mechanical Engineering and co-author of the new white paper, along with Mcity Director Huei Peng.
To develop their accelerated approach, the U-M researchers analyzed data from 25.2 million miles of real-world driving collected by two U-M Transportation Research Institute projects—Safety Pilot Model Deployment and Integrated Vehicle-Based Safety Systems. Together they involved nearly 3,000 vehicles and volunteers over the course of two years.
From that data, the researchers:
Identified events that could contain “meaningful interactions” between an automated vehicle and one driven by a human, and created a simulation that replaced all the uneventful miles with these meaningful interactions.
Programmed their simulation to consider human drivers the major threat to automated vehicles and placed human drivers randomly throughout.
Conducted mathematical tests to assess the risk and probability of certain outcomes, including crashes, injuries, and near-misses.
Interpreted the accelerated test results, using a technique called “importance sampling” to learn how the automated vehicle would perform, statistically, in everyday driving situations (a minimal sketch of this reweighting idea follows this list).
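To make the importance-sampling step concrete, here is a minimal sketch in Python. The braking model, distributions, and thresholds are all hypothetical stand-ins invented for illustration (the actual study fits its models to the naturalistic driving data); what the sketch shares with the method is the core idea of sampling from a deliberately dangerous distribution and reweighting by likelihood ratios to recover everyday-driving statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: a "crash" occurs when a lead human driver brakes
# harder than some threshold the automated vehicle cannot handle. Braking
# severity is exponentially distributed, so crashes are extremely rare.
MEAN_DECEL = 0.8     # m/s^2, everyday braking severity (assumed)
CRASH_DECEL = 10.0   # m/s^2, severity the AV cannot absorb (assumed)
true_p = np.exp(-CRASH_DECEL / MEAN_DECEL)   # analytic answer: ~3.7e-6

N = 100_000

# Naive Monte Carlo: nearly every sampled event is uneventful, so the
# estimator usually sees zero crashes -- the billion-mile problem.
naive = rng.exponential(MEAN_DECEL, N)
print("naive estimate:     ", (naive > CRASH_DECEL).mean())

# Importance sampling: draw from a far more aggressive proposal
# distribution so dangerous braking is common, then reweight each sample
# by the likelihood ratio f(x)/g(x) to keep the estimate unbiased.
PROPOSAL_MEAN = 10.0
x = rng.exponential(PROPOSAL_MEAN, N)
f = np.exp(-x / MEAN_DECEL) / MEAN_DECEL        # density under everyday driving
g = np.exp(-x / PROPOSAL_MEAN) / PROPOSAL_MEAN  # density under the proposal
print("importance sampling:", np.mean((x > CRASH_DECEL) * (f / g)))
print("analytic answer:    ", true_p)
```

That reweighting is what lets a short, crash-dense test stand in statistically for millions of ordinary miles.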
The accelerated evaluation process can be performed for different potentially dangerous maneuvers. The researchers evaluated the two most common situations they'd expect to result in serious crashes: an automated car following a human driver, and a human driver merging in front of an automated car (a toy version of the car-following case is sketched below). The accuracy of the evaluation was checked by comparing results from accelerated simulations against simulations of real-world driving. More research is needed on additional driving situations.
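As an illustration of what a crash-relevant interaction looks like in the car-following case, here is a toy kinematic check. All speeds, gaps, and deceleration limits are invented for the example; the study itself builds statistical models of these maneuvers from the naturalistic driving data:

```python
def following_crash(v0, gap, lead_decel, av_decel, reaction_time=0.5):
    """Toy check: does a following automated vehicle hit a hard-braking lead car?

    Both vehicles start at speed v0 (m/s) separated by gap (m); the lead
    car brakes at lead_decel (m/s^2) and the AV brakes at av_decel after
    reaction_time seconds. Valid when the AV brakes no harder than the
    lead, so the gap shrinks until the AV stops. All values here are
    illustrative assumptions, not parameters from the study.
    """
    lead_stop = v0**2 / (2 * lead_decel)                   # lead car's stopping distance
    av_stop = v0 * reaction_time + v0**2 / (2 * av_decel)  # reaction roll + braking distance
    return av_stop > lead_stop + gap                       # crash if the AV needs more road

# Example: both at 29 m/s (~65 mph), 20 m apart; lead brakes at 8 m/s^2,
# AV manages only 7 m/s^2 after a 0.5 s delay.
print(following_crash(29.0, 20.0, 8.0, 7.0))  # True -> a crash-relevant interaction
```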
The paper is titled “From the Lab to the Street: Solving the Challenge of Accelerating Automated Vehicle Testing.”