How Many Test Miles Make A Vehicle Safe?

Simulation and test can improve safety, but that requires a standard framework and definitions.


The road to reliable safety testing of autonomous vehicles (AVs) is shifting left. Standards groups are beginning to publish functional safety standards that could make it possible to verify what a machine-learning AV pilot application will do in a traffic situation even before hardware or software is released from validation testing.

This kind of approach has been possible for some time in SoCs, where simulations are run for tens of thousands of possible use cases prior to the release of products. But in the automotive world, or in any other safety-critical application, it represents a radically new approach.

“There’s an estimate that to get to fully autonomous driving you will have to drive 8 billion to 10 billion miles,” said Tony Hemmelgarn, president and CEO of Siemens PLM Software. “No one is going to drive 8 billion to 10 billion miles. I saw an article that said one company is in the lead because they’ve driven close to 9 million miles. I don’t believe anyone is in the lead. It’s about how you leverage software to validate these edge cases, because we can run these edge cases and start iterating to get to those billions of miles using a programmatic method.”

Last month, Waymo did exactly that. And tools are being rolled out by a variety of companies, such as Israeli startup Foretellix, to capture minor changes in the behavior of a single AV across thousands of variations of the same simulated decision point, and then aggregate those runs into a result with enough statistical validity to qualify as an answer.
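In spirit, that kind of coverage-driven sweep looks like the sketch below: a single decision point (here, a hypothetical unprotected left turn) is re-run across thousands of randomized parameter variations, and the pass/fail outcomes are rolled up into a pass rate with a confidence interval. The scenario, the toy simulator function and its parameters are illustrative assumptions, not any vendor's actual tooling.

```python
import math
import random

random.seed(0)

def simulate_unprotected_left_turn(gap_s: float, oncoming_speed_mps: float) -> bool:
    """Stand-in for a real simulator run. Returns True if the AV's decision
    (turn or yield) is judged safe for this parameter combination.
    Here we simply flag turns attempted into gaps that are too short."""
    time_needed_s = 4.0 + 0.05 * oncoming_speed_mps   # toy model of how long the turn takes
    decided_to_turn = gap_s > 4.5                      # toy model of the planner's policy
    return (not decided_to_turn) or (gap_s > time_needed_s)

# Sweep thousands of variations of the same decision point.
trials = 10_000
passes = sum(
    simulate_unprotected_left_turn(
        gap_s=random.uniform(2.0, 10.0),
        oncoming_speed_mps=random.uniform(8.0, 25.0),
    )
    for _ in range(trials)
)

# A Wilson score interval gives a defensible bound on the observed pass rate.
p_hat, z = passes / trials, 1.96
denom = 1 + z**2 / trials
center = (p_hat + z**2 / (2 * trials)) / denom
half = z * math.sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2)) / denom
print(f"pass rate {p_hat:.4f}, 95% CI [{center - half:.4f}, {center + half:.4f}]")
```

The value of the sweep is less the single pass-rate number than the ability to pin every failure to the specific parameter combination that produced it, which is what turns simulated miles into something closer to coverage.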

Meanwhile, Intel and a coalition of 10 other companies — including BMW, Volkswagen, Audi and Continental — came out with a white paper entitled “Safety First for Automated Driving.” The paper lays out functional safety definitions and test procedures, the contextual background and set of requirements without which it is impossible to know when there has been enough testing.

“The question I ask people is, ‘What is your test coverage plan?'” said Jamie Smith, director of global automotive strategy at National Instruments. “How do you know that you’ve tested enough? One answer I got on a panel was that you’re never done testing, which was just a general statement, not a policy. But it’s what a lot of people think.”

Defining safety
The endless road-test-mile challenge is rooted in the number of miles humans drive between accidents involving an injury or fatality. Extrapolating from that rate, an AV would have to pile up hundreds of millions, or even billions, of safely driven miles before it becomes statistically unlikely that the vehicle being tested is unsafe.
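A back-of-the-envelope version of that argument, assuming a human baseline of roughly one fatality per 100 million vehicle miles (the baseline figure itself is an assumption for illustration), runs as follows:

```latex
\lambda_{\text{human}} \approx \frac{1 \text{ fatality}}{10^{8} \text{ miles}} = 10^{-8} \text{ per mile}

% "Rule of three": zero failures observed in N miles bounds the failure rate
% at roughly 95% confidence by
\lambda_{\text{AV}} \lesssim \frac{3}{N}
\quad\Longrightarrow\quad
N \gtrsim \frac{3}{\lambda_{\text{human}}} = 3 \times 10^{8} \text{ miles}
```

And that only establishes parity with zero observed fatalities. Demonstrating a statistically significant improvement over the human rate pushes the requirement into the billions of miles, which is broadly consistent with estimates like the 8 billion to 10 billion figure cited above.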

“At some point we’re just going to have to agree what the test cases are that one needs to test against and do it that way,” Smith said. “The question will be, ‘Does that testing need to be done by an external organization? Or can people self-test and just document that they’re compliant?'”

That basically is what people have been doing.

“The biggest piece missing is a formal definition of what it means for an automated vehicle to drive safely because, frankly, without that you have nothing to measure or test against,” said Jack Weast, senior principal engineer at Intel and vice president of autonomous vehicle standards at Mobileye (which Intel bought in 2017). “The existing regulatory guidance, or lack thereof — and existing industry standards like ISO 26262, and even emerging standards such as SOTIF and UL 4600 — are all necessary but not sufficient.”

In the United States a “safe” car is one that complies with all the requirements of the Federal Motor Vehicle Safety Standards (FMVSS), which are regulations maintained by the National Highway Traffic Safety Administration (NHTSA). That includes a list of all the equipment that has to be built into a safe car, such as steering wheels and brake pedals, and what has to happen when the wheel is turned or the brake depressed. It doesn’t say much about electronics of advanced driver assist systems or autonomous driving, though, because FMVSS was written during the era when the only way a car would move was with a human at the controls. NHTSA has been promising updates to FMVSS for years, but hasn’t issued any regulations covering AVs or advanced driver assist systems, although it has come out with three iterative versions of voluntary guidelines.

The tech industry is moving to get around that gap with some data sharing about issues, according to Roger Lanctot, director of the automotive connected mobility practice at Strategy Analytics. “NHTSA has done all it can on passive safety with FMVSS the way it is now.”

Industry fragmentation
In fact, NHTSA is talking about how to handle “active safety,” Lanctot said, pointing to electronically enabled smart braking, lane-keeping, automatic parking and other advanced driver assist functions. NHTSA is not moving fast enough to keep up, but the industry is also pretty fragmented.

“There are half a dozen simulation companies coming to the front, and there are alliances, but each kind of on its own,” he said. “Intel came out with its proposal with some of the companies it aligned with. Nvidia came out to say it’s working with a group in Europe. Ford and VW are working on their own. So are GM and Toyota. But they’re going in their own directions and working on full autonomy. Who’s going to be able to afford that? Robotaxi? You might as well buy a limousine. Look at the problem. Cars shouldn’t hit things. There is technology to address that, but no business case for safety. Some of the alliances might come together, but there’s no big incentive. The auto industry is traditionally pretty standoffish about things like this.”

Others agree. “There’s a lot of discussion about use case requirements and engineering activity,” said Kurt Shuler, vice president of marketing at Arteris IP. “But when it comes to things like functional safety and security, it varies by tribe. There is similar language, but they’re not always talking about the same things or the same meaning.”

This is exacerbated by the fact that online updates can basically create different behavior in a car.

“Car companies can send out new software and create a new car with a new personality,” Shuler said. “In the past, you had to do a whole bunch of testing before you pushed out software. Now, a real-world test may be the only way to actually test it effectively.”

Defining the problem
This points to the need for a better set of definitions up front. “The key thing is to be able to formally define how you want these things to drive, what safe driving means for an automated vehicle in this city, in this country. With that formal definition, you can have a very formally verifiable design,” Weast said. “So you build that into the design and, from a verification standpoint, you’re checking that the implementation matches the specification, which is a totally different approach than just validation.”

The formal definition of safety has to be the first step. “And it’s completely different from saying, ‘I built something and I don’t know if it’s safe because I can’t do any formal verification, because it’s all based on AI, which is unexplainable and unverifiable. So let me gather statistical evidence that gives me more confidence that what I’ve built is safe,'” Weast said.
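To make the contrast concrete, a formally specified rule can be checked directly against a planner's behavior rather than inferred from accident statistics. The sketch below encodes one such rule, a worst-case following-distance requirement, as an executable check. The rule, the parameter values and the State interface are illustrative assumptions, not Intel's or Mobileye's published specification.

```python
from dataclasses import dataclass

@dataclass
class State:
    """Kinematic state of the ego vehicle and the vehicle directly ahead."""
    ego_speed: float    # m/s
    lead_speed: float   # m/s
    gap: float          # m, bumper-to-bumper distance

def min_safe_gap(state: State,
                 reaction_time: float = 1.0,   # s, assumed response delay
                 max_brake: float = 8.0,       # m/s^2, assumed braking of the lead car
                 ego_brake: float = 4.0) -> float:
    """Worst-case stopping-distance rule: the gap must cover the distance the
    ego vehicle travels during its reaction time plus the difference in
    braking distances if the lead vehicle brakes as hard as it can."""
    ego_stop = state.ego_speed * reaction_time + state.ego_speed**2 / (2 * ego_brake)
    lead_stop = state.lead_speed**2 / (2 * max_brake)
    return max(0.0, ego_stop - lead_stop)

def is_compliant(state: State) -> bool:
    """A verifiable pass/fail check of one formally stated safety rule."""
    return state.gap >= min_safe_gap(state)

# Example: closing on a slower vehicle with a 30 m gap.
print(is_compliant(State(ego_speed=25.0, lead_speed=15.0, gap=30.0)))
```

Because the rule is explicit, the same check can be applied in simulation, during design verification, or as a runtime monitor, which is what makes a specification-first approach verifiable rather than merely testable.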

It’s probably fair to say the automakers have been competing on safety for 100 years and are only now learning how to collaborate on it. Most of the collaboration and effort is around building test and validation specifications that could address things like vetting the behavior of an ML-driven autopilot application to make sure it is acting safely, even if there is no fault in its hardware and no bugs in its software.

“You could still have an ASIL D automated vehicle [one that satisfies the most stringent level of written functional safety requirements] that crashes into things and people all over the place,” Weast said. “You could have a safety assessment that’s fully conformant with UL 4600. So it’s probably a lot better information than you’re getting elsewhere. But you still may not be able to say that the report is correct.”

More standards ahead
“It is still pretty early days in the validation process for AVs, although there is a lot of talk about NHTSA and standards,” according to Chad Partridge, CEO of road-test simulation and scenario provider Metamoto, which partners with Foretellix to offer customers statistically defensible analysis of the safety performance of their AVs. “But the regulatory Band-Aids are not big things for us, necessarily. Companies are still moving forward with their validation processes and customers are defining their own validation processes based on their own requirements. But they’re all doing things differently. There’s not a lot of standardization.”

Standardization is likely to take a while, but there is a lot of standards work in the pipeline and a growing acceptance that safety validation is a big deal that benefits anyone selling to the auto industry.

“The Society of Automotive Engineers (SAE) Validation and Verification Task Force has kicked off an effort to formally define safety principles as a first step toward an overall test assessment methodology for automated vehicles,” according to Weast, who co-chairs the committee.

—Ed Sperling contributed to this report.


