There is no right answer yet for autonomous vehicles, and that’s causing problems.
The infusion of more semiconductor content into cars is raising the bar on reliability and changing the way chips are designed, verified and tested, but it also is raising a lot of questions about whether companies are on the right track at any point in time.
Concerns about liability are rampant with autonomous and assisted driving, so standards are being rolled out well in advance of the technology. And due to the long design and implementation cycle in automobiles, some of these standards are changing even before new technologies are added into cars. That, in turn, has pushed chipmakers to ratchet up reliability so their devices can withstand any forthcoming changes in the standards.
“There are two parts to this,” said Lee Harrison, automotive IC test marketing manager at Mentor, a Siemens Business. “The first is linked to how we have been doing test all along, where you have traditional manufacturing testing. What’s different there is the test qualification requirements are higher. The second is functional safety, where IC manufacturers have to test a device while it’s in operation. That is new for a lot of customers. When you talk about functional safety, all of the mechanisms to test for safety have to be there in the vehicle all the time.”
The challenge in both cases is how to stay current with an evolving set of standards. ISO 26262 has been in transition across the industry to address more sophisticated ECUs and the control software running on them. In addition, ISO 21448 (Safety of the Intended Functionality, or SOTIF) has been developed by ISO to make sure that smart devices do not make bad decisions. ISO 21448 also includes language requiring that a system look for and prevent damage from unknown risks—a requirement that nearly got it sent back to the drawing board by ISO members, who were comfortable with the idea of components that self-correct when a bit is flipped by cosmic rays or other random occurrences, but thought it was too vague to assign responsibility for unknown unknowns.
To make matters even more confusing, not all carmakers are moving at the same pace. Some car companies are pushing ahead to roll out full autonomy as soon as possible, while others are holding back until the entire automotive ecosystem is ready.
“What’s at the core of a lot of this is how hard it is to get to full autonomy with all the different sensors, where you have to validate the behavior without a known ‘right decision’ to compare it to,” said Jeff Phillips, head of marketing for automotive solutions at National Instruments. “Knowing the correct answer has been the predominant methodology in test, and you’re verifying that you get the right one. In the market, we’re seeing companies trying to continue investing in the way they need to invest, while consumer uncertainty is driving down the volume of cars being purchased. That is causing a lot of the companies investing in these technologies to cut costs in other areas in order to fund the high-growth areas. And even within autonomy itself, maybe having driving assistance is enough to improve the safety of the car and we don’t need full autonomy. Meanwhile, other companies are jumping straight from Level 1 and 2 to Level 4.”
Some car companies are pursuing both approaches. Toyota, for example, has “Guardian” and “Chauffeur” modes for assisted and autonomous driving, respectively. And Tesla offers autonomous driving as an option that can be updated as the technology improves.
Standards will continue to evolve alongside the technology, but how all of this will fit together is far from settled. SOTIF and ISO 26262 are just the beginning. Standards are in transition across a broad swath of the industry. The Society of Automotive Engineers (SAE), for example, published a terminology standard for automated vehicle testing in June. It also published a study in June describing the “unsettled technologies” being used for autonomous vehicle safety verification, which it characterized as inadequate and potentially disastrous. And SAE warned that the industry needs to take “a major leap forward in the validation of these ADS (automated driving systems) technologies” or risk losing or delaying the most immediate benefits of automated vehicles.
The sheer complexity of the system-of-systems-of-systems that defines AVs also introduces packaging and component integration issues that may require more rigorous direct inspection, especially to ferret out erroneous sizing, positioning or excess gaps that may not be detectable by other means, according to Subodh Kulkarni, president and CEO of CyberOptics.
“I’m a little skeptical about the extent of Internet of Things appliances and connectivity, because as much as I would love to have a refrigerator be intelligent, I don’t see much advantage,” Kulkarni said. “5G is a big driver of packaging, but autonomous vehicles are big rivals, as well, because the drivers come down to functions that rely on advanced packaging and assembly that don’t let you get under the covers in an automated way. So you see people putting elegant pieces of equipment together, and then finding they need operators to use scopes or visual light to inspect them and make sure everything is okay.”
Different test strategies
It’s still not clear whether existing test approaches will work well enough for standards that are still evolving, or which may be rolled out in coming years. One of the big problems is that companies are developing autonomous and assisted driving capabilities for competitive differentiation.
“There are pockets within automotive where we are seeing progress because we have seen commitment to standards,” said NI’s Phillips. “The recent release from the 3GPP set some standards that the automotive market has started to converge around. And 5GAA — the 5G Automotive Association — has started to suck in some of the ancillary bodies and become the trusted standard for how the communications protocols will be defined for the vehicle. That will standardize how cars from different car manufacturers that are on different platforms can communicate information to each other.”
That makes testing a relatively straightforward process in that particular area. But in the absence of those kinds of industrywide standards, testing becomes much more localized.
“Right now everyone is trying to isolate and test the individual pieces,” Phillips said. “If you think about an autonomy platform, you would try to validate that the sensors are capturing the right data. You can isolate the software from that and do a physical measurement or electronic test and validation. But then when you plug in the software, what we’re seeing is that most companies are relying on a set of pre-configured scenarios they’re testing against, and then using simulation to broaden that to the whole scenario set. Take radar, for example. Most companies test radar at 1 foot, 5 feet, 10 feet, 20 feet. But you need to measure an infinite set of points along that, rather than these pre-configured distances, because the environment has to be set up in a lab and you can’t alter everything you’re trying to capture. Hopefully we’re on a good enough set of finite variables, because we’re trusting simulation for the rest.”
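As a rough illustration of the tradeoff Phillips describes, the sketch below mixes a handful of physically measured radar distances with simulated points at a finer step. The distance list, step size and the simulate_radar_range() stub are hypothetical assumptions for this sketch, not any company’s actual test plan.

```python
# Rough sketch: combine a few lab-measured radar distances with simulated
# points at a finer step. The distances, step size, and simulate_radar_range()
# are illustrative assumptions, not any vendor's actual test plan.

MEASURED_DISTANCES_FT = [1, 5, 10, 20]   # points actually measured in the lab

def simulate_radar_range(distance_ft: float) -> float:
    """Placeholder radar model; returns the distance the sensor would report."""
    return distance_ft  # assume an ideal sensor for the sketch

def build_test_plan(step_ft: float = 0.5, max_ft: float = 20.0):
    """Interleave lab-measured points with simulated points at a finer step."""
    plan = []
    d = step_ft
    while d <= max_ft:
        if d in MEASURED_DISTANCES_FT:
            plan.append((d, "lab", None))                            # measured on the bench
        else:
            plan.append((d, "simulation", simulate_radar_range(d)))  # filled in by the model
        d = round(d + step_ft, 3)
    return plan

if __name__ == "__main__":
    plan = build_test_plan()
    simulated = [entry for entry in plan if entry[1] == "simulation"]
    print(f"{len(plan)} test points, {len(simulated)} covered only by simulation")
```

The point of the sketch is simply that most of the scenario space between the pre-configured lab distances is covered only by the model, which is exactly the trust Phillips is flagging.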
Flexibility in test is critical, particularly where the standards and technology are still evolving.
“The big challenge is that you may need tests in the future that you didn’t think of when you added test capabilities, so the test has to be modifiable,” said Mentor’s Harrison. “We’re seeing more requirements to run test in the car while the car is operating. That’s creating design challenges now. When do you run the test, how often, and how does that impact the vehicle? If you rely on a navigation device in a truck, for example, it has to be tested to make sure there are no faults. That device can be taken offline for 100 milliseconds or so to make sure that any deviation does not cause a problem.”
Harrison noted there are three phases for automotive testing — power-on (key-on) test, in-system test, and key-off test. The power-on test happens when a vehicle is started up. The in-system test runs whenever it can do so safely, and it picks up wherever it left off if it needs to be stopped. The key-off test can happen anytime the car is off, and it can be used to run deeper memory or logic test algorithms.
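A minimal sketch of how the resumable in-system phase might be scheduled is shown below, assuming hypothetical test segment names and a roughly 100 ms budget per segment, in line with Harrison’s navigation example; the key-on and key-off phases are noted only in comments.

```python
# Rough sketch of the in-system test phase. Key-on tests would run at startup
# and key-off tests when the car is off (not modeled here); in-system tests run
# only when a safe window opens and resume where they left off. Segment names
# and the 100 ms budget per segment are assumptions for this sketch.

IN_SYSTEM_SEGMENTS = ["nav_mem_bist", "radar_logic_bist", "can_loopback"]

class InSystemTester:
    """Runs in-system test segments opportunistically and resumes after interruptions."""

    def __init__(self, segments):
        self.segments = segments
        self.next_index = 0  # remembers where the last pass stopped

    def run_when_safe(self, window_ms: float) -> None:
        """Run as many pending segments as fit in the safe window, then stop."""
        while self.next_index < len(self.segments) and window_ms >= 100:
            segment = self.segments[self.next_index]
            print(f"running {segment}")
            window_ms -= 100           # assume each segment consumes ~100 ms
            self.next_index += 1
        if self.next_index == len(self.segments):
            self.next_index = 0        # full pass complete; start over next time

if __name__ == "__main__":
    tester = InSystemTester(IN_SYSTEM_SEGMENTS)
    tester.run_when_safe(window_ms=150)  # runs one segment, remembers its place
    tester.run_when_safe(window_ms=250)  # resumes with the remaining two segments
```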
Automakers are making progress in all of these areas, but there is still no way to compare one against another or standardize on one best-practice approach for a large subset of the main functions, according to Roger Lanctot, director of the automotive connected mobility practice at Strategy Analytics.
“If you test drive a few cars with lane-keeping functions, you notice some are a little more aggressive if you are one of those people who don’t use your blinker to show that you’re changing lanes,” Lanctot said. “Some will resist, or light up a signal on the dash; with others, even if there is plenty of room around you, they will shove you over and you’re fighting for control of the wheel.”
Others agree. “What’s needed is a paradigm shift in how we approach this,” said Roy Fridman, vice president of business development at Foretellix. “We need to stop counting miles, because that is not the right metric to say whether your car is safe or not. You need a way to define or understand what part of the scenario space you cover. Of the huge number of scenarios and situations an autonomous vehicle can encounter, you have to know how many you have examined. Basically, from the scenarios you cover, you have to extract safety metrics to say ‘this metric was 50% full, and that metric was 70% full.’ That will give you a definable way to quantify where you are now, and you can carry that forward to the next step as you try to improve, and continue until the ultimate step when you decide you are all done. Right now, there’s no way to know when you are done testing, when it is safe.”
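The coverage-driven approach Fridman describes can be pictured as bookkeeping over a predefined scenario space. The sketch below assumes a small, made-up scenario catalog and simply reports how full each category is; a real coverage model would be far larger and parameterized.

```python
# Rough sketch of scenario-space coverage bookkeeping. The categories and
# scenario names are invented for illustration only.

SCENARIO_SPACE = {
    "cut_in":       {"slow_cut_in", "fast_cut_in", "cut_in_while_braking"},
    "pedestrian":   {"crossing_at_night", "crossing_occluded", "jaywalking"},
    "intersection": {"unprotected_left", "red_light_runner"},
}

def coverage_report(executed: set) -> dict:
    """Return the fraction of each category's scenarios that have been exercised."""
    return {
        category: len(scenarios & executed) / len(scenarios)
        for category, scenarios in SCENARIO_SPACE.items()
    }

if __name__ == "__main__":
    # Scenarios exercised so far, e.g. collected from track tests and simulation runs.
    executed = {"slow_cut_in", "fast_cut_in", "unprotected_left"}
    for category, frac in coverage_report(executed).items():
        print(f"{category}: {frac:.0%} covered")
```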
New safety standards from ISO and SAE eventually may give developers a clear view into the decision-making of a machine-learning application, but a retroactive diagnosis is less efficient and likely a lot less effective than designing the application correctly in the first place, said Jack Weast, senior principal engineer at Intel.
Data everywhere
Alongside test, analytics has made big inroads into the automotive world because it can identify patterns in reliability across large data sets that are undetectable by the human eye.
Traditional cars are considered safe if they comply with the Federal Motor Vehicle Safety Standards (FMVSS), which include both a list of required safety features and descriptions of how those features are supposed to work. The standards have never been updated to cover more than the most basic electronic functions, however, meaning there are no U.S. federal rules defining what equipment is required, or how a particular feature should operate in order to be considered safe.
Better data, coupled with machine learning, goes a long way toward filling those gaps while still allowing search criteria to be modified as necessary over time.
“Data runs all the way to traceability of the supply chain,” said Uzi Baruch, general manager of OptimalPlus’ Electronics Division. “It’s not just the Tier 1s. It’s also what they’re getting from their suppliers. It gives inside information about which suppliers work best, because some components may be produced in the United States, some in Germany, and some in China. You can use data to compare measurements.”
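As a simple illustration of that kind of comparison, the sketch below groups one parametric test measurement by supplier site and summarizes its spread. The CSV layout, column names and file name are assumptions for the sketch, not OptimalPlus’ actual data model.

```python
# Rough sketch of comparing one measured parameter across supplier sites.
# The file name, column names, and parameter are assumptions for illustration.

import csv
import statistics
from collections import defaultdict

def compare_suppliers(path: str, parameter: str = "vdd_leakage_ua") -> None:
    """Summarize one measured parameter per supplier site from a test-data CSV."""
    by_supplier = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_supplier[row["supplier_site"]].append(float(row[parameter]))

    for site, values in sorted(by_supplier.items()):
        print(f"{site}: n={len(values)} "
              f"mean={statistics.mean(values):.3f} "
              f"stdev={statistics.pstdev(values):.3f}")

if __name__ == "__main__":
    # Hypothetical export from the test floor or a Tier 1 supplier data feed.
    compare_suppliers("final_test_measurements.csv")
```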
But all of this data carries a price.
“In automotive in the past, we had to hold data for 10 to 15 years,” said John O’Donnell, CEO of yieldHUB. “Now it may be as long as 18 years. The only way to do that is in the cloud, where you have 99.999999999% reliability.”