The Uncertainty Of Certifying AI For Automotive

Making sure systems work as expected, both individually and together, remains challenging. In many cases, standards are vague or don’t apply to the latest technology.


Nearly every new vehicle sold uses AI to make some decisions, but so far there is no consistency in what is being developed, where it is being used, and whether it is compatible with other vehicles on the road.

This fragmentation is partially due to the fact that AI is still a nascent technology, and cars and trucks sold today may be significantly different from those that will be sold several model generations in the future. That makes it difficult to create standards, because no one knows yet how this technology will evolve. It’s also partially due to the fact that new autonomous features are highly competitive, and carmakers and their suppliers are working in secret to bring the latest technology to market.

As a result, while carmakers typically adhere to standards such as ISO 26262, with its ASIL A through D risk classifications, and AEC-Q100, a lot of technology falls outside of those standards. And because AI is being used in many applications within the car, different AI algorithms and AI graphs will be used depending on the specific application.

“Most of us know that the safety-critical ADAS applications are AI-based, and that’s when you’re doing automatic emergency braking or lane-keeping or adaptive cruise control,” said Ron DiGiuseppe, automotive IP segment manager at Synopsys. “But there are other applications in the car that many people don’t realize are also AI-based, such as the powertrain, and in electric vehicles, the electric motors have lots of sensors. Managing the electric motors can be an AI application. There are various benefits to having AI manage electric vehicles, and also do some predictive analytics for reducing hardware costs and removing some of those internal sensors in the electric motor and powertrain by using AI. Infotainment, while a separate application, uses AI differently, such as in a driver monitoring system that uses images from cameras to make sure the driver is awake. The AI then has to interpret if the driver is alert.”

While the auto industry has relied on certifications and compliance testing for decades, that kind of standardization hasn’t happened yet for AI.

“We cannot talk about compliance, as no standards/regulations yet exist for AI,” said Riccardo Vincelli, director of engineering for high-performance computing at Renesas Electronics. “We can only talk today about ‘suitability’ for the target application. In the case that AI systems are employed into non-safety applications, such as speech recognition, the challenge is mainly to have functions that can fully satisfy customer expectations. But for safety applications like automated driving, the challenge is big, and we need to create defensible arguments why solutions based on AI are considered to be sufficiently safe. This is still a big challenge, and effort is being spent to reach this target. In fact, I cannot say that today there are applications in the field based on AI systems that can be considered safe unless this AI is used together with functions based on conventional technology.”
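One common way to realize that pairing of AI with conventional technology is a “doer/checker” architecture, in which a deterministic monitor has final authority over whatever the AI proposes. The sketch below is a minimal illustration; the names, limits, and time-to-collision rule are invented for the example, not taken from any product or standard.

```python
# A minimal sketch of a "doer/checker" pairing: the AI proposes, a
# deterministic monitor disposes. All names, limits, and thresholds are
# invented for illustration; they come from no product or standard.

from dataclasses import dataclass

@dataclass
class BrakeCommand:
    deceleration_mps2: float  # requested deceleration in m/s^2

def ai_planner(sensor_frame) -> BrakeCommand:
    """Stand-in for a trained model's (probabilistic) output."""
    return BrakeCommand(deceleration_mps2=12.0)  # placeholder request

def deterministic_checker(cmd: BrakeCommand,
                          speed_mps: float,
                          range_to_object_m: float) -> BrakeCommand:
    """Conventional, reviewable logic that bounds whatever the AI asks for."""
    MAX_DECEL = 9.0  # illustrative actuator limit, m/s^2
    # Clamp the AI's request into a provable envelope.
    decel = min(max(cmd.deceleration_mps2, 0.0), MAX_DECEL)
    # Independent plausibility rule: if time-to-collision is critical,
    # apply a fixed, analyzable response regardless of the AI's output.
    time_to_collision_s = range_to_object_m / max(speed_mps, 0.1)
    if time_to_collision_s < 1.0:
        decel = MAX_DECEL
    return BrakeCommand(deceleration_mps2=decel)

safe_cmd = deterministic_checker(ai_planner(None), speed_mps=20.0, range_to_object_m=15.0)
```

The appeal of this split is that the checker is small and deterministic enough to be developed and verified under a conventional V-model, while the AI component is treated as an untrusted suggestion.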

This hasn’t slowed down the pace of AI development and deployment in vehicles. But to bring this new technology to market, algorithms need to be trained and then validated against real vehicle workloads, which can vary greatly depending on the type of inferencing chips or accelerators.

“This training happens offline,” said David Fritz, vice president of hybrid and virtual systems at Siemens EDA, “and represents itself into these neural networks, of which there might be many, and they are the same as non-automotive applications. The process is the same, even though the inputs are different. The main point is, the results of that training are neural network configurations and weightings, and the results of that training are just like any other software — it still needs to run on a piece of hardware. That hardware could be an NPU, GPU, CPU, or a DSP. Anything that does the AI inferencing is like software running on hardware. In terms of certifying that for ASIL-D or ISO 26262 it’s the same. You want to inject faults into the hardware that’s actually performing the inferencing. You want to put false input data into the inferencing.”
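In miniature, such a campaign might look like the toy sketch below, which corrupts weights as a stand-in for hardware faults and perturbs inputs to model the false input data Fritz mentions, then counts how often the output leaves a tight acceptance envelope. Everything here is simplified for illustration.

```python
# Toy fault-injection sketch: flip bits in weights (a stand-in for
# hardware faults in the inferencing engine) and perturb inputs (false
# input data), then check for observable output deviations.

import numpy as np

rng = np.random.default_rng(0)

# A tiny stand-in "network": one dense layer plus ReLU.
W = rng.normal(size=(8, 4)).astype(np.float32)

def infer(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    return np.maximum(weights.T @ x, 0.0)

def flip_one_bit(weights: np.ndarray) -> np.ndarray:
    """Emulate a single-event upset: flip one random bit of one weight."""
    faulty = weights.copy()
    bits = faulty.view(np.uint32).reshape(-1)
    idx = int(rng.integers(bits.size))
    bits[idx] ^= np.uint32(1) << np.uint32(int(rng.integers(32)))
    return faulty

x = rng.normal(size=8).astype(np.float32)
baseline = infer(x, W)

weight_faults_seen = 0
input_faults_seen = 0
for _ in range(100):
    # Fault model 1: corrupted hardware state (weights).
    faulty_out = infer(x, flip_one_bit(W))
    # Fault model 2: false input data fed into the inferencing.
    noisy_out = infer(x + rng.normal(scale=0.5, size=8).astype(np.float32), W)
    # A real campaign would check a safety mechanism; here we just count
    # runs where the output left a tight acceptance envelope.
    if not np.allclose(faulty_out, baseline, atol=1e-3):
        weight_faults_seen += 1
    if not np.allclose(noisy_out, baseline, atol=1e-3):
        input_faults_seen += 1

print(f"weight faults observable: {weight_faults_seen}/100, "
      f"input faults observable: {input_faults_seen}/100")
```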

Murky standards, but more of them
Where standards do exist, they tend to be very broad. ISO 21434 is a case in point. “Recently, a lot of those requirements have been flowing down to the semiconductor companies,” said Jason Oberg, chief technology officer at Cycuity. “It’s a whole process of threat analysis and risk assessment (TARA) that says you need to go through and build out this big spreadsheet that documents all of your security requirements, which includes how you actually validated and verified that you’ve met the security requirements and the supporting data. That’s a process that the semiconductor companies are having to go through right now. We fit into that because they have to verify the security requirements, and make sure they provide the right evidence. Given that ISO 21434 is fairly general, if it’s a new function of the chip, whether it’s something simple or an actual ADAS AI-type use case, they’re going to have to go through that same type of certification, and they’re going to have to document the security requirements, provide evidence, and so on.”

While IP vendors work to have their products ISO 21434-certified, SoC developers are getting entire SoCs automotive certified. “This is something where we see some companies being more proactive, because they see it as a competitive advantage,” Oberg said. “If automotive is a big market for them, they certify their products. And whether it’s an IP vendor or not, that activity is ramping up. It’s not at the point where it’s being forced, but there are a lot of folks trying to get ahead of it because they know it’s going to be mandated at some point.”

At present, many of the standards applied to AI have been in place since before AI was as ubiquitous, or as well trained for a variety of applications, as it is today. As a result, while chipmakers still need to prove their devices will behave reliably and within spec, the compliance testing tends to be more general. Depending on what the AI is controlling, those tests can be extremely rigorous, but they may not pick up on all the nuances of how the AI will behave on the road.

“AI plays a significant role in the automotive industry,” said Amit Kumar, product marketing director for vision, AI, radar, lidar and DSP cores at Cadence. “One would think that AI gets implemented at the vehicle level only, but AI plays a significant role in designing vehicles at the lab level and gets implemented at the design level, the production/factory level, QA and testing levels, and for predictive maintenance. Then it eventually reaches the vehicle itself, which needs to operate seamlessly and within the parameters of standards like ISO 26262 (vehicle functional safety), SOTIF (safety of the intended functionality), and so forth. These machines trained with AI algorithms need to safely perform tasks that previously required an experienced assembly team on the production floor, and eventually an experienced driver to operate a vehicle in an on-road traffic environment and to perform driving maneuvers better than an experienced driver, with full safety.”

Safety is paramount in automotive applications, and that understanding has to anchor any plan for implementing AI, Kumar explained. “One needs to ensure that their AI systems meet safety requirements. Certification bodies like Underwriters Laboratories (UL) provide safety training for autonomous vehicles and include machine learning safety. Risk assessments are crucial. Predicting potential hazards and mitigating them is a key function of AI applied to perception systems, path planning, and motion control. Companies like Tesla use HydraNets, in which many images coming from a vehicle’s perception sensor suite are sent to a single backbone and then re-distributed onto multiple network heads, each responsible for functions like object detection, traffic light recognition, lane marking detection, etc. These networks are then fused onto a transformer to perform either a spatial fusion or a temporal fusion. These HydraNets, and the platform they run on, are designed with functional safety (FuSa) standards built into their architecture.”
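The shared-backbone, multi-head structure Kumar describes can be sketched in a few lines of PyTorch. This is a generic illustration of the pattern, not Tesla’s actual architecture; every layer size and head name below is made up.

```python
# Generic multi-head perception network: one shared backbone feeds
# several task-specific heads. Illustrative only; layer sizes and head
# names are invented, and this is not Tesla's actual HydraNet.

import torch
import torch.nn as nn

class MultiHeadPerception(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared backbone: one feature extractor for all tasks.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads re-use the same backbone features.
        self.heads = nn.ModuleDict({
            "objects":       nn.Linear(32, 10),  # e.g., object classes
            "traffic_light": nn.Linear(32, 4),   # e.g., red/yellow/green/none
            "lane_marking":  nn.Linear(32, 2),   # e.g., lane present/absent
        })

    def forward(self, images: torch.Tensor) -> dict[str, torch.Tensor]:
        features = self.backbone(images)
        return {name: head(features) for name, head in self.heads.items()}

# Usage: one forward pass serves every head at once.
outputs = MultiHeadPerception()(torch.randn(1, 3, 224, 224))
```

The spatial or temporal fusion Kumar mentions would sit on top of these head outputs; it is omitted here to keep the pattern visible.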

And even before vehicles are manufactured, the machines responsible for building them are trained, and here ML algorithms and AI play a crucial role. “When it comes to design and manufacturing, AI plays a role in vehicle manufacturing,” Kumar said. “AI-powered solutions and ML algorithms are used to improve production processes, as well as to speed up data classification during risk assessments and vehicle damage evaluations. Here, technologies like computer vision and NLP are widely applied in manufacturing. As well, collaborative robots can handle critical tasks like material handling and inspections in a safe environment and with efficiency.”

Processes and procedures
To ensure automotive safety and security compliance, the various applications in an automotive system have to be broken down by function. “Something like ADAS is obviously safety critical, so AI applications for ADAS would have a different level of safety criticality than the infotainment generative AI where you’re talking to the car to turn on the radio, change the temperature, or make a phone call,” said Synopsys’ DiGiuseppe. “That has a different level of safety criticality, so the safety is application-based.”

Once the risk is defined, the automotive safety integrity level (ASIL) rating is determined. “Different applications have different ASIL safety levels, depending on the risk,” he said. “The risk is composed of, if a failure happens, what would be the severity of that failure? For instance, if a failure happens in the radio, generally that’s not considered a high-severity type of failure, while an ADAS failure is. So there are different classes of severity. There are also different classes of probabilities. What is the probability that a failure would happen in the ADAS system? What are the types of failures? That leads to the consideration of what that ASIL target is. You look at the severity of a possible failure, and the probability of that failure happening. That helps you decide what your safety integrity level is.”
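ISO 26262 formalizes this with three factors rather than two: severity (S), exposure (E), and controllability (C), the last capturing how readily a driver can control the outcome of a failure. A well-known shortcut reproduces the standard’s classification table by summing the three ratings, as in the sketch below; the normative table in ISO 26262-3 remains the authority, and the example ratings are illustrative.

```python
# Sketch of ASIL determination per ISO 26262-3, using the well-known
# "sum" shortcut that reproduces the normative table:
# S+E+C of 7 -> ASIL A, 8 -> B, 9 -> C, 10 -> D, below 7 -> QM.

def asil(severity: int, exposure: int, controllability: int) -> str:
    """severity in 0..3, exposure in 0..4, controllability in 0..3."""
    if min(severity, exposure, controllability) == 0:
        return "QM"  # any S0/E0/C0 rating means no ASIL is assigned
    total = severity + exposure + controllability
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

# Illustrative ratings for DiGiuseppe's two examples:
print(asil(severity=3, exposure=4, controllability=3))  # ADAS braking failure -> ASIL D
print(asil(severity=1, exposure=4, controllability=1))  # radio failure -> QM
```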

Then, once the ASIL is decided, there is a system-level challenge to break down the hardware and software components of the system, and safety assessments are done to hit those ASIL targets.

“If it’s an AI-based application — let’s say, ADAS with high levels of possible severity — it could be life critical if the ADAS system fails,” DiGiuseppe said. “That has high possible severity. The automakers break down the system to their suppliers, and in the case of an ADAS module that one of the big Tier Ones supplied to the OEM, within that ADAS module are the ADAS semiconductor processors. You break down the system into its baseline components, and in the semiconductor chips where the ADAS processors are composed of different IP, you’re breaking down from a system to all of its component parts, including all the way down to the sub-IP functions like the AI accelerators in those ADAS processors. You break it down to all the component parts and you have to have an ASIL assessment roll up, from the top to the bottom, and bottom to top. That’s what all of these supply chains need to do, and each supplier in the supply chain rolls up that safety information to the next higher level. The IP supplier provides the safety work products/safety assessments to semiconductor customers. Then the semiconductor company does that on an SoC level, provides it to the module supplier, the module supplier will do it on the whole module system, and then the automaker will do it on the whole application. In breaking down the systems, you have both the software components, so the AI component of software, as well as the hardware component, and you have to do both the hardware and the software safety assessments.”
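In code form, that roll-up resembles a recursive check over a component tree, with each supplier’s claimed capability compared against the system target. The sketch below is deliberately simplified; real assessments exchange safety work products and can apply ASIL decomposition rules rather than a single comparison, and all component names are invented.

```python
# Simplified sketch of the supply-chain ASIL roll-up: the system target
# flows down, and each level checks that its parts' claimed capability
# supports that target. Component names are illustrative.

from dataclasses import dataclass, field

ASIL_ORDER = {"QM": 0, "A": 1, "B": 2, "C": 3, "D": 4}

@dataclass
class Component:
    name: str                 # e.g., "AI accelerator IP", "ADAS SoC"
    claimed_asil: str         # capability the supplier asserts with evidence
    parts: list["Component"] = field(default_factory=list)

def check(component: Component, target: str) -> list[str]:
    """Return a list of gaps where a part can't support the target ASIL."""
    gaps = []
    if ASIL_ORDER[component.claimed_asil] < ASIL_ORDER[target]:
        gaps.append(f"{component.name}: claims {component.claimed_asil}, needs {target}")
    for part in component.parts:
        gaps.extend(check(part, target))
    return gaps

adas = Component("ADAS module", "D", [
    Component("ADAS SoC", "D", [
        Component("AI accelerator IP", "B"),   # the gap surfaces here
        Component("safety island CPU", "D"),
    ]),
])
print(check(adas, "D"))  # -> ['AI accelerator IP: claims B, needs D']
```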

The level of security is likewise determined by the risk, and it can be equally stringent. But with security, that kind of testing may involve multiple systems rather than just focusing on a particular function.

“Security is all about re-verifying along the way,” said Cycuity’s Oberg. “There’s a fundamental limitation in security where it’s not composable, meaning you can’t verify just the hardware, then just the software, and assume that once they are together it’s going to be secure. You actually have to do the hardware, then you have to do the software, and then the third part is them together. With automotive, you have to start at the beginning, make sure the IP is behaving securely, ensure the IP integrated into the system is secure, and ensure the software that’s running on your system is secure. And all of this is interacting together, so as you get into the software domain, that’s why emulation is really important. You have to run your actual firmware, your actual boot image, with the real hardware and ensure that everything that was specified in your TARA, for example, is not being violated now. In a typical semiconductor company, they’re going to do a lot of block-level analysis, and need to make sure things are validated and working there. Then they’re going to build the SoC and maybe have top-level SoC tests, to make sure everything is being validated.”
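One way to picture that non-composability is as a requirements matrix in which every TARA-derived requirement needs separate evidence at every integration level. The structure, requirement text, and evidence names below are purely illustrative.

```python
# Illustrative only: tracking one TARA-derived security requirement
# across the integration levels Oberg lists. Because security is not
# composable, the same requirement needs fresh evidence at each level.

LEVELS = ("block", "soc", "system_with_firmware")

requirements = {
    "SR-001: key material never reaches debug interfaces": {
        "block": "formal analysis on the crypto IP",      # block-level work
        "soc": "bus-tracing tests on the integrated SoC",
        "system_with_firmware": None,                     # emulation run still owed
    },
}

def open_obligations(reqs: dict) -> list[tuple[str, str]]:
    """List (requirement, level) pairs that still lack evidence."""
    return [(req, level)
            for req, evidence in reqs.items()
            for level in LEVELS
            if evidence.get(level) is None]

print(open_obligations(requirements))
# [('SR-001: key material never reaches debug interfaces', 'system_with_firmware')]
```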

System-level concerns
That’s only part of the challenge. “Ultimately, you’re going to run software on that, and you need this consistency across that whole lifecycle,” Oberg said. “That’s where it becomes really important. Where it gets challenging is once the silicon ships and someone’s actually putting their own software on it, then it becomes more fragmented and a little scarier, but that’s just the reality. Companies that are fully vertically integrated, like Tesla on the car side, control a lot of the chip design even though they buy third-party silicon. But they also build their own so they can control that whole stack, just like Apple can with their phones and tablets, and so on. It becomes more challenging as it gets more fragmented.”

There are further considerations with the hardware and software. “The aim of ISO 26262 is to guarantee the absence of unacceptable risks caused by random hardware faults (relevant just for hardware) and systematic faults (relevant for both hardware and software), whereas the aim of AEC-Q100 is to guarantee a minimum level of quality/reliability for hardware components,” Renesas’ Vincelli explained. “As such, for hardware components used to execute AI functions, such as an SoC, ISO 26262 and AEC-Q100 are still fully relevant and applicable. There is no need to change with respect to what was done already for hardware components not based on AI. Then, for software components involved in AI functions, it may not be possible to always apply or comply with ISO 26262, because ISO 26262 was created for traditional deterministic software, developed based on a V cycle, while AI applications have a probabilistic nature and are trained to perform the required functions through examples. Hence, a quite different approach.”

Rather than making AI compliant with ISO 26262, Vincelli believes there is a need to extend it, either by considering an additional set of methods and techniques to address the desired safety properties of AI applications, or by mandating a certain way to develop AI applications that allows them to be reviewed. “Since AI systems are data-dependent, a small but not foreseen change in the environment where the AI application is operating could cause safety issues, because it is not known how the AI application will behave,” Vincelli said. “ISO 26262 could be extended by considering how to deal with the impact and severity of such unforeseeable scenarios that could lead to unacceptable risk.”
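A deliberately crude version of such a mechanism is a runtime monitor that flags inputs unlike anything seen during training and hands control to a conventional fallback, in line with Vincelli’s earlier point about pairing AI with conventional functions. The statistics and threshold below are invented for illustration.

```python
# Crude out-of-distribution monitor: flag inputs whose statistics sit
# far outside what the training data covered, so control can pass to a
# deterministic fallback. Statistics and threshold are invented.

import numpy as np

# Per-channel statistics recorded over the training set (assumed known).
train_mean = np.array([0.45, 0.40, 0.38])
train_std = np.array([0.20, 0.19, 0.21])

def out_of_distribution(frame_means: np.ndarray, threshold: float = 2.0) -> bool:
    """Flag a frame whose channel means sit far outside the training data."""
    z = np.abs(frame_means - train_mean) / train_std
    return bool(np.any(z > threshold))

# E.g., a camera blinded by glare produces abnormally bright channels.
if out_of_distribution(np.array([0.95, 0.91, 0.97])):
    pass  # hand off to the deterministic fallback path
```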

And because AI is a relatively new and fast-growing field, existing standards like ISO 26262 do not yet consider AI technologies. “Other standards, like ISO/PAS 8800, expected to be published in the middle of this year, have taken on the task of providing automotive-specific guidance on the use of AI technologies,” he said. “A possible direction is for the ISO committee to extend ISO 26262 in the next release by incorporating lessons learned with ISO/PAS 8800, potentially with normative requirements as well.”

Further, several additional initiatives, such as The Autonomous, SAE’s Ground Vehicle Artificial Intelligence (GVAI) committee, and SAFEXPLAIN, are forming with the goal of identifying ways to make AI systems safe by creating techniques and methods to develop these AI systems and enable their review.

Conclusion
Specific approaches and methodologies for achieving compliance with automotive safety and security standards are not fully baked when it comes to AI. That will take time, and it will require cooperation among automotive companies, as well as among different teams within those companies.

“You have the functional safety team that understands, ‘I’m going to inject a stuck-at fault. Did it recover properly?'” said Fritz. “Then we have the SOTIF (safety of the intended functionality) team, and that one is a little bit different. Then, what I like to see is a third validation team that is responsible for all of these different system-level scenarios that collect those. And once all the other teams have done their parts, the scenario team says, ‘Okay, I have 10,000 scenarios I’m going to run tonight. All of them are corner cases. You passed them last week. Do you pass them still?’ The point about those is, they are the only ones that are system-wide. Does the system itself behave as it did before, or does it behave correctly, where all the others are very unit-based, segregated, siloed, and have no understanding of what’s happening elsewhere throughout the system or its impact on what you’re doing?”
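In practice, that nightly loop can be as simple as replaying a saved scenario bank and diffing outcomes against the last passing run. The harness below is a hypothetical sketch; run_scenario() stands in for a simulator or digital-twin execution, and the baseline file format is invented.

```python
# Hypothetical sketch of the nightly scenario regression Fritz describes:
# replay a saved bank of corner-case scenarios and diff the outcomes
# against the last passing run.

import json

def run_scenario(scenario_id: str) -> dict:
    """Stand-in: execute one scenario on the virtual vehicle."""
    return {"collision": False, "min_ttc_s": 2.4}  # placeholder outcome

def nightly_regression(scenario_ids: list[str], baseline_path: str) -> list[str]:
    with open(baseline_path) as f:
        baseline = json.load(f)  # outcomes that passed last time
    # "You passed them last week. Do you pass them still?"
    return [sid for sid in scenario_ids
            if run_scenario(sid) != baseline.get(sid)]
```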

Not all OEMs have all three of those teams in place today, and Fritz notes that it’s still a work in progress. “Currently, the Tier Twos that are producing silicon will do ASIL-D testing and say, ‘done.’ Those devices will go to the Tier One supplier and they’ll say, ‘Okay, we got our software going, we did ISO 26262, we are done.’ Then it goes to the OEM and no one knows what’s going to happen when you plug all of these hundreds of pieces together. In fact, the concept of the software-defined vehicle, the concepts of virtualization, digital twins, and all of that, the whole shift-left paradigm, is really all about those processes becoming part of a holistic methodology so that this can all be done not at the end, in what we call the integration storm, but continuously.”

But this requires continuous integration, development, and iteration, and it’s up to the OEM to orchestrate it all. “They’re just not ready,” he said. “Most don’t even understand it. They are having trouble figuring out why, when they had thousands and thousands of hours of testing of their software for their EV, it still doesn’t work. What’s needed is a methodology that comprehends that whole process, from exploring the architectures, tossing out those that stink, what the software team is doing, and how they’re impacting the hardware team. All of that iterates until you get something that works in the end, and it’s all verified against the physical platform. That’s the solution. The automotive world isn’t ready for that just yet, but they’re beginning to at least adopt fads that are pointing in that direction.”

Related Reading
Ensuring Functional Safety For Automotive AI Processors
Creating an AI accelerator that complies with ISO 26262 ASIL-B specifications.
Automotive AI Hardware: A New Breed
What sets automotive apart from the conventional wisdom on AI hardware markets.
Automotive, AI Drive Big Changes In Test
DFT strategies are becoming intertwined with design strategies at the beginning of the design process.


