Pivoting Toward Safety-Critical Verification In Cars

Experts at the Table: Changing the automotive mindset; verification after manufacturing; security updates.


The inclusion of AI chips in automotive and increasingly in avionics has put a spotlight on advanced-node designs that can meet all of the ASIL-D requirements for temperature and stress. How should designers approach this task, particularly when these devices need to last longer than the applications they were designed for? Semiconductor Engineering sat down to discuss these issues with Kurt Shuler, vice president of marketing at Arteris IP; Frank Schirrmeister, senior group director, solutions marketing at Cadence; Ted Miracco, CEO of Cylynt; Dean Drako, CEO of Drako Motors; Michael Haight, director of business management, Micros, Security & Software Business Unit at Maxim Integrated; Neil Hand, director of marketing for digital verification technologies at Mentor, a Siemens Business; Sergio Marchese, technical marketing manager at OneSpin Solutions; Marc Serughetti, senior director, verification group at Synopsys; and Hagai Arbel, CEO of Vtool. Read part one here, and part two here.

SE: How do we change the mindset in the automotive industry to focus more on safety-critical devices and the verification of them?

Haight: It was a very slow-moving industry for decades, but the pace has picked up substantially in the last 5 to 10 years. The increasing focus on and planned migration to electric vehicles, and away from combustion engines, forces completely new architectures from the ground up. The proliferation and availability of electronic content also drive changes in what is possible.

Marchese: Security is an economics race, and you're never going to be fully secure. The first switch in mindset, which is particularly relevant in automotive, is how to continually assess the security of your system. How do you continually monitor new vulnerabilities that are discovered in each and every hardware or software component of your system? And once they are discovered, how do you assess the implications for your system, from the component level up to the system level? Also, how do you distribute responsibility across the supply chain? When you build something new that is going to be at least partly covered by the automotive standard, companies are going to be forced to have incident response plans to manage that responsibility across the supply chain, and system-level tools are going to help do that. New developments, such as hardware weaknesses being introduced into the CWE (Common Weakness Enumeration) database, which previously covered only software, represent a big shift not only at the engineering level in terms of thinking about security, but also at the organizational level and the supply-chain level.
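
To make the point about continuous monitoring concrete, here is a minimal sketch of mapping a bill of materials to newly published weaknesses. The component names, functions, and weakness identifiers are placeholders, not real advisories; an actual flow would pull from the live CWE/CVE feeds and tie into the incident-response process Marchese describes.

```python
# Hypothetical sketch: map a system's bill of materials to newly published
# weaknesses and flag which system-level functions may be affected.
# Component names, functions, and weakness IDs are placeholders only.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    supplier: str
    functions: list                       # system-level functions this component supports
    known_weaknesses: set = field(default_factory=set)

# Simplified bill of materials for a hypothetical zonal controller
bom = [
    Component("noc_interconnect", "VendorA", ["braking", "steering"]),
    Component("lockstep_cpu",     "VendorB", ["braking"]),
    Component("ota_bootloader",   "VendorC", ["updates"]),
]

# A new advisory batch (placeholder IDs, not real CWE entries)
new_advisories = {
    "ota_bootloader":   {"CWE-0001"},
    "noc_interconnect": {"CWE-0002"},
}

def assess(bom, advisories):
    """Return newly reported weaknesses and the system functions they may touch."""
    report = {}
    for comp in bom:
        new = advisories.get(comp.name, set()) - comp.known_weaknesses
        if new:
            report[comp.name] = {"new_weaknesses": sorted(new),
                                 "affected_functions": comp.functions}
            comp.known_weaknesses |= new   # remember what has already been triaged
    return report

if __name__ == "__main__":
    for name, info in assess(bom, new_advisories).items():
        print(f"{name}: {info['new_weaknesses']} -> review {info['affected_functions']}")
```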

Schirrmeister: The teams working on this in the industry are painfully aware of it. The first challenge is the safety requirements, and in the standardization there are still things missing, so in that sense the mindset is great, because people are aware of the safety and security requirements. Will there be a foolproof, by-architecture, 'this can never happen' type of solution jumping out of it at the end? I'm somewhat skeptical. But the mindset is there. People are aware that safety is the first thing on the list for the zonal aspect. It will be incredibly difficult to verify at a system-of-systems level. You don't know what you don't know, and what might happen is you now have several zones coming together, even if they're properly separated by containerization and you can do certification and all these things on those aspects. The good thing is that the mindset and the awareness to address safety and security issues are there. It's not an oversight or an afterthought. It's something that happens throughout the process. That's why, when you look at the tools, one of the first things you have to do is put all these items together in the FMEDA (failure modes, effects, and diagnostic analysis) at the front. You need something to pull it all together so you can address as many of the items that can clash as possible, including ones you're not aware could clash. You can put them together, and then test automation, among other things, plays into this. The desire is there. The mindset is there. Is it technically easy? No. It's really hard to do in any type of architecture.
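
The FMEDA reference can be illustrated with a minimal sketch of the metric roll-up such an analysis feeds. The element names and FIT numbers below are invented for illustration; the SPFM and LFM expressions follow the simplified ISO 26262 definitions, and a real FMEDA would derive failure rates and diagnostic coverage from the actual design and its safety mechanisms.

```python
# Minimal sketch of an FMEDA-style metric roll-up (simplified ISO 26262 view).
# FIT numbers and element names are illustrative, not from a real design.

elements = [
    # (name, total_FIT, single-point/residual FIT, latent multi-point FIT)
    ("cpu_core",     100.0, 1.0, 5.0),
    ("interconnect",  40.0, 0.5, 2.0),
    ("sram",          60.0, 0.8, 3.0),
]

total  = sum(e[1] for e in elements)
spf_rf = sum(e[2] for e in elements)      # single-point + residual faults
latent = sum(e[3] for e in elements)      # latent multi-point faults

spfm = 1.0 - spf_rf / total               # single-point fault metric
lfm  = 1.0 - latent / (total - spf_rf)    # latent fault metric

print(f"SPFM = {spfm:.2%} (ASIL D target >= 99%)")
print(f"LFM  = {lfm:.2%} (ASIL D target >= 90%)")
```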

Serughetti: This is the big thing — the mindset that we’re talking about. Safety, security, etc., are not afterthoughts. They are part of the process that starts from the requirements, through the implementation, through the verification. That is the key aspect of the mindset. After this is the tooling you put around it to make sure that things are done, and that they are best practices. But the other thing is, as we have said, security. We go back to the PC example. You get security updates all the time, so we cannot expect that there’s going to be one thing that’s done, and then we are done. It’s a continuous iterative approach from the idea all the way to the deployment of the product. This is what’s going to challenge the industry when we talk about over-the-air updates. How quickly can you make sure that what you are pushing over the air has what you need in terms of safety or security? What’s good enough? And that’s going to be very challenging, because if you look at avionics, when there’s a big problem they’re going to ground the planes and then they’re going to fix them. I can’t see people saying, ‘Everybody stop using their car today until we fix the problem.’ That is not going to happen, and that’s what’s challenging.
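
A minimal sketch of the kind of gate Serughetti describes for over-the-air content is shown below, using only the Python standard library. The manifest format, version numbers, and payload are hypothetical; a production OTA flow would add asymmetric signatures, a hardware root of trust, and the anti-rollback counters defined by the platform.

```python
# Sketch of two minimal pre-install checks for an OTA update:
#  1. the image hash matches the expected value from a trusted manifest
#  2. the version never rolls back past what is currently installed
# Manifest fields, versions, and the payload are hypothetical.

import hashlib
import hmac

def image_hash_ok(image: bytes, expected_sha256_hex: str) -> bool:
    # constant-time comparison of the computed and expected digests
    return hmac.compare_digest(hashlib.sha256(image).hexdigest(), expected_sha256_hex)

def version_ok(installed: tuple, candidate: tuple) -> bool:
    # monotonic version check: never install something older (anti-rollback)
    return candidate > installed

def allow_install(image: bytes, manifest: dict) -> bool:
    return (image_hash_ok(image, manifest["sha256"])
            and version_ok(manifest["installed_version"], manifest["candidate_version"]))

if __name__ == "__main__":
    payload = b"hypothetical firmware image"
    manifest = {
        "sha256": hashlib.sha256(payload).hexdigest(),  # would come from a signed manifest
        "installed_version": (2, 4, 1),
        "candidate_version": (2, 5, 0),
    }
    print(allow_install(payload, manifest))  # True
```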

SE: It does seem like once the mentality is in place, then it becomes clearer what the technology steps are that need to happen.

Haight: I agree. Many OEMs have announced their plans to be either “all in” or heavily in EV moving forward. Of course, Tesla was only EV from the beginning and came at this from a very different mentality than traditional auto manufacturers. All this is forcing a mentality change.

Hand: I’m going to flip that. There were some interesting things that both Frank and Marc said, and it was that once you have the mentality, you’re thinking about this, and then you go and fix it. I would actually go the opposite way. Frank mentioned some of the FMEDA work that’s going on. There are various analysis tools that a number of vendors are working on, and the idea of those tools is to change people’s mindset. The mindset is not, ‘I know how this works.’ It’s, ‘What do I need to think about? Can I tell you something does what it is supposed to do?’ Then, several years ago, we switched over to functional safety: ‘Can it fail the way it’s supposed to?’ But stretch that timeline out a couple of years and there’s a whole new set of vulnerabilities. The idea is, can we help people, whether they do software or hardware, see what the new things are that they need to worry about? They’re not going to worry about them by themselves. They’re not going to sit there and think, ‘I don’t know what I don’t know, so therefore I’m going to do this.’ Part of the challenge we face as an industry when we look at safety-critical designs is exposing what the new challenges are. Frank was talking about how, when you look at functional safety, you look at some of these other things we know we need to fix, and that’s absolutely true. Go back five years and ask, ‘Do I care?’ The answer would have been no. There would have been 10 people in a company who would have said, ‘Absolutely. My life depends on this, and our customers’ lives depend on it.’ Now, every single ASIC designer in that company would say, ‘Yes.’

Schirrmeister: This is a fluid process. There are always new challenges being added. Is it complete, or is it sufficient to do that? No. It’s like design for test. We have been working on design for test for decades. This is design for safety, and the requirements will change over time. New stuff is piled on all the time, but the desire is there to fix it appropriately. It’s no longer an afterthought. People are building it in, like design for test.

Hand: I agree with you that it’s not an afterthought anymore. People know about this. The interesting thing is that the pace, whether it be vulnerabilities, attack zones, or failure modes, is evolving so much more quickly than anything we’ve dealt with in design for functionality. That really hasn’t changed. You build something, it works. You test it, it works. You test the ways it doesn’t work. The world’s a happy place. When it comes to resilience, you could start a project today, and whether you’re looking at automotive or mil/aero, it’s a multi-year project. The difference is the speed of innovation. What was state-of-the-art when a project started is going to change quickly during it. This eventually will settle down, just like the state-of-the-art of functional verification settled down. Right now, when it comes to safety criticality, whether it be security or functional safety, who knows what the next one will be. And it’s not just automotive. It’s industrial, it’s aerospace, and even data centers are now worrying about this. The pace of change is so much faster than we’ve seen in the past. So you may start a project saying you fully understand the state-of-the-art of vulnerabilities, but you get to the end of it and you say, ‘Oh no, there have been 15 new zero-day vulnerabilities because we’re running four OSes in containers and it’s a mess.’

SE: To wrap up, how far does verification go? Does it stop at manufacturing?

Hand: Verification never stops. It doesn’t stop when you tape out. People keep testing. It doesn’t stop when you go to manufacturing. It’s continual learning, and you feed it back to the next generation.

Marchese: Verification actually increases after the product is deployed, because you get a lot of security researchers and people trying to break your products to find bugs and flaws that the developers didn’t find. So you get a lot more eyes on your product in that sense. What changes is that you then need to, in a sense, keep this flow and use it for your incident response, to assess the impact of a certain vulnerability and to assess the fix. And you need to make sure that the over-the-air update you release is fixing the problem, not creating five more.

Schirrmeister: It’s all one big circle. At the end of the day, the engines that provide you with the data to confirm or deny that you are safe and secure and verified include silicon, the system, and the systems of systems. We can’t stop the car driving and ground a fleet of cars. It’s fixed over time with over-the-air updates, or it’s improved in the next version, and you put workarounds in it. But it will continue. Chip, system, and system of systems — all this data is being linked back. That’s why we are all so eager to find central places to put all the requirements, track them over time, and put the debug data next to it at the end.

Arbel: Verification never stops because it’s the actual definition of the product. Your product is only as functional as it can be verified to be, and now it’s only as safe as you can prove it to be. Things are so complex, and getting even more complex with safety and security, that the ability to define the product really is your ability to verify it and to secure it.

Haight: This is a good question. Earlier in my career, I was a semiconductor IC designer, and my team and I were always asking ourselves that same question. Verification usually takes some multiple of the time of the actual design, and it creates an iteration loop in which fixing one issue sometimes creates one or more unintended issues. If a good enough specification is written, then verification by the book is certainly a starting point. But it is important to have an independent verification team spend time just playing with the system, trying to get it to break under normal usage. Having the designers do verification themselves is dangerous, because they come with their own biases about how they interpret the design spec. At the end of the day, with complex systems like this, there should be a combination of verification by the book and manufacturers doing their own unscripted testing.
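
The "unscripted testing" Haight mentions can be sketched as randomized checking against a small set of invariants. The device under test, torque limit, and invariants below are hypothetical stand-ins for a real specification; the point is only that randomized stimulus exercises corner cases a directed test plan may not list.

```python
# Sketch of "unscripted" randomized testing alongside directed tests.
# The device under test (clamp_torque_request) and its invariants are
# hypothetical stand-ins for a real specification.

import math
import random

TORQUE_LIMIT = 250.0  # hypothetical spec limit (Nm)

def clamp_torque_request(requested: float) -> float:
    """Device under test: clamp a torque request into the allowed range."""
    if math.isnan(requested):
        return 0.0                      # spec: reject malformed input
    return max(-TORQUE_LIMIT, min(TORQUE_LIMIT, requested))

def random_stimulus() -> float:
    """Mix nominal values with corner cases a scripted plan may not list."""
    return random.choice([
        random.uniform(-1000.0, 1000.0),
        float("inf"), float("-inf"), float("nan"), 0.0, TORQUE_LIMIT + 1e-9,
    ])

def check_invariants(requested: float, clamped: float) -> None:
    assert -TORQUE_LIMIT <= clamped <= TORQUE_LIMIT, (requested, clamped)
    assert not math.isnan(clamped), requested

if __name__ == "__main__":
    random.seed(0)                      # reproducible "unscripted" run
    for _ in range(10_000):
        stimulus = random_stimulus()
        check_invariants(stimulus, clamp_torque_request(stimulus))
    print("10,000 randomized checks passed")
```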

Miracco: Stopping verification at the design phase is a hardware-centric mindset. Verification should definitely continue post-manufacturing and into the lifecycle of the product. As defects and vulnerabilities are discovered, they can either be patched with software or factored into the next-generation design cycle. Designs should be completed with the possibility of implementing patches and code updates in mind, as this will also prolong the lifecycle of the product, enhancing value and security.

Related Stories
Why Safety-Critical Verification Is So Difficult
Experts at the Table: Proprietary hardware makes software development more difficult; how to deal with over-the-air updates.
Variables Complicate Safety-Critical Device Verification
Experts at the Table: What’s the best way to approach designs like AI chips for automotive that can stand the test of time?



1 comment

Bert Templeton says:

Silicon content is increasing in ICE vehicles, and the rapid growth of BEV, PHEV, and FCEV vehicles will drive that growth even faster.
The automotive industry has always been concerned about safety and reliability. With 5G and IoT connectivity coming into motor vehicles, safety will also have to include security against random or malicious attacks.
Beyond the design process and security, the reliability and safety of this silicon will have to be tested and proven in final manufacturing. It may result in slightly higher costs, but the benefits far outweigh the potential problems of an electronic system failure in an EV or any type of autonomous vehicle.
