Safety, Security, And Reliability Of AI In Autos

Will people trust vehicles enough to take their hands off the steering wheel?

Experts at the Table: Semiconductor Engineering sat down to talk about security, aging, and safety in automotive AI systems, with Geoff Tate, CEO of Flex Logix; Veerbhan Kheterpal, CEO of Quadric; Steve Teig, CEO of Perceive; and Kurt Busch, CEO of Syntiant. What follows are excerpts of that conversation, which was held in front of a live audience at DesignCon. Part one of this discussion is here. Part two is here.

SE: AI in automotive can be a security risk. How do we detect breaches? Through the chip? The software?

Kheterpal: It’s usually a combination of both. In these systems you have an AI model that can do packet inspection, for example, and detect intrusions. And then you have another layer where a human can decide whether a flag is a false alarm or not. Automotive security is getting a lot of attention. The dollars going into R&D, and eventually deployment, have been increasing rapidly over the past few years. In automotive, there are billions of lines of code that need to be updated every few months to keep up with new software and new models. For that you have to secure the entire system.
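The two-stage pattern Kheterpal describes, with an automated model flagging suspicious packets and a human reviewing each flag, can be sketched minimally. Everything here is hypothetical: the "model" is a toy frequency baseline over CAN-style frame IDs, not a real intrusion detector.

```python
# Hedged sketch of model-flags / human-reviews intrusion detection.
# The detector is a stand-in: it learns which arbitration IDs appear in
# known-good traffic and flags any ID it has rarely or never seen.
from collections import Counter

def learn_baseline(packets):
    """Count how often each arbitration ID appears in known-good traffic."""
    return Counter(pid for pid, _ in packets)

def flag_anomalies(baseline, packets, min_seen=1):
    """Flag packets whose ID was seen fewer than min_seen times in training.
    Each flag would go to a human reviewer, who decides whether it is a
    false alarm or a real intrusion."""
    return [(pid, data) for pid, data in packets if baseline[pid] < min_seen]

# Known-good traffic: repeated frames from two hypothetical IDs.
good = [(0x10, b"\x01"), (0x20, b"\x02")] * 50
baseline = learn_baseline(good)

# Live traffic contains one never-seen ID (0x7F) -- a candidate intrusion.
live = [(0x10, b"\x01"), (0x7F, b"\xff"), (0x20, b"\x02")]
for pid, data in flag_anomalies(baseline, live):
    print(f"review: id=0x{pid:02x} payload={data.hex()}")
```

A production system would use a learned model over payload contents and timing, but the split between automated flagging and human adjudication is the same.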

Busch: Security is a concern. You don’t want anyone to be able to hack the software in a self-driving car. And security is always a moving target. There’s a reason new security companies pop up all the time. To say a new chip is inherently secure and there’s no way to hack it might be true for 24 hours. With any security you have to think about what happens if someone hacks it. What do I do then? I don’t believe anything is 100% secure. Someone will figure out some way to hack it, and then we’ll move on to the next security measure, and the next one. But in automotive it can really affect the user experience, because it only takes a couple of breaches involving self-driving cars to undermine trust. In automotive, security needs to be rock solid.

Teig: I agree. Absent some quantum scheme, which might end up actually being secure, achieving perfect security isn’t likely. But my perspective always has been that there are three levels of attacker you should care about — none, a really bright MIT grad student who can hack into your system successfully, or a government. Stopping the third one is really hard, because there’s a lot of compute horsepower out there. For our chips, everything that goes on the chip is local — all the network models, all the software — and everything is encrypted. So we’re at least taking a strong shot at it. That said, if a government felt like hacking into our chips, they could put a lot of resources into that.

Tate: The data is kind of encrypted anyway, with all the quantization. But hacking an x86 processor is one thing. It’s a very well understood architecture with a ton of documentation, and it’s talking to the outside world. The inference chips we’re talking about are buried inside the car, and every inference chip has a different architecture. These architectures are not openly published. Try going to anybody’s website and learning any level of detail about the internal architecture. It’s far more opaque. So the security concerns that car companies will focus on are x86 and RISC-V processors first, because they’re the most open.

SE: Do we have any idea how these devices will age? The big challenge for AI developers has been getting these chips to just work. What will happen 10 years from now?

Tate: That’s a huge concern for car companies, because different systems require different levels of reliability. A system whose failure can cause people to die requires far higher reliability than the entertainment system in the car. You can have metal-line electromigration that pits the die, especially with things like flash. But there’s some kind of ECC everywhere, and there are additional processors backing up the most critical processors. The front-facing camera is always going to be double-checked by a second processor running a similar, but not quite identical, version of the model, just in case there are bugs. It’s a huge issue, and everything gets less reliable as it ages. You have to design cars to work for 10 years. I’m driving a 20-year-old car right now, because it doesn’t have very much intelligence. But cars are going to be around for years, and you have to engineer these things in a way you don’t have to when you’re dealing with most other customers.
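The redundancy Tate describes, where a second processor runs a similar-but-different version of the model and the system only trusts results both agree on, looks roughly like this. The two "models" here are hypothetical stand-ins (simple brightness thresholds), chosen only to show the cross-check and escalation logic.

```python
# Hedged sketch of dual-model cross-checking for a safety-critical sensor.
# model_a and model_b are deliberately different implementations, so a bug
# or fault in one is unlikely to produce the same wrong answer in both.

def model_a(frame):
    # Primary classifier: hypothetical brightness threshold.
    return "obstacle" if sum(frame) / len(frame) > 100 else "clear"

def model_b(frame):
    # Independent variant with a slightly different threshold, standing in
    # for a "similar, but not quite identical" version of the model.
    return "obstacle" if sum(frame) / len(frame) > 110 else "clear"

def cross_checked(frame):
    a, b = model_a(frame), model_b(frame)
    if a == b:
        return a
    # Disagreement: escalate rather than trust either answer alone,
    # e.g. fall back to a safe state or to other sensors.
    return "disagree"

print(cross_checked([200] * 8))  # both models agree
print(cross_checked([105] * 8))  # models disagree, so escalate
```

In a real vehicle the comparison would run in hardware or a lockstep safety monitor, but the principle is the same: agreement gates the output, and disagreement triggers a fallback.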

Busch: They’re going to age poorly. We’re making cars into computers, and most people aren’t still using tablets or laptops that are 10 years old.

Kheterpal: I was driving a 16-year-old car up until December, when I lost the catalytic converter, but the electronics and digital processors worked fine for 16 years. If you build AI and machine-learning chips that are purely digital, and have them manufactured in the same fab as the other processors and microcontrollers developed there today, the hardware will be fine. But the software, and the ability to run the kinds of models you want to run 10 or 20 years from now, will depend on how general-purpose those AI processors are. Are the convolutions too specific? The matrix multiply will still work. But when you implement chips that move into analog or neuromorphic approaches, the question becomes more relevant, because those things haven’t been proven in the field the way digital chips have.

SE: The x86 architecture has been tested across billions of units, but it still had problems. Now we’re dealing with car companies potentially designing their own highly customized chips in much smaller batches. Does that make it harder?

Tate: The car companies have much higher safety standards and reliability procedures than PC processor vendors. The volume is 100 million cars, and there will not be as many processors as there are in PCs. But it’s still significant volume, and the car folks take the whole safety issue extremely seriously. There is potential liability. If you work with these companies, you’ve got to do all sorts of calculations at different temperatures and build in error correction to reach the higher safety levels. So they do it right.

Busch: Designing for automotive does bind you somewhat, but it also gives you a much more reliable product. It’s just another vector we need to take into account when we’re building these chips. But there also are some things, like really high-performance analog, that are very hard to do.

SE: One last question. Do you feel safer or less safe with AI in a vehicle?

Tate: Safer, as long as it’s driver-assist. I’m not sure I want to go to a fully autonomous car.

Busch: I agree. Having AI in the car definitely makes it safer, as long as I’m still driving the car.

Kheterpal: Safer, for sure.

Teig: Likewise. The sensors don’t get tired and are looking at everything 24/7.

Related
Where And Why AI Makes Sense In Cars (part 1 of above roundtable)
True autonomy won’t happen quickly, but there are a lot of useful features that will be adopted in the interim.
AI’s Impact In Automobiles Remains Uncertain (part 2 of above roundtable)
Software updates, user experience, and reliability all factor into how successful AI will be in cars.


