Advances in AI and machine learning are evolving alongside ADAS development.
Self-driving cars are headed this way, but not for a while. And that’s not a bad thing. As I discuss in my article, “Where Should Auto Sensor Data Be Processed?” there is still much to be worked out just on the technology side, such as how and where to process the significant amount of data coming into the vehicle from the outside world.
Thanks to advancements in AI and machine learning, this technology is already being applied to ADAS development. Kurt Shuler, vice president of marketing at ArterisIP, said that if you look at any of the more advanced ADAS systems, they are moving away from something strictly rules-based toward something that is trained.
Additionally, David Fritz, senior autonomous vehicle SoC leader at Mentor, a Siemens business, pointed out that there will be many different approaches, the vast majority of which will fail. “There’s going to be a sea change in how the different algorithms within the automobile are managed and used. For example, we know that convolutional neural networks, or even deep neural networks, work pretty well for identifying objects and classifying them. It works well for speech recognition. We’re seeing them in Alexa and Siri and all those things. That is a whole classification of AI that fits well for that kind of application. However, inside the vehicle itself, the decision-making AI doesn’t fit well into that. Deciding, based on the context, how much should I brake, or how much should I steer to the left, doesn’t fit well into that kind of AI. So you’re going to see different approaches applied to the different kinds of AI that are going to be needed in the car.”
The problem today, he stressed, is that people are trying to apply convolutional neural networks to decision-making based on conflicting data coming from the sensors. “That’s a problem just about everybody that’s trying to do this is falling into – that it’s just not matching the problem space well. The question there is, ‘How do we determine whether or not a particular approach is working well?’ One way to do this is to put it in a car and start driving it on a road and pay off people you run over. It’s craziness, but that’s how we’re doing it.”
Fritz likens this approach to Monte Carlo simulation or constrained random simulation. “It’s the same,” he asserted. “One of my recent presentations talks about this. It says you could drive the 12 to 15 billion miles and still miss the opportunity to drive over a bridge in San Francisco during an earthquake. If you’re in San Francisco, that’s a big miss because you don’t know how that car is going to behave in that circumstance. The world right now is very heavily biased towards on-road testing, and I think it’s way premature for that. Think about an airplane. They don’t ever fly that plane until it’s been through a very detailed simulation for years. They do the same thing for all kinds of military applications. We do the same thing for smartphones. Why would we not do the same thing for a car? The argument that we get for that is that it has to be very accurate, and if it’s that accurate, then it’s going to be too slow.”
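Fritz’s coverage argument can be made concrete with a little probability. The sketch below is purely illustrative (the per-mile event rate is a made-up number, not from the article): if a rare scenario occurs with probability p on any given mile, the chance that N randomly sampled miles never encounter it is (1 − p)^N, which stays close to 1 whenever p is much smaller than 1/N.

```python
def miss_probability(p_per_mile: float, miles: float) -> float:
    """Probability that `miles` of randomly sampled driving never
    encounters an event occurring with probability p_per_mile per mile.

    This is just (1 - p)^N, the standard independent-trials formula.
    """
    return (1.0 - p_per_mile) ** miles

# Hypothetical numbers: a scenario that occurs once per trillion miles
# (e.g., crossing a particular bridge during an earthquake), tested
# against the 12 billion on-road miles Fritz mentions.
print(miss_probability(1e-12, 12e9))  # ≈ 0.988
```

Even 12 billion random test miles leave roughly a 99% chance of never seeing a once-per-trillion-miles event, which is why Fritz argues that directed simulation, rather than raw mileage, is what builds confidence in corner cases.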
However, companies in the EDA space are developing tools to do this very thing, in various permutations, including Mentor/Siemens, Ansys, Cadence, Synopsys, ArterisIP, OneSpin, FlexLogix and many others.
At the end of the day, yes, we will have fully self-driving cars, but let’s focus on getting the technology right, first.
As luck would have it, we’ll have the opportunity to discuss these and other related issues four weeks from today when I moderate this panel at the DriveWorld Conference. I look forward to digging into the real needs of autonomous chip development with experts from across the industry. Hope to see you there.