True autonomy won’t happen quickly, but there are a lot of useful features that will be adopted in the interim.
Experts at the Table: Semiconductor Engineering sat down to talk about where AI makes sense in automotive and what the main challenges are, with Geoff Tate, CEO of Flex Logix; Veerbhan Kheterpal, CEO of Quadric; Steve Teig, CEO of Perceive; and Kurt Busch, CEO of Syntiant. What follows are excerpts of that conversation, which was held in front of a live audience at DesignCon. Part two of this discussion is here. Part three is here.
SE: Where is AI being used today, and where will it be useful in the future?
Teig: It’s certainly being used today for image processing of various sorts — object recognition, object detection, and segmentation — and increasingly in language processing and audio processing and other modalities. We have ridiculously powerful pattern-matching technology that has applications in almost everything, because pattern matching shows up in almost anything you use the technology for. So there’s still a lot of room for this technology to be applied — at the edge, in the cloud, and everywhere in between.
Kheterpal: I just drove a car that has quite a bit of image processing technology in it, which is AI-based. The chips and software for these vehicles were put in place five years ago, and they’re now in vehicles. That includes assisted braking and lane-keeping. Those features are using active AI. In addition, language models are getting huge, and they are doing more and more tasks that humans typically do. We will see generative AI images become mainstream in the near future. But looking far out on the horizon, it’s hard to predict.
Busch: AI, and specifically deep learning, is the ultimate analog-to-digital converter. People ask, ‘What is it good for?’ It’s good for bridging the real world and the digital world. So investors like to say it’s going everywhere. If you look at where compute has gone, they may be right. When the microprocessor or microcontroller was built, people thought there really weren’t that many places it could be used. Today, there are something like 25 billion microcontrollers shipped every year. They’re in everything. They’re even in greeting cards. AI is going to be in pretty much everything we see. Today, it’s largely confined to things with lots of compute. But over the years, we’ve seen compute move from something that’s plugged into the main power source in a data center to something you can carry in your pocket. That’s what we can expect with AI. It will allow us to classify lots of things — many more things than we, as humans, can classify today.
Tate: For automobiles and industrial applications, one of the big use cases will be preventive maintenance — detecting anomalous patterns of vibrations and noises coming from engines and other equipment and sensing when something’s changed.
Teig: The early and middle phases of AI technology so far have really been focused on trying to duplicate things humans can do in gadgets. The opportunity is to do things humans can’t do. That’s what’s going to be the exciting part, like predicting when a failure is going to occur, or using modalities and styles of driving and interacting with other vehicles that no human possibly can. These are more interesting in some ways than just lane-keeping. It’s being aware of all the traffic on all roads and what choices you might make. That avenue hasn’t been richly explored yet.
SE: This is very different from what the initial predictions were for AI, which is that you would push a button or tell a car where you want to go, and then you would sit back and do something else. In some projections, there isn’t even a steering wheel. Are we anywhere close to that?
Busch: Hopefully not, because I really like driving my car.
Tate: There are a lot of legal issues here. The reality is that if you want to deploy a fully autonomous car, it probably has to be better than the best driver. Your 16-year-old may be the worst driver on the road, but car companies will be held to a much higher standard.
Kheterpal: Full self-driving has always been something that’s coming in the future. It was supposed to be here two years ago, and it’s supposed to be here three years from now. It’s going to take more time. When we say full self-driving, with no steering wheel, it’s going to be at least 10 to 15 years from today, based on the companies we talk with and the kinds of chipsets they want to drive those algorithms. What is coming soon are assisting features, which will create safer vehicles with limited self-driving at Level 3. Those features are being rolled out now.
Teig: There’s good news and bad news. The good news is the technology we already have. Tesla provides enough to enable, more or less, self-driving on the highway under somewhat restricted circumstances. And it’s a lot better — despite its limitations — than what you might have imagined just a few years ago. The bad news is this is very far from general-purpose, all-situations autonomous driving because the AI models we have are so fragile and are based more on folklore than science. Presumably, some team will engineer the heck out of autonomous driving and gather so much data, engineering one use case after another, that by the end of the decade there will be fully autonomous driving. But it will be a large collection of hacks on top of the underlying AI technology, as opposed to a graceful, full-fledged, beautiful scientific solution.
SE: What you’re all hinting at is that AI is not going to take over everything. Instead, it’s more of a set of tools that are useful for almost everybody. Is that right?
Busch: Yes, and AI done right is almost like CGI (computer-generated imagery) in the movies. You almost don’t know it’s there. AI needs to be transparent. It needs to be something that assists you in doing your task without you really knowing it’s there. I have one car from 2014 that does nothing, and one car that’s modern that does a lot of things. In that short period of time there are all these features that help me drive. And in the future, it should be able to do things like detect when I’m impaired or tired. But it should all happen seamlessly.
Teig: There are driver management systems coming up now that detect drowsiness, drunkenness, and inattention. We’ll probably see this more in fleets as opposed to consumer applications, because of the legal exposure of trucking companies. Driver management systems will catch on there first because of the presumed resistance of consumers to that sort of oversight.
SE: How far along is the AI technology that can determine when components or systems are failing in a car?
Tate: We’re not directly involved in that, but in discussions with customers we’ve heard about chips sensing noises, whether it’s equipment, motors — elevator motors, cargo loaders, or washing machines.
Kheterpal: For predictive maintenance, it’s mostly platform companies or AI software companies doing that work. But it’s been challenging to actually ship predictive maintenance technologies because of the time it takes for equipment to fail and the need to prove ROI. So you put in the AI and wait for the equipment to fail. Ramp to revenue for deployments has not been fast.
Busch: We’ve all brought our cars in for service, and the service manager gets in the car, drives around, listens to it, and hopefully identifies the problem. There’s no reason why you can’t do that today with AI. It takes some work, but AI is all about data. It’s about collecting the data, training it, testing it, and doing it again. The technology is there. There is enough processing, there is the ability to collect the data and clean it, and to do the QA process. It just takes work — and not a lot of work.
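As a rough sketch of the pipeline Busch outlines (collect recordings of healthy equipment, model them, then flag new sounds that deviate), the Python below uses simple spectral statistics. The feature choice, thresholds, and all names here are illustrative assumptions, not any vendor's method:

```python
# A minimal sketch of acoustic anomaly detection for predictive maintenance.
# All names and thresholds are illustrative assumptions, not a production design.
import numpy as np

def spectral_features(signal, n_bands=32):
    """Average spectral energy in n_bands log-spaced bands of one audio clip."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    edges = np.logspace(1, np.log10(len(spectrum) - 1), n_bands + 1).astype(int)
    return np.array([spectrum[lo:hi].mean() for lo, hi in zip(edges[:-1], edges[1:])])

def fit_healthy_baseline(healthy_clips):
    """'Training' here is just modeling the distribution of healthy-machine features."""
    feats = np.log1p(np.array([spectral_features(c) for c in healthy_clips]))
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6

def anomaly_score(clip, mean, std):
    """How far, in standard deviations, the clip's spectrum sits from the baseline."""
    feats = np.log1p(spectral_features(clip))
    return float(np.abs((feats - mean) / std).mean())

# Usage: record a known-good motor, fit the baseline, then flag new clips whose
# score exceeds a threshold chosen on held-out test data (the QA step).
rng = np.random.default_rng(0)
healthy = [rng.normal(size=16000) for _ in range(20)]          # stand-in recordings
mean, std = fit_healthy_baseline(healthy)
new_clip = rng.normal(size=16000) + np.sin(np.arange(16000))   # injected tone = "new noise"
print("anomaly score:", anomaly_score(new_clip, mean, std))
```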
Teig: For the first 70 years that we built software, if you wanted it to transform X’s into Y’s, you had to figure out some clever algorithm to do that, and then, given the X’s, it would produce the Y’s. The principal difference with today’s AI is that you can hand it a whole bunch of X’s and Y’s and the algorithm figures out the transformation by itself. It’s still software, but the programming model is different. If you have enough X’s and Y’s, you trust the algorithm, and the data is representative enough, then the technology we have can generate that model. If you don’t, you have a bigger problem than generating the predictive model. Where are you going to get data on how funny sounds correlate with what’s wrong with your car?
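As a toy illustration of that shift, from hand-writing the transformation to recovering it from example (X, Y) pairs, here is a minimal Python sketch; the "true" rule and the noisy data are invented purely for demonstration:

```python
# Classical programming vs. learning the mapping from examples.
import numpy as np

# Classical approach: a human works out the X-to-Y transformation and codes it directly.
def hand_written_rule(x):
    return 3.0 * x + 2.0

# Learning approach: given enough representative (x, y) examples, fit the mapping.
rng = np.random.default_rng(1)
xs = rng.uniform(-10, 10, size=200)
ys = hand_written_rule(xs) + rng.normal(scale=0.1, size=200)   # noisy observations
slope, intercept = np.polyfit(xs, ys, deg=1)                   # recovers roughly 3x + 2

print(f"learned rule: y = {slope:.2f} * x + {intercept:.2f}")
```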
SE: But ideally you want to identify the problem before it makes those sounds, right?
Teig: Yes, and there are consumer applications that can tell you that a pot of water on your stove is going to boil over in two minutes. That’s a pretty useful thing because it allows you to do something else.
SE: Will AI chips in a car learn by themselves, or will all that learning happen somewhere else?
Busch: The learning is going to the edge, and probably sooner than we expected. Learning is another word for training, or basically re-training your model based on experience. We’re going to see that in all types of edge AI devices. But it’s probably going to be minor changes. You don’t want these things to go crazy on you. And then, as we get bolder, maybe the algorithm changes 80% instead of 20%. This could be things like driver assistance. If I’m self-parking, in some cars it learns how close I want to be to the wall, and eventually it changes where it parks by itself. Or maybe you want a car to respond in a certain way. We’ll see a lot of customization. There’s no reason why a processor that can do ADAS can’t train the model. The amount of processing we’re talking about is a lot. Do you need all that? Probably not. But it is available and you can train things on the fly. We already train models on our phones.
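A minimal sketch of that kind of bounded, on-device preference learning might look like the Python below; the learning rate and safety limits are hypothetical values, not drawn from any real parking system:

```python
# The car nudges a parking-distance preference toward observed driver behavior,
# but each update is deliberately small and clamped to safe limits, reflecting
# the "minor changes, don't go crazy" constraint. All values are hypothetical.

SAFE_MIN_CM, SAFE_MAX_CM = 20.0, 80.0   # hard bounds the adaptation can never leave
LEARNING_RATE = 0.1                     # keep each update minor

class ParkingPreference:
    def __init__(self, default_cm=50.0):
        self.target_cm = default_cm

    def update(self, observed_cm):
        """Move the stored preference a small step toward what the driver actually did."""
        proposed = self.target_cm + LEARNING_RATE * (observed_cm - self.target_cm)
        self.target_cm = min(max(proposed, SAFE_MIN_CM), SAFE_MAX_CM)
        return self.target_cm

# Usage: the driver repeatedly parks closer to the wall than the default,
# and the stored target drifts slowly toward that behavior.
pref = ParkingPreference()
for measured in [35.0, 32.0, 30.0, 31.0]:
    pref.update(measured)
print(f"adapted target distance: {pref.target_cm:.1f} cm")
```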
Kheterpal: That’s a good point. There are patterns in other applications where we already do that kind of fine-tuning. You fine-tune models for specific tasks and use cases, and often on devices that are not in hyperscaler data centers. That will happen for automotive applications. It may be when the car is plugged in, doing some fine-tuning. So there are those modalities — how you do it, how you augment it, what data you use. On the flip side, in the real world, we’re still not seeing enough of that. It’s very early days, happening in research labs. But there are use cases and demonstrations of that ability.
Teig: From a technical point of view this is possible. But there are two different kinds of learning, one of which makes sense at the edge and one of which is questionable. The one that makes sense is my phone learning my face. The algorithm for how to transform pixels into faces is untouched, and the database of which faces are supposed to unlock my phone is customized. But if you have the technical capability of having the object at the edge learn, how is that going to be supported? If a car company has 130,000 different programs in 130,000 different cars, they’re going to have an issue supporting that. There are legal issues and support issues. So while this sounds like a nice idea for a car to learn how far I want to be from a wall, that’s very different from learning how to park.
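A sketch of the distinction Teig draws, where the model stays frozen and only the enrolled database changes on the device, could look like the following Python; the embedding function here is a stand-in random projection, not a real face model:

```python
# Edge "learning" as enrollment: the embedding model is frozen, and only the
# on-device database of enrolled examples changes. embed() is a stand-in.
import numpy as np

_rng = np.random.default_rng(42)
_PROJECTION = _rng.normal(size=(128, 1024))      # frozen "model" weights

def embed(image_pixels):
    """Map raw pixels to a fixed-length embedding with the frozen model."""
    v = _PROJECTION @ image_pixels
    return v / np.linalg.norm(v)

class EnrollmentDB:
    """The only thing that learns on the device: a list of enrolled embeddings."""
    def __init__(self, threshold=0.9):
        self.enrolled = []
        self.threshold = threshold

    def enroll(self, image_pixels):
        self.enrolled.append(embed(image_pixels))

    def matches(self, image_pixels):
        query = embed(image_pixels)
        return any(float(query @ ref) > self.threshold for ref in self.enrolled)

# Usage: enroll the owner, then check a slightly noisy capture of the same face
# and a completely different one.
owner = _rng.normal(size=1024)
db = EnrollmentDB()
db.enroll(owner)
print(db.matches(owner + _rng.normal(scale=0.05, size=1024)))   # likely True
print(db.matches(_rng.normal(size=1024)))                       # likely False
```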
Tate: To that same point, safety is the overriding concern in cars. It doesn’t help you if your car has all these features if you don’t arrive at your destination in one piece. So automotive will be the last place where people experiment with things where you’re changing stuff on the fly. Every time engineers change something, the intent is to make it better, but sometimes mistakes are made along the way. A lot of things have to happen when your car hits a bump. With self-learning inferencing chips, there are only so many places where that will make sense. And only when it is totally bulletproof will it go into the safety portion of the car.
Related
AI’s Impact In Automobiles Remains Uncertain (part 2 of above panel discussion)
Software updates, user experience, and reliability all factor into how successful AI will be in cars.
Safety, Security, And Reliability Of AI In Autos (part 3 of above roundtable)
Will people trust vehicles enough to take their hands off the steering wheel?