AI’s Impact In Automobiles Remains Uncertain

Software updates, user experience, and reliability all factor into how successful AI will be in cars.


Experts at the Table: Semiconductor Engineering sat down to talk about software updates in cars, where AI makes sense, and why there’s a growing sense of optimism, with Geoff Tate, CEO of Flex Logix; Veerbhan Kheterpal, CEO of Quadric; Steve Teig, CEO of Perceive; and Kurt Busch, CEO of Syntiant. What follows are excerpts of that conversation, which was held in front of a live audience at DesignCon. To view part one of this discussion, click here. Part three is here.

SE: There is so much software going into new vehicles that we may need the constant updates that we have in our smartphones. What will the impact of that be?

Busch: We’re already there. I was driving in my first electric car and the screen went blank. My first reaction was, ‘What do I do?’ My son told me to turn it off and turn it on again. I hadn’t even considered doing that for a car. Sure enough, I stopped, turned it off and on again, and it started working. So the same things that work with Windows work with automobiles.

Teig: That brings a whole new meaning to ‘blue screen of death.’

Tate: What will happen on the learning side is the carmakers will find ways over time to gather data and aggregate it in a central location, and to use that data to improve reliability. But it will be done with careful controls, where everybody uses the same tools.

SE: We have so many architectures coming into cars, though. We’ve gone from ECUs to domain controllers to zonal architectures, but not everyone is adopting the zonal approach. So now you have highly innovative but inconsistent approaches, and software needs to be written for all of this. What’s the impact?

Kheterpal: This is about religion. I’ve been in meetings where people have fought over architectures and what the automobile needs to look like. Fortunately for AI, that portion of the vehicle is a bit segregated. The AI is a hybrid, generally. It’s the edge within the edge. The chip in the camera in the car is doing some inferencing, for example, and then a central computer is doing more inferencing across multiple sensors. The AI usually tries to stay out of that religious war over architecture. It fits onto whatever the architecture is and becomes a sort of application layer. That’s how we’ve seen different clients architecting their systems.

Busch: I don’t think this problem is any different than the problems we’ve been facing for the last 30 years. There was a time when we said, ‘Okay, processors need to be single-threaded and we’re going to run them faster.’ And then we said we needed to have multi-core processors. And then we said, ‘Okay, we need quality of service for networking, so let’s have higher bandwidth.’ These kinds of battles between multiple architectures are really good for the industry. It allows people to deploy capital and competitive teams to see what comes out best. So maybe we’ll have 10,000 people working on 10 different architectures, and then 15 years from now we’ll have 20,000 people working on one architecture and making it better.

Kheterpal: Automotive is almost like a data center in that the software requirements, the models that you run today, will be completely different from what you might want to run 10 years from now. So the hardware being deployed must be powerful and flexible enough to handle different models and pre- and post-processing tasks and stochastics. Once you ship an architecture that the AI layer and software will run on, you’re done. You’re not changing that for 20 or 25 years.

SE: And there are some huge changes. The early self-driving prototypes had a supercomputer in the trunk. Going forward, though, will we see these kinds of massive changes, or will there be more incremental changes? And will those vary from one location to the next?

Tate: Everyone in this business is trying to catch up with and surpass Tesla, so there’s a pace setter. They want to at least match the high-end features Tesla has, and Tesla probably will continue to innovate, with people continuing to try to follow them and perhaps surpass them. In terms of regional variations, when you sell a product that is going to be driven all around the place under all sorts of different conditions, you don’t have the luxury of developing a car that’s good for New York City and a different car that’s good for Iowa.

Busch: It’s very hard to tell what is going to succeed. But what typically happens is somebody cracks the code at a certain time, where they offer a certain feature set at a certain price point and a certain availability that meets whatever the customer’s requirements are. That succeeds for some period of time. But despite all of the arguments about whether it’s one big central computer or how involved the cloud is in training, it really just boils down to who meets the customer requirements first. If they’re successful, then everyone copies them. You place your bets on who’s going to get there first.

Teig: The architecture that people started with is one that has one central computer that gets all this data from a kajillion sensors and figures it all out in one shot. But the modern car has hundreds of pounds of cabling because of all the data that you’re schlepping from place to place. That’s ridiculous. The biological solution to this kind of problem is to have senses with a tremendous amount of local processing, so the brain isn’t dominated by interconnecting biological systems. So while the religion of the automotive companies is at this place at the moment, the way it should go technically is to have smart sensors all around the car. So they’re digesting the imagery you’re seeing on the left side of the car, and summarizing that data, and doing the same for the right side of the car. Those summaries are then sent to the thing in the middle. That means you’re more likely to be backward-compatible. The fancy system in the middle might change over time — both its hardware and software — but you’re still going to want much of the same information about what’s going on in and around the car. These edge-of-the-car gadgets will still be doing a tremendous amount of pre-processing, even if the central system is still in a time of great evolution.
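The smart-sensor pattern Teig describes can be sketched in a few lines. This is a hypothetical illustration, not any carmaker’s actual design: the `Summary` structure and its fields are invented, and the point is only that each sensor sends a compact digest rather than its raw stream.

```python
# Hedged sketch of the "smart sensor" idea: each sensor digests its own raw
# data locally and sends only a small summary to the central computer.
# The Summary structure and field names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Summary:
    side: str              # which sensor produced this ("left", "right", ...)
    objects_detected: int  # count of objects in this sensor's field of view
    nearest_distance_m: float

def summarize(side: str, raw_distances_m: list) -> Summary:
    """Digest a sensor's raw readings into a compact summary for the center."""
    return Summary(side, len(raw_distances_m), min(raw_distances_m))

# The central computer fuses summaries, never the raw sensor streams.
left = summarize("left", [12.0, 3.5, 40.2])
right = summarize("right", [25.0, 9.1])
fused = min((left, right), key=lambda s: s.nearest_distance_m)
print(f"closest object: {fused.nearest_distance_m} m on the {fused.side}")
```

Because only the summary format crosses the wire, the central system can be swapped out later without touching the sensors, which is the backward-compatibility argument made above.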

Kheterpal: That kind of architecture lends itself well to how the automotive industry designs and ships vehicles with modules that are well-tested, and that are put together to create a system that’s well-tested. They’ve been doing it that way for a long time, and it fits well into this edge-within-the-edge architecture.

SE: One of the differences with AI is that results are measured in probabilities rather than fixed numbers. How do you determine what’s good enough? Is it 80%, or is it 99.999%? And do you care whether the object in the road is probably one particular breed out of 199 breeds of dogs, or do you just care that it’s running in front of your car?

Tate: What you’re trying to do is to make driving safer and better than just leaving it all to the driver. The models for external recognition are constantly getting better. So it’s not about what’s good enough. It’s that having this technology makes the driving experience better and safer than without it. That’s what the carmakers are looking for. They want to get better and better models. Humans aren’t perfect, and the AI models aren’t likely to get perfect. But they are getting better.

Kheterpal: At the end of the day, these models are built by engineers and data scientists. They will have test data from a module, and they can say, ‘This is the outcome.’ And then what generally happens is that after enough testing, based on whatever data that company or group has, one day someone says, ‘That looks good. Ship it.’ You may have minimum cut-offs, where the probabilities are too low and this is unacceptable. But when it starts to meet the bar of all the test data you ever had, beyond which you have no more information today, then you say it’s good enough. Obviously, you have to make sure that if it fails, then something else happens. Maybe the car pulls over to the side and stops, or it uses emergency braking. The fail-safe has to be there. But the model that is shipped is the best of the group’s or company’s ability to verify it.

Teig: The concern about the probabilistic nature of the models is a red herring. Everything physical is probabilistic. Our models in physics and chemistry are probabilistic. So is every measurement. Pure math is pure math, but anything physical is intrinsically probabilistic. Statistics as a discipline enables us to estimate the probabilities of failure as a function of the probabilities of the things that we’re measuring, and so on. At the end of the day, if the legislators or the standards bodies or whoever say the probability that this system will fail must be less than 1 in 100,000, for example, we can do some computations and see whether that’s what’s being built. By comparison, old-school automotive stuff is probabilistic, too, but it’s not rooted in scientific principles.
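The kind of computation Teig alludes to can be illustrated with a toy example. The component failure probabilities below are invented, and the independence assumption is a simplification; real safety analyses model correlated failures:

```python
# Illustrative sketch: estimate a system failure probability from measured
# per-component failure probabilities and check it against a regulatory
# target. The numbers are hypothetical, and independence is assumed.
import math

component_failure_probs = [1e-6, 5e-7, 2e-6]  # invented measured values

# P(system fails) = 1 - P(all components succeed), assuming independence
p_system_fail = 1.0 - math.prod(1.0 - p for p in component_failure_probs)

target = 1.0 / 100_000  # e.g. a "less than 1 in 100,000" requirement
print(f"estimated failure probability: {p_system_fail:.2e}")
print("meets target" if p_system_fail < target else "does not meet target")
```

For small probabilities the product formula reduces to roughly the sum of the component probabilities, which is why individual components must be held to a much tighter bound than the system-level target.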

Busch: We’re talking about AI in a very personal device. People love their cars. It’s something that surrounds us. We drive it, it’s comfortable. But it’s not an engineering problem. It’s not about whether it needs to be 100% or 90%. The real question is whether the AI is something that people use, and whether it creates a user experience that people enjoy. If not, then people won’t use it. If I’m driving over the speed limit in my self-driving car, and it does something to save me that’s super uncomfortable, I’m probably going to turn that feature off. As an industry, we haven’t really spent a lot of time on this. But going forward, this is going to be the next stage. How do I take the math and then effectively leverage the AI program and turn that into a good user experience? Amazon Alexa is a good example of this. With the first Echo, you had to say ‘Alexa’ a whole bunch of times before it started responding. Then it got better. But there also has to be a human in the feedback loop. It’s not just training with data. It’s training with humans, and the humans have to define the end user experience. If we don’t do that, no one’s going to use it.

SE: Nothing is ever perfect in AI. There are always biases in the data. But when machines start training machines, does that have an additive effect?

Teig: Reliability testing is reliability testing, whether you’re testing mechanical stuff or an AI software program. We always can have standards for how reliable the system needs to be, and we take measurements and do statistics. And just as you want to make sure that the physical car that’s being built is unlikely to crash, because it passes the various safety checks on the assembly line, conceptually it’s not any different with a human-constructed AI versus AI constructing AI. At the end of the day, it’s an engineering product.

Kheterpal: But you can imagine the looks you would get if you try to sell an AI system that is going to train the AI that eventually ships in a vehicle, because the bar is very high in automotive for AI training. Automotive always has been much more conservative about what gets adopted than other industries. They’re going to be more careful.

Related
Where And Why AI Makes Sense In Cars (part 1 of above panel)
True autonomy won’t happen quickly, but there are a lot of useful features that will be adopted in the interim.
Safety, Security, And Reliability Of AI In Autos (part 3 of above panel)
Will people trust vehicles enough to take their hands off the steering wheel?