Smaller nodes and chip consolidation are driving automotive players to rethink vehicle architectures.
As chips bound for the automotive world move to small process nodes, including 5nm and below, the automotive ecosystem is wrestling with both scaling issues and challenges related to architecting safety-critical systems using fewer chips.
This may sound counterintuitive, because one of the main reasons automotive chip providers are moving to smaller nodes is to reduce the number of chips in the car. For example, they may go from dozens and dozens of chips doing different functions to just a small handful or maybe even one chip. But that shift needs to be balanced against fail-over requirements in case something goes wrong and a laundry list of other safety requirements.
“It accentuates the normal PPA challenges that they might expect,” said Stewart Williams, senior technical marketing manager at Synopsys. “And in order to satisfy the automotive requirements, they also have to balance that against the ISO 26262 requirements.”
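The fail-over requirements mentioned above often come down to redundancy. A minimal sketch of one common pattern, 2-out-of-3 majority voting across redundant channels, purely illustrative and not any vendor's ISO 26262 implementation:

```python
from collections import Counter

def vote(readings):
    """2-out-of-3 majority vote over redundant channel outputs.

    Returns the agreed value, or None when no majority exists,
    a fault the surrounding system must handle (e.g. fail to a safe state).
    """
    value, count = Counter(readings).most_common(1)[0]
    return value if count >= 2 else None

print(vote([42, 42, 41]))  # 42: one faulty channel is outvoted
print(vote([1, 2, 3]))     # None: no agreement, escalate to safe state
```

Consolidating dozens of chips into one means this kind of redundancy has to be recreated on-die or across fewer packages, which is part of the PPA tension Williams describes.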
The chips themselves aren’t simple, either, often involving a mix of processing elements, memories, complex power management schemes, and I/Os.
“There are accelerators, GPUs, CPUs, and large amounts of memory to augment the CPUs to handle these large, complex operations that need to happen. Many things happening in parallel as well,” Williams said.
These changes are impacting the automotive ecosystem from both business and technology perspectives.
The current state of the industry can be thought of as a parfait, suggested Kurt Shuler, vice president of marketing at Arteris IP. “The semiconductor vendors are the yogurt at the bottom. Above that is the fruit, which are the Tier 1s. And then at the top is the whipped cream and sprinkles. Those are the OEMs. The OEMs are at the top because that’s what everybody sees on the top of this parfait, but there are many other layers. What’s happening now is that because nobody wants to be the Foxconn of cars — they don’t want to be the company that just stamps the metal but doesn’t own the architecture — the parfait is getting stirred around.”
Technology-wise, there are two key trends — electrification and autonomy. Semiconductor vendors are trying to do more system-level work, while EDA companies are starting to integrate some of their tools and IP, so they all work together, Shuler said. “For the Tier 1s this means, just like the hyperscaler companies like the Googles, the Facebooks, the Amazons, and the Microsofts, they are now designing their own chips. That means they’re competing below and they’re competing above. ‘Mr. OEM, we can take care of all of this for you. You just make the plastics. You don’t need to know how all this stuff works.’ And the OEMs are now saying, ‘Hey, wait a minute, this is our brand, this is our car. We need to start hiring chip people too.’ Within the car itself, everybody is clashing from a business and technical standpoint,” Shuler said.
There are other potential conflicts and challenges to go along with this, such as what to do when data comes into a car, where and how that data should be processed, and who ultimately owns the data. Today, there is no agreed-upon solution to these issues.
“Different companies are at different stages in the automotive sector,” said Megha Daga, director of product management for AI inferencing at the edge in the Tensilica group at Cadence. “One of the biggest issues I see is at the very core level, what bit precision is acceptable. We have heard from several companies that say they still need to go with floating point in certain applications, because they’re not comfortable. However, there is a huge sector that has evolved and understands, from a reality perspective, that power and efficiency and energy are critical, so we have to go with an embedded platform. But this is happening slowly. While we highly advocate 8-bit, some developers want to go with 16-bit processing because they understand it. It’s like a midway point for them and they will get all the benefits from a power-efficiency perspective.”
At the same time, from a throughput perspective, they will be able to achieve much better results with an inference chip compared to a floating point device, she said. “The market has to evolve to where they can understand what applications can fit into what kind of precision markets. If I’m doing a pedestrian detection application, I certainly have a very critical accuracy target. At the same time, if I’m doing some kind of an infotainment kind of platform, then I have different accuracy targets.”
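The precision trade-off Daga describes can be made concrete with a toy example. A minimal sketch of symmetric int8 quantization, the kind of scheme behind 8-bit inference (illustrative only, not Cadence's implementation):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric linear quantization of a float tensor to int8."""
    scale = np.abs(x).max() / 127.0  # map the largest magnitude to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Illustrative weights: quantize, reconstruct, and measure the error.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.5, size=1000).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(f"max reconstruction error: {err:.4f}")  # bounded by scale / 2
```

The quantized tensor uses a quarter of the memory and bandwidth of float32, which is why 8-bit is advocated where the accuracy target allows it; 16-bit halves that saving but leaves more error margin, the "midway point" described above.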
There are no standardized approaches for how to do this. “Each one has been evolving,” according to Daga. “In fact, we can say that even in the same region, such as in the European region, different OEMs, based on their research, are comfortable with different approaches from a system design perspective. Latency is one thing, bandwidth is another. Then there’s how much memory you’re okay to give, because it’s all about the data movement and the impact on power. There is a huge industry that’s still very much all about LPDDRs. And there are some who have pushed themselves to HBM, so it’s an evolving sector, and it’s going to be a long time for some kind of standardization to come through.”
That doesn’t mean the industry is standing still, though. “Processing data coming into the automobile is changing in a predictable way,” said David Fritz, senior autonomous vehicle SoC leader at Mentor, a Siemens Business. “Early on, it was about capturing many terabytes of data, trying to normalize that and process that data externally while trying to find some way to label the data so that you can use it in a training exercise. Companies like Microsoft Azure, AWS, Cisco and others are wrapping around that whole methodology. The issue with that is the training can only happen based on data somebody else collected somewhere, and there’s no good way to know, no metric of saying how that actually represents the real world.”
The big change is that a lot more of the central processing in the car is shifting from using raw data to working off objects that sensors indicate are there. “Think of it as vehicle edge computing,” said Fritz. “It’s happening a lot. We first saw this approximately 18 to 24 months ago, but the ability to get away from FPGAs and to get away from GPUs means that now, with an off-the-shelf Arm processor, you can do a lot of processing and it is cost effective. The cameras can do a lot of pre-processing, the lidar can do some processing of its 3D point clouds, and that’s part of the lidar package now, and part of the radar package. Instead of saying, ‘Here’s the raw data, you figure it out,’ it’s now translated into objects and vectors. Things like that are much easier to work with and have lower compute complexity for central computing.”
It wasn’t that long ago that discussions were centered around how much processing would happen in the cloud.
“At CES this year, I had more than one discussion saying how silly that idea was because the latency of communicating with the brakes and steering is already incredibly tight,” Fritz said. “And then you want to throw that into the cloud? There are all kinds of reasons why that wouldn’t work. There are a lot of people out there who are making statements like that. Maybe they don’t have the engineering background behind it. But it sends people off into dead ends that hurt a lot more than they help.”
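Fritz's latency point can be illustrated with simple arithmetic: the distance a vehicle covers while waiting on its control loop. The latency figures below are assumptions for illustration, not measured numbers:

```python
def distance_traveled(speed_kmh: float, latency_ms: float) -> float:
    """Meters traveled while waiting out a control-loop latency."""
    return speed_kmh / 3.6 * latency_ms / 1000.0

# Assumed latency budgets, for illustration only:
local_ms = 10     # on-vehicle perception-to-actuation loop
cloud_ms = 100    # optimistic cellular round trip plus cloud compute

for label, ms in [("local", local_ms), ("cloud", cloud_ms)]:
    d = distance_traveled(120, ms)  # highway speed, 120 km/h
    print(f"{label}: {ms} ms -> {d:.2f} m traveled before actuation")
```

At 120 km/h, even an optimistic 100 ms cloud round trip means the car travels more than three meters before a braking or steering decision arrives, which is why safety-critical control stays on the vehicle.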
Questions around data
On top of that, the issue of data ownership is still unresolved.
“Think about how much data your cell phone creates, as well as all of the security breaches that have happened,” said Arteris IP’s Shuler. “The car has a whole bunch of information just like that cell phone, and there’s a fight over who owns that info. You’re creating this AI-enabled, omniscient set of data that the OEMs want to own, the Bosches want to own, the Mobileyes want to own, the chip vendors want to own. The Googles and the Facebooks say if you give them this equation, they can provide all these great services. The legal system and the governmental systems don’t know how to deal with this. So it’s not just a fight over the architecture of this stuff internally, so that I don’t get branded as a metal bender. It’s also creating all this data, and raising questions about how to make money off of it. And that’s where the real value in all this stuff is — in that data.”
Who is, and who should be in charge of determining who owns the data?
“That’s the hard thing. You look at a company like Mobileye, they have an interesting business model. With their systems they own or have that data, but they can license it back to the OEM or Tier 1 that uses it. For the consumer who’s driving the car, I don’t know what control they have over that information,” he said. “There’s also a fight over encryption stuff. There’s a fight over the availability stuff.”
Encryption is interesting because that becomes a philosophical question. Everyone wants to monetize this car information. Governments want it for planning to prevent accidents and improve traffic flow. Businesses want it to maximize visibility for marketing purposes.
Autonomous downshift
And behind all of this, even though the manic sprint toward full autonomy has cooled for now into a more practical focus on driver-assist features, automotive ecosystem players are still grappling with how to approach Level 5 autonomy.
“The complexity of autonomous driving is huge,” said Geoff Tate, CEO of Flex Logix. “Our company is in Mountain View, Calif., so we see the Google Waymo cars going around all the time. They’ve been driving around in circles here since we started the company and they still don’t have an autonomous driving solution. But they have hundreds of cars. So if they haven’t figured it out yet, I don’t know how everybody who’s just starting is going to figure it out. The complexity is enormous.”
That’s a good reason to slow the development of autonomous vehicles, from a practical point of view, he said. “Suppose I told you I’ve come up with an autonomous vehicle, and it’s totally safe. How do you prove that? What politician is going to say, ‘Okay, fine, go ahead and build millions of them and start selling them?’ And the ones driving around here, there’s always a driver, and that’s not economically feasible. Why buy an autonomous vehicle if you need to have a chauffeur? That’s not going to help you much.”
Further, there are not a lot of companies that can afford to build chips for the car companies because the cycles are so long. “It’s not a space for a startup,” he said. “If you’re going to go to your board and tell them, ‘I’m going to develop something that is going to take six years to revenue,’ that’ll be a tough sell. So the players in the space now with commercial off-the-shelf solutions are companies like Intel’s Mobileye, Nvidia with Xavier, and NXP with its big automotive business, including complex microprocessors and microcontrollers. Those are the main players in the space, and I suspect that it’s much more likely that they will integrate increasing amounts of the right stuff for the car companies and be the suppliers for the solutions, rather than ASICs — at least in the next couple of years.”
Related
Automotive Knowledge Center
Special reports, videos, top stories, white papers, and more on Automotive
More Data, More Problems In Automotive
Data is becoming more useful and timely, but not everyone has access to it.
5 Major Shifts In Automotive
How new technology developments will change the trajectory of the automotive industry.
Uses, Limits And Questions For FPGAs In Autos
Where it works, where it doesn’t, and where the choices get fuzzy.