Improving Performance And Lowering Power In Automotive

The top-line goals are clear enough, but achieving them will require big changes in design.

Automotive OEMs are boosting their investments across the semiconductor ecosystem as stepping stones toward electrification and autonomy, and they are starting to encounter some of the same issues chipmakers have been wrestling with at advanced nodes — massive compute performance, thermal and power issues, reliability over extended lifetimes, and a highly diverse and geographically distributed supply chain.

And that’s just for starters. Every component in automotive design needs to be integrated, architected, and ultimately modeled and manufactured in the context of a complex and continually evolving system of systems. Autonomous vehicles have been described as supercomputers or data centers on wheels, but that’s merely a piece of what will be needed to safely drive, and be autonomously driven, in a vehicle in the future.

“If you think of the supercomputer, we have these big rooms where everything looks the same,” said Christian Pacha, senior director of platform architecture for automotive microcontrollers at Infineon Technologies, during a panel discussion at DAC. “What we have here [in automotive] is a very heterogeneous supercomputer. We have very different functions. We have different timing criticalities, different safety criticalities, different reliability. And while that may sound easy, at the end it’s quite complicated, and that’s a consequence of divide-and-conquer. If we just talk about the system for the electric driving function, the requirements are totally different compared to what it’s doing in the direction of autonomous driving. So while we’re used to hearing these phrases, we have to be more specific. The car will be a smartphone. The car will be a data center on wheels. It’s an aggregation of different techniques.”

This aggregation has an impact on the vehicle system as a whole. The workload may be very different for assisted driving versus autonomous driving. “Every vehicle collects petabytes of data when driving eight hours a day,” said Nima Pour Nejatian, global head of solutions architecture and engineering for automotive at NVIDIA. “Think about it. You’ve got to process these petabytes of data. Can you do it on your laptop? If you call a laptop just a computer, and not a supercomputer, that’s just not possible. On top of that is graphical content that has to be processed, along with other applications like infotainment and processing speech. These many applications demand compute, networking, and storage.”

To this point, NVIDIA last year introduced its Drive Thor SoC, which has a capacity of 2 petaflops, plus engines for processing ChatGPT-like workloads for conversations. In comparison, the capacity of a typical high-end laptop is measured in gigaflops.
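A rough back-of-the-envelope calculation shows the scale of those figures. The sketch below assumes an illustrative laptop-class figure of 100 gigaflops and 1 petabyte collected over an eight-hour driving day; these are assumptions for the arithmetic, not measured values from the article.

```python
# Back-of-the-envelope comparison of in-vehicle vs. laptop-class compute,
# and the sustained data rate implied by "petabytes per eight-hour day".
# The laptop figure and the 1 PB/day figure are illustrative assumptions.

THOR_FLOPS = 2e15          # Drive Thor: 2 petaflops (per the article)
LAPTOP_FLOPS = 100e9       # assumed laptop-class compute, ~100 gigaflops

ratio = THOR_FLOPS / LAPTOP_FLOPS       # how many "laptops" of compute

BYTES_COLLECTED = 1e15     # assume 1 petabyte collected per driving day
SECONDS = 8 * 3600         # eight hours of driving
rate_gbs = BYTES_COLLECTED / SECONDS / 1e9   # sustained rate in GB/s

print(f"compute ratio: {ratio:,.0f}x")        # 20,000x
print(f"sustained data rate: {rate_gbs:.1f} GB/s")
```

Even under these conservative assumptions, the sustained ingest rate lands in the tens of gigabytes per second, which is the bandwidth argument the memory discussion below turns on.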

“When you think about a vehicle 10 or 15 years ago, what it’s capable of doing compared to what our vehicles are capable of doing today is overwhelming, even with just the parallel parking assist,” said Judy Curran, chief technologist, automotive at Ansys. “If you are trying to teach your 16-year-old how to parallel park, that’s a simple thing to do now compared to ADAS Level 3, where you can take some of your attention off the road and the vehicle drives. We’ve come quite a way in ADAS, and we’ve come quite a way as an industry in the whole infotainment system, with regard to the communication you’re able to do while you’re in the vehicle. All of that takes compute power and software capabilities.”

HBM in automotive?
Along with compute power and software is a need for high-speed, low-latency memory, especially for AI/ML in vehicles. Brett Murdock, product line director for memory interface IP solutions at Synopsys, said automotive and HBM are truly meant for one another, despite a long list of issues that still need to be resolved.

“The thermal challenge is one that crops up immediately, as well as the mechanical challenge of ensuring the interposer in an automotive environment can handle the mechanical stresses,” Murdock said. “Both of those things are solvable. And then it also depends on exactly what you’re using the HBM for, what kind of tolerance you might have for errors, and other issues that could come up as a result of the thermals. If you’re using it just for infotainment, no big deal. If it’s part of your engine controller, now you’ve got a different safety level that you need to pay attention to.”

Still, Murdock recognizes that the automotive manufacturers want HBM. “That’s no real secret, since the bandwidth in the automotive environment is simply exploding,” he said. “If we look at the number of cameras, and the amount of image processing that needs to occur in a self-driving vehicle, trying to satisfy that with LPDDR5x, which is the de facto memory used today in the automotive environment, is a very large challenge. The number of channels that are required is tremendous, so HBM is absolutely something that the automotive manufacturers want to use, and it’s just a matter of time until they start actually using it. It may be a case of somebody takes it, does a risk assessment, and sees that they can suffer some of the consequences or can mitigate some of the problems in exchange for the benefit they’ll get out of HBM. It wouldn’t surprise me if by the end of this decade we are riding around in cars that have HBM devices in some of the safety-critical systems.”
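Murdock’s channel-count argument can be made concrete with peak per-spec interface rates. The aggregate bandwidth target below (500 GB/s for a high-end ADAS SoC) is an assumed illustrative number, not a figure from the article; the per-pin rates are the JEDEC peak rates for LPDDR5X and HBM3.

```python
import math

# Rough channel-count comparison for a high-bandwidth automotive SoC.
# The 500 GB/s aggregate target is an illustrative assumption.

TARGET_GBS = 500.0                       # assumed SoC bandwidth target, GB/s

# LPDDR5X: up to 8.533 Gb/s per pin on a 16-bit channel
lpddr5x_channel_gbs = 8.533 * 16 / 8     # ~17.1 GB/s per channel

# HBM3: 6.4 Gb/s per pin on a 1024-bit stack interface
hbm3_stack_gbs = 6.4 * 1024 / 8          # ~819 GB/s per stack

print(math.ceil(TARGET_GBS / lpddr5x_channel_gbs))  # LPDDR5X channels: 30
print(math.ceil(TARGET_GBS / hbm3_stack_gbs))       # HBM3 stacks: 1
```

Thirty LPDDR5X channels versus a single HBM stack is the "tremendous number of channels" trade-off Murdock describes, and why HBM looks attractive once its thermal, mechanical, and cost issues are resolved.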

However, for that to happen, costs need to come down, and reliability needs to go up. “It’s not only too expensive, it’s also the TSV reliability,” said Frank Ferro, senior director of product marketing at Rambus. “There are companies kicking around the idea of whether they can use HBM because of the big AI systems in the car, for which you need lots of bandwidth. I know there’s work being done in that area, but I don’t see it anytime soon. Right now, GDDR and LPDDR from a bandwidth standpoint hit the right balance of power and performance in the car, and I don’t see that stopping anytime soon. The LPDDR industry did a good job in terms of supporting automotive reliability requirements. Similarly, on the SoC side, the process nodes are also doing automotive. LPDDR5 and 5x, and eventually LPDDR6, are getting fast, and approaching 10 gigabits.”

Ferro noted that GDDR will add another option for automotive, as well, once it addresses the reliability requirements for automotive. “Those are pretty good sweet spots,” he said. “The main areas of memory must be able to deal with the AI inference that’s happening. Is it a person or is it a street sign? Then, you’ve got the audio/entertainment systems. Infotainment is okay with LPDDR or even GDDR, but specifically for ADAS it seems like GDDR speeds are needed. Of course, the automotive industry would like more memory, but they’re okay for now.”

Compute, memory, and software all fit under the umbrella of the vehicle architecture, which is evolving to also include the sensor fusion for autonomous driving. David Fritz, vice president of hybrid-physical and virtual systems, automotive and mil-aero at Siemens Digital Industries Software, said there are three primary ways to look at the sensor fusion challenge.

“One approach is to fuse the raw data from multiple sensing sources before processing,” Fritz said. “While this approach can reduce power consumption, bad data from one sensor array can contaminate good data from other sensors, causing poor results. In addition, the transmission of huge amounts of raw data poses other challenges with bandwidth, latency, and system cost. The second approach is considered object fusion, where each sensor processes its data and represents its sensor-specific processing results as an interpretation of what it detects. This has the advantage of seamlessly integrating results from onboard sensors, infrastructure sensors, and those on other vehicles. The challenge of this method is a universal representation and tagging of objects so they can be shared across disparate vehicles and infrastructures.”

The third option, and the one Fritz finds most compelling from the power, bandwidth and cost perspective, is a hybrid of the first two approaches. “In this method, objects are detected by the sensors themselves, but not classified. In this case, point clouds of an object are transmitted to onboard central compute systems that classify (tag) point clouds from different sensors, both internal and external. This significantly reduces bandwidth and latency requirements, keeps the cost and load on the sensors low, and allows the vehicle to interpret or classify the objects in any way it likes, thereby eliminating the need for a universal classification standard.”
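The hybrid flow Fritz describes can be sketched in a few lines: sensors segment objects into point clouds but do not label them, and a central compute node does the classification (tagging). This is a minimal illustration only; every name here is hypothetical, and the toy classifier stands in for a real perception model.

```python
from dataclasses import dataclass

# Minimal sketch of the hybrid fusion approach: sensors ship unclassified,
# per-object point clouds; classification happens centrally, so no
# universal cross-vendor tagging standard is needed.

@dataclass
class PointCloud:
    sensor_id: str
    points: list          # (x, y, z) tuples, already segmented per object

def classify(cloud: PointCloud) -> str:
    """Stand-in classifier: tag by the crude extent of the cloud."""
    xs = [p[0] for p in cloud.points]
    return "vehicle" if max(xs) - min(xs) > 2.0 else "pedestrian"

def central_fusion(clouds: list) -> list:
    # The central node tags clouds from internal and external sensors
    # alike, interpreting the objects however the vehicle chooses.
    return [(c.sensor_id, classify(c)) for c in clouds]

clouds = [
    PointCloud("front_lidar", [(0.0, 0, 0), (4.5, 0, 0)]),
    PointCloud("corner_radar", [(1.0, 2, 0), (1.6, 2, 0)]),
]
print(central_fusion(clouds))
# [('front_lidar', 'vehicle'), ('corner_radar', 'pedestrian')]
```

The point of the structure, per Fritz, is that only the point-cloud format has to be shared between vehicles and infrastructure, while the bandwidth-heavy raw data stays at the sensor and the semantics stay with the vehicle.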

Until they settle on one approach, OEMs will likely grow these systems incrementally, he said. “They’ll just keep adding ECUs little bits at a time, and say, ‘We think we can still sell Level 3 for another five years because we want to kick that can down the road.’ You can see the commercials on television, they’re stepping up and saying [Level 3] is an amazing thing, and really it’s the same lane-keeping that we had five years ago, but they think it’s a new whiz bang thing so they’re delaying it. I don’t see a lot of people getting serious about a solution just yet. People ask all the time, ‘When we are going to have autonomous vehicles?’ I tell them whatever they hear, double it. You think it’s five years? It’s at least 10. Don’t wait for it, go buy a car now if you need it, because it’s not going to change that drastically overnight.”

Still, there has been a continuous stream of changes in the ecosystem. “OEMs, including startups and EV-based OEMs, look at vertical integration and where they drive the entire system architecture,” said Vidya Rajagopalan, vice president of engineering for hardware at Rivian. “They don’t look at it as buying components from the different Tier Ones necessarily, but think about a clean sheet of paper and what their architecture looks like, which really drives everything. We talk about software-defined vehicles, but that’s not possible if you haven’t built your architecture from the ground up to really get a clean-sheet-of-paper architecture.”

Another change is that OEMs are significantly more involved in the entire design process than in the past.

“It’s no longer that they send an RFQ and expect Tier Ones to come up with the right solution,” said Aravind Ramakrishnan, head of partnership and innovation for the West Coast at Vitesco Technologies. “It’s a collaborative approach right from day one, where Tier Ones, OEMs, and even chip suppliers work hand-in-hand in developing the right solutions, whether it’s vertically integrated or outsourced products. We have noticed that from the OEM perspective. We’ve also seen OEMs don’t just collaborate with the Tier Ones, but also go all the way to the chip suppliers. OEMs tell us they want us to use this chip supplier versus that one from a supply chain perspective, for resilience on that front. But it’s also for efficiency because, for example, power electronics speak to the better efficiency, improvement in range, improvement in performance. These are all very strategic things one EV OEM wants to put in place versus another.”

In addition, OEMs are requesting more scalability in technology solutions, which translates into additional opportunities. “In certain products that we offer, we have traditional OEMs coming to us saying, for example, ‘We want you to offer three-in-one solutions that are integrated into e-drives,’” Ramakrishnan said. “Other OEMs just want to purchase an inverter and don’t really care about the e-drive. They have a different strategy for it in terms of supply chain, or they in-source it. We need to be flexible to offer a standalone inverter and power modules. Some OEMs say, ‘We don’t want you to give us an inverter. We can take care of that in terms of integration, but we are really interested in the power modules that would go into the inverter.’ So that kind of flexibility in the business model approach has certainly been at the forefront in the last couple of years.”

Others cite similar experiences. “There are two extreme development models,” said Infineon’s Pacha. “The newer one is where everybody talks to everybody and cooperates. However, because we are in transition, we still have in some areas the classical waterfall, where I’m talking to a Tier One. The Tier One has knowledge of the ECU, and then it goes down to the semiconductor, and everybody has a local ecosystem of smaller software/hardware companies. That still exists, and it makes the transition phase difficult. How do you manage this? We have the old world, and there is a need to serve that. But then how do we adapt to the newer models? That’s beyond the pure technical challenges.”

One area that will help is the development of models based on industry standards, which can help shift some of the design work left in the flow, in addition to enabling software-defined vehicles. From a supplier perspective, there is a need to get as much reuse out of the designs as possible.

“If I can get this model out earlier, share it with OEMs, and then be part of the system development early on, it provides us a chance to get the business, and also get the feedback on how to make this component common across multiple OEMs,” said Ansys’ Curran. “Being in the virtual environment is key to part of that changing.”

Randy Fish, director, product line management, hardware analytics and test at Synopsys, agreed. “As we make the comparison with data centers — which everyone likes to do, referring to a data center in the vehicle — one of the things we all forget is that in data centers, people more or less went to Linux and a common platform software-wise. So with the software-defined vehicles and services-oriented architectures being discussed right now, there’s still not a common software stack out there, even though there are a number of really compelling and interesting committees and standards efforts trying to drive one. But if the OEMs and Tier Ones, or whoever needs to be involved, were to consolidate, I don’t think anybody really has a vested interest in being the winner with one or the other. And having a common platform across the industry would enable the acceleration of the software development.”

Conclusion
At the end of the day, truly realizing the promise of electrification, software-defined vehicles, and autonomy requires approaching the entire vehicle development from a higher level of abstraction to handle all the complexity.

Roland Jancke, head of department design methodology at Fraunhofer IIS’ Engineering of Adaptive Systems Division, said this could get too complex to handle. “But you need to focus on the question that you’re posing to a system or to a model — and then decide where to put your focus, which level of detail you need, and which other components you need at a more abstract level, just in order to fit this specific part that you want to look at.”

Models will help, but only if the models are credible enough. “How can we trust these models?” asked Jancke. “This is important for the next level of autonomous driving. They say they need to have 1 million kilometers on the road. But they cannot afford to have them all on the road, so, only 10% are on the road, which means 90% are virtual. How do we trust these models and the results of that investigation if it’s only virtual?”

If the industry can manage to modularize and standardize some of that, then the burden of designing the models can be spread across a lot of different players.

“These interfaces are a way of standardizing connection of different levels together, or different models together,” Jancke added. “This is a way of formalizing and standardizing, which is a good idea in order to make these models exchangeable for different levels. If I have a behavioral model of a motor, and I exchange it against a finite element model, because I want to investigate the mechanics instead of the behaviors, I can exchange at the right time and at the same interface. Standardization is always key to exchanges between different vendors.”


