Making the right architectural tradeoffs while still keeping costs under control is becoming more difficult.
Rising complexity in automobiles is creating huge challenges around how to add more safety and comfort features and electronics into vehicles without reducing their overall driving range or pricing them so high that only the wealthy can afford them.
The current focus is on modeling hardware and software to understand interactions between systems, but even that is difficult. It requires all the tools available today, as well as an understanding of how these vehicles will evolve over time and interact with their surroundings and with each other. And all of this needs to be done within conventional parameters such as reliability, safety, and cost.
Cost is of particular concern for automakers, and efforts are underway to rein in the cost of parts and systems as more electronics are added into vehicles in order to make new features affordable to the majority of car buyers. But reducing those costs enough to bring them in line with what consumers will pay is daunting, particularly when it comes to the electronics. It requires reducing or eliminating redundancy, and integrating complex chips and software to improve efficiency.
“The automotive industry is always looking to consolidate functions, primarily to save,” according to Vasanth Waran, senior director of business development for automotive at Synopsys. “Automotive is a cost-driven industry. The volume numbers are huge, so anytime you’re able to consolidate and save even $1 over the life of a car, it would amount to anywhere between $10 million and $25 million in savings for the automaker, based on the volumes they run. Consolidation brings cost savings, which becomes even more important as more electronics get added to a car, and more challenging as wiring costs go up significantly.”
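The arithmetic behind that claim is simple to check. Below is a minimal sketch of the calculation, where the production volume and program length are illustrative assumptions rather than any automaker’s actual figures:

```python
# Back-of-the-envelope fleet savings from consolidating one function.
# The volume and program-length figures are illustrative assumptions.

def fleet_savings(per_vehicle_savings_usd: float, annual_volume: int, years: int) -> float:
    """Total savings from a per-vehicle cost reduction across a production run."""
    return per_vehicle_savings_usd * annual_volume * years

# Saving $1 per car on a platform shipping ~2.5M units/year over a 5-year program:
print(f"${fleet_savings(1.0, 2_500_000, 5):,.0f}")  # -> $12,500,000, within the range Waran cites
```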
Carmakers — particularly established companies that are in the midst of transitioning from mechanical to electrical features and functions — are wrestling with what to keep, what to replace, and when to make changes.
“There is a lot of technology that was developed in the late ’80s or early ’90s that is still prevalent today and drives a lot of car electronics,” Waran said. “However, we’re in a new paradigm. If you look at where the revolution is, the last century was mainly on the mechanical side. Now you’re starting to see it on the electronics side, and more specifically on the semiconductor side. With that complexity, there’s got to be a change in architectures. Consolidation means more compute in one area, which means the way systems are architected needs to change. Coupled with electrification, it has completely changed the way cars are designed. Right now, cars are basically a skateboard with a digital chassis that’s built on top. And that has changed vehicle design dramatically because it’s removed the burden of the past. You almost have a clean slate to go in and design a car, and I’m just referring to the electronics. It’s opened up new possibilities. When you couple autonomous driving on top of that, which brings new sensors and new technologies, it’s completely changed the way things operate.”
It also makes it more challenging to ensure cars will be safe throughout their projected lifetime, because many of the technologies in vehicles are new, and these vehicles will be interacting with other vehicles on the road, both newer and older. Keeping them safe requires every tool in the toolkit, whether that is a digital twin, more prototyping, simulation, or emulation, and that's even before they get to manufacturing.
“Will there be one to rule them all? For these models, it’s an inverse Lord of the Rings situation,” said Frank Schirrmeister, senior group director for solutions and ecosystem at Cadence. “There cannot be only one. You will need many, and one of the things you really want to make sure of is that they don’t contradict each other. This is a challenge at several levels. But will people build this? It depends on the level of abstraction and the scope. Will somebody have an MBSE model of the system of the car? Potentially, but it will be at such a high level of abstraction that you really can only make a general analysis. It comes down to what type of use model you’re driving and what type of answer you want from that digital twin. Any RTL representation of the design is essentially a digital twin at that portion of the life of the product, and design teams will build it if it makes sense. In the case of software development, for instance, adding an emulation model to the existing model that you use for implementation takes extra effort, so you need to figure out whether that effort is worth it for the head start it gives you on software development or other verification tasks. The answer is, ‘It depends.’”
There are a lot of similar answers in automotive. Even the term “digital twin” is open to interpretation.
“It sounds like it’s saying you can build one model, your digital twin, which can do everything,” said Tim Kogel, principal engineer for virtual prototyping at Synopsys. “No, it cannot. For all the questions, whether for the architecture, for the software, or at runtime, a model serves as a kind of predictor of what the real design does, and I don’t think there’s a one-size-fits-all digital twin. We build specific models that can answer specific questions or help with specific design tasks. Early on, the focus is all on the architecture side, where very early in the process you build non-functional models to analyze non-functional properties like power and performance, which have a big impact on your design decisions. Then you have a whole other breed of virtual prototypes for software use cases, which come later in the process.”
Kogel calls this a “perfect storm” involving new architectures, zonal controllers, and between 10 million and 100 million lines of code. Any one of these areas is significant, but all of them together are a much more complex problem to solve.
In the past, this was much easier with electronic control units. They were not overly complex, and they could be adapted to new requirements relatively easily.
“ECUs had their special requirements in terms of functional safety,” Kogel said. “As soon as safety-relevant functions were needed, they provided special hardware to satisfy the high ASIL B and D functional safety requirements. One example of this is dual lockstep hardware implementation, which is two processors basically doing the same thing in lockstep, then checking each other to make sure nothing unexpected happened. Now, with more advanced applications, along with the move toward autonomous driving, this is becoming more interesting than just duplicating hardware. People are actually duplicating things at the functional level. Mobileye calls this ‘true redundancy.’ At the same time, there have been reports where deep neural networks actually fail to be fail-safe. In the case of Tesla Autopilot, crashes have happened because the drivers don’t take all the disclaimers into consideration that FSD (full self-driving) is not fail-safe. Most OEMs today recognize that in order to achieve fail-safe autonomous driving, it’s about not just duplicating hardware, but really implementing algorithms in two completely different ways, with different architectures to achieve the required level of safety.”
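The two patterns Kogel describes can be sketched conceptually in a few lines. Real ASIL-rated lockstep is implemented in silicon, so the following Python is only an illustration of the checking logic; the function names and the braking-distance example are hypothetical:

```python
# Conceptual sketch only: real dual-lockstep checking happens in hardware.

def lockstep(fn, *args):
    """Dual lockstep: run the identical computation twice and compare the results."""
    a, b = fn(*args), fn(*args)
    if a != b:
        raise RuntimeError("Lockstep mismatch: possible transient fault")
    return a

def true_redundancy(primary, diverse, *args, tolerance=0.0):
    """'True redundancy': two independently designed implementations must agree."""
    a, b = primary(*args), diverse(*args)
    if abs(a - b) > tolerance:
        raise RuntimeError("Diverse implementations disagree: failing safe")
    return a

# Two independently designed ways to estimate braking distance (illustrative only).
braking_closed_form = lambda v, decel: v * v / (2 * decel)   # kinematic formula
braking_numeric = lambda v, decel: sum(
    (v - decel * i * 0.01) * 0.01 for i in range(int(v / decel * 100)))  # numeric integration

dist = true_redundancy(braking_closed_form, braking_numeric, 30.0, 7.0, tolerance=0.5)
print(f"agreed braking distance: {dist:.1f} m")
```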
Building platforms
Automotive chips also are required to be highly flexible. “When you look at the MCU architecture families from the likes of Infineon, NXP and Renesas, they address a large variety of functions with a large variety of architectures, but which are all derived from the same base architecture,” Kogel said. “A lot of architecture analysis work went into making sure they can support all kinds of different use cases.”
In effect, these are the new platforms. By adding this kind of flexibility into architectures, designs can be modified for power, performance, and cost.
“We need to figure out how to incrementally build these things because OEMs don’t do complete redesigns,” said Cadence’s Schirrmeister. “On the whole, where each OEM stands is very much in flux. That’s why people are building digital twins, which are essentially digital representations of what’s under development to understand the effects both short and longer term. When a digital twin is supplemented by thermal data, it can help to inform the reliability of a device or system.”
Power adds another level of complexity, and one that is extremely important for BEVs because it affects range. With an internal combustion engine, extra power draw may mean an earlier fill-up, but functions like air conditioning and heating are basically free. That's not true in vehicles powered by a battery, where 100 watts of continuous power draw for heating or high-performance chips can reduce range by 10 to 30 kilometers, Kogel said.
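A rough calculation shows how such a figure can arise, and why it depends heavily on driving conditions. In the sketch below, the battery capacity, consumption rate, and average speeds are assumptions chosen for illustration; the point is that a time-based load costs the most range in slow traffic, where it is spread over the fewest kilometers:

```python
# Rough sketch of how a constant 100 W auxiliary load eats into BEV range.
# Battery size, consumption, and speeds are illustrative assumptions.

def range_loss_km(battery_kwh, kwh_per_km, aux_kw, avg_speed_kmh):
    """Range lost to a constant auxiliary load, vs. the same car with the load off."""
    base_range = battery_kwh / kwh_per_km
    effective_kwh_per_km = kwh_per_km + aux_kw / avg_speed_kmh  # time-based draw spread over distance
    return base_range - battery_kwh / effective_kwh_per_km

for speed in (15, 30, 100):  # stop-and-go city, urban, highway (km/h)
    loss = range_loss_km(battery_kwh=60, kwh_per_km=0.15, aux_kw=0.1, avg_speed_kmh=speed)
    print(f"{speed:>3} km/h: ~{loss:.1f} km of range lost to a 100 W load")
```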
The power problem is particularly acute with increasing autonomy, which is basically supercomputing on wheels. “There are multiple cameras, and multiple other types of sensors,” he said. “There are complex algorithms, like classification and segmentation, and it requires pathfinding for the most complex processing tasks. When these are just put into generic, general-purpose GPU types of architectures, it burns too much power. We’ve all seen the images of autonomous vehicle prototypes in which the whole trunk has racks of compute that require kilowatts of power, which is in total conflict with this requirement. To get that to an acceptable level, you need to move from general-purpose architectures to much more dedicated architectures like vector DSPs, application-specific instruction-set processors, and for some pieces, even hardwired logic, because only that gives you the one or two orders of magnitude power reduction that you need.”
Connecting the vehicle
The efficiency of data movement, processing, and storage inside a vehicle also affects power, and the internal network plays a key role here.
“This is going to be super critical,” said Paul Graykowski, technical marketing manager at Arteris IP. “There will be many different systems connected together inside of a car, and those are going to be connected with multiple networks. Everything’s going to be chained together. There’s no telling where we’re going to end up with all of this, but it’s going to be very high-performance. It’s going to have to react quickly. This is not a simple problem to solve. We’re putting more and more devices into a vehicle. It’s not just driving from here to there. I also want my stereo, my TV, air-conditioned seats, auto driving, and I want a sunroof. There are so many little things. The user interface itself is going to be very complicated, and we need to make sure we have a good flow with that. There are going to be power needs everywhere, and we’re going to have to address those power needs. It might be simple, where we can just say, ‘This device is going to be fine but we need a very complex network for this other one.’ We’ve got to have the timing parameters set forward, as well. There is power coming into the vehicle, and it has to be distributed across all the various components of that vehicle in order to tie it to, say, an SoC.”
In addition, vehicles increasingly need to be connected to the outside world. Sumit Vishwakarma, principal product manager for the A/MS business unit at Siemens Digital Industries Software, noted one of the enablers and drivers for new automotive technologies is 5G. “Until 2022, the primary semiconductor revenue driver was smartphones,” he said. “But if you look at growth estimates now, automotive is catching up pretty fast in this space. That’s also being driven by all the innovation happening in AI, which is moving into the automotive space.”
Important to note here is how much data is being captured in automotive. “One estimate says that approximately 750 megabytes of data per second is being captured by a car through a variety of sensors,” Vishwakarma said. “And as we know, data is the gold mine today. Tesla is one of the most advanced vehicles in the market right now. This is not just because it’s an electric car, but because of the intelligence, the neural networks that are already trained in the chips. The way a Tesla operates is that it gets smarter by the day. When you drive a Tesla, the sensors are capturing the data. And from whatever the car is capturing through the sensors, at the same time it’s also capturing the action the driver is taking.”
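A quick back-of-the-envelope check puts that data rate in perspective, and suggests why only a filtered slice of it can ever leave the vehicle (the uplink figure below is an assumption):

```python
# Scale check on the quoted ~750 MB/s sensor data rate (uplink figure is assumed).
mb_per_s = 750
tb_per_hour = mb_per_s * 3600 / 1_000_000
print(f"~{tb_per_hour:.1f} TB captured per hour of driving")      # ~2.7 TB/h

uplink_mbit_per_s = 50                                             # assumed LTE/5G uplink
gb_per_hour_uplink = uplink_mbit_per_s / 8 * 3600 / 1000
print(f"~{gb_per_hour_uplink:.1f} GB/h of uplink capacity")        # ~22.5 GB/h
```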
Capturing the driver’s actions alongside the sensor data in the background is what has been called ‘shadow mode.’ “Let’s say when autonomous driving is off, the Tesla turns on the shadow mode,” he said. “It is trying to understand what the human is doing. It’s capturing all the information that the car sensors are capturing, and at the same time it is also capturing the human response to the surroundings, then sending that information back to the cloud when the car is powered. Then, when everything transfers to Tesla, their engineers try to see whether their models were predicting the right behavior by comparing what the human did and what the model would do. If it is the same thing, that means the model is getting better. But if there was a difference, the model might have ended up in an accident. They then try to adjust the weights of the neural network. This is how they keep training. By implementing the shadow mode, Tesla is getting smarter every day. That’s how the next generation of automotive is going to happen. It’s all about the data. And that data is not just for ADAS. It’s also key for infotainment, in-vehicle experience, and multimedia features. Everything is connected and transmitted through either LTE or 5G.”
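The loop Vishwakarma describes can be sketched at a very high level. This is an assumed structure for illustration, not Tesla’s actual implementation; the class, threshold, and frame contents are hypothetical:

```python
# Hypothetical sketch of a shadow-mode comparison loop: the model predicts in the
# background, and only disagreements with the human driver are kept for later upload
# and retraining. Not based on any vendor's actual code.

from dataclasses import dataclass, field

@dataclass
class ShadowLogger:
    threshold: float = 0.2                  # allowed gap between model and driver action
    disagreements: list = field(default_factory=list)

    def observe(self, sensor_frame, model_action: float, driver_action: float):
        """Compare the model's proposed action with what the human actually did."""
        if abs(model_action - driver_action) > self.threshold:
            # Only the interesting frames are kept, to be uploaded when the car is parked.
            self.disagreements.append((sensor_frame, model_action, driver_action))

logger = ShadowLogger()
logger.observe(sensor_frame={"cam": "..."}, model_action=0.8, driver_action=0.1)  # model would brake hard
print(len(logger.disagreements), "frame(s) flagged for retraining")
```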
Who’s driving?
One of the big changes underway in the automotive world is that OEMs are getting more involved in defining and designing chip architectures. If they are large enough, they can build their own teams. If not, they can collaborate with semiconductor companies and design service providers to customize designs for their specific requirements.
While the ecosystem generally understands how technologies are evolving, it’s still hard to pinpoint exactly how things will play out within the OEMs. “The OEMs are all headed in the same direction, but when it gets down into the nitty gritty, it’s all different,” Graykowski said. “What’s compatible, and what’s not? It’s hard to say where we’re going to end up ultimately, but when you start thinking about power and performance, a network on chip is uniquely qualified to go after these types of things. In the NoC, the key thing is going after latencies that impact performance, and making sure everything’s talking together. There might be certain sensors with SoCs alongside them that do local processing. What we can do is take something that was just a sensor and turn it into something much more complex, with memory on it. It’s going to have compute power on it, it might have AI engines on it, all of these different things. This makes it more complicated.”
He noted that at least one NoC is required on a sensor for all of the local processing. “It can actually say, ‘I know this is a stop sign.’ And instead of sending the raw image data across, now it may send that data to another processor somewhere else in the vehicle that says, ‘We’ve got to stop accelerating.’ Are there cars in front of it? Does the vehicle need to stop? All of that ties together. The model we have right now has upward of 20 complex SoCs in a car, and that number is just continuing to grow. We’ve already seen it quickly climbing to 26 or more.”
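A back-of-the-envelope comparison shows why classifying on the sensor matters for the in-vehicle network. The frame format, frame rate, and message size below are assumptions chosen only to illustrate the orders of magnitude involved:

```python
# Illustrative comparison: raw camera stream vs. a compact classified event.
# Resolution, bit depth, frame rate, and the message format are assumptions.

from dataclasses import dataclass

@dataclass
class PerceptionEvent:
    object_class: str       # e.g. "stop_sign"
    distance_m: float
    confidence: float

raw_bytes_per_s = 1920 * 1080 * 2 * 30       # one 30 fps camera at 16 bits/pixel: ~124 MB/s
event = PerceptionEvent("stop_sign", 42.0, 0.97)
event_bytes = 32                             # a few tens of bytes per detected object
print(f"raw stream: ~{raw_bytes_per_s/1e6:.0f} MB/s vs. event: ~{event_bytes} bytes")
```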
Conclusion
So how do you change the architecture of a car while keeping the power profile the same? What about new use cases? And what about more dependence on software? What do you do if you wake up and your car doesn't boot? What's the best way to account for performance? And how do you solve all of these issues while still keeping the cost within the budget of the majority of car buyers?
The answer to these questions will require a combination of technologies and methodologies, some of which are still being developed. But much more work needs to be done before drivers no longer have to worry about whether a vehicle will behave predictably and responsibly, and whether they will be able to reach their destination without searching for an available battery charger.
Related
Automotive Bandwidth Issues Grow As Data Skyrockets
Increasing autonomy and features require much more data to be processed more quickly.
Which Fuel Will Drive Next-Generation Autos?
So far there is little agreement on the best alternative to gasoline engines, but semiconductor technology is required for all of them.
Software-Defined Cars
This approach will streamline development and simplify upgrades, but it also increases design complexity.