Automakers are shifting to HPC chips for improved performance and lower system cost.
Automotive architectures are evolving quickly from domain-based to zonal, leveraging the same kind of high-performance computing now found in data centers to make split-second decisions on the road.
This is the third major shift in automotive architectures in the past five years, and it centralizes processing using 7nm and 5nm technology, specialized accelerators, high-speed memory, and highly targeted software architectures. For decades, most of the electronics in a car were encased in electronic control units, segmented by function such as braking or infotainment. As more safety features were added and centralized, functions were organized into distinct domains, each with its own software stack and automotive OS, communicating through a centralized gateway, which is the approach most new vehicles use today.
But as more autonomy is added into vehicles, the latency of a centralized gateway is proving unworkable. Tighter interdependence, scalability, and flexibility are all required, which a zonal architecture allows, and OEMs are at varying stages of adopting this approach. Strikingly, the automotive zonal architectures look a lot like scaled-down HPC data centers.
“Conventional HPC solutions are typically designed with little concern for power consumption, other than its effect on floor space requirements,” observed David Fritz, senior director for autonomous and ADAS at Siemens Digital Industries Software. “HPC for automotive began similarly, and required significant additional equipment costs for computing, cooling, and power. For years this was assumed to be a necessity and drove market predictions of exorbitant costs, as well as follow-on assumptions that only long-lived, public vehicles could justify the high price point.”
In recent years, technical advancements have proven these assumptions incorrect, and a better understanding of actual requirements for automotive high-performance computing has significantly reduced overall system costs. Fritz noted this has enabled auto manufacturers to re-architect next-generation platforms from the ground up, enabling them to be scaled and leveraged across multiple value tiers. That, in turn, has breathed new life into the vision of more intelligent, safe, and reliable autonomous vehicles for the masses.
Zonal architectures are key to autonomous driving, and a big part of this involves sensors. “Can you process the data at the sensor, or in the next step, pre-process in the zonal controller?” asked Robert Schweiger, director of automotive solutions at Cadence. “Or can you transmit raw data to the central compute unit and do raw sensor data fusion there? This is the vision of the OEMs, because that means all kinds of information arrive at this supercomputer at the same time. Then the system can make the smartest decision, whereas if you do pre-decisions, it’s like a decision tree. Certain assumptions are preset, which might lead to a non-optimal decision. There could be a smarter solution at the end, having everything processed at the same time in one place.”
Fig. 1: Zonal architecture concept. Source: Cadence
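To make Schweiger’s point concrete, consider a toy example in which three sensors each see only weak evidence of an obstacle. Per-sensor pre-decisions against a fixed threshold all stay silent, while fusing the raw confidences centrally flags the hazard. Below is a minimal C++ sketch; the confidence values, the 0.5 threshold, and the independence assumption are all invented for illustration, not drawn from any OEM’s actual fusion pipeline.

```cpp
#include <iostream>
#include <vector>

// Per-sensor "pre-decision": each zone makes its own call against a fixed threshold.
bool preDecide(double confidence, double threshold = 0.5) {
    return confidence >= threshold;
}

// Central fusion: combine independent per-sensor detection confidences.
double fuse(const std::vector<double>& confidences) {
    double pMissAll = 1.0;                       // P(every sensor missed it)
    for (double c : confidences) pMissAll *= (1.0 - c);
    return 1.0 - pMissAll;                       // P(at least one sensor is right)
}

int main() {
    // Hypothetical confidences from camera, radar, and lidar.
    std::vector<double> sensors = {0.4, 0.4, 0.4};

    bool anyLocalAlarm = false;
    for (double c : sensors) anyLocalAlarm = anyLocalAlarm || preDecide(c);

    std::cout << std::boolalpha
              << "pre-decisions raise an alarm: " << anyLocalAlarm << '\n'  // false
              << "fused confidence: " << fuse(sensors) << '\n';             // ~0.78
}
```

The raw information is the same either way; what changes is where the irreversible decision gets made.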
But that processing also needs to happen in real-time, particularly for safety-related decisions, which is why there is so much emphasis on high-performance compute capability.
“When we’re talking about HPC in automotive, we refer to that next evolution of the automotive architecture, the zonal gateway-based architecture,” said Ron DiGiuseppe, automotive IP segment manager at Synopsys. “It’s a little bit different in that instead of independent application domains, there are various sensors, whether it be a sensor in a powertrain, a sensor for an ADAS system, or an image sensor in the infotainment domain. The sensors are still there. They’re independent, but they go through the automotive network/zonal network. The zonal architecture is composed of multiple zonal gateways, which are essentially network switches. Instead of independent application-based domains, all of these applications can be handled in a more centralized processing module.”
Fig. 2: Progression toward zonal architectures in vehicles. Source: Synopsys
For ADAS, devices must be deemed functionally safe, which means the central compute chips usually are designed specifically for automotive use. “With the whole automotive industry moving toward electric vehicles, where the power storage is currently still the limitation, ideally such a chip needs to have as little power consumption as possible,” Schweiger said. “So the efficiency — the performance per watt — is super important. It’s very different if you have hardware sitting in the cloud in a data center. It’s no problem to use water-cooled systems and whatnot, but for a car in production usage, power consumption is the top priority.”
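A quick back-of-envelope comparison shows why performance per watt, rather than raw throughput, is the figure of merit. The TOPS and wattage numbers below are hypothetical, chosen only to illustrate the metric:

```cpp
#include <iostream>

int main() {
    // Hypothetical numbers, for illustration only.
    constexpr double dcTops  = 300.0, dcWatts  = 400.0;  // water-cooled data-center card
    constexpr double carTops = 100.0, carWatts = 60.0;   // in-vehicle SoC power budget

    std::cout << "data center: " << dcTops / dcWatts   << " TOPS/W\n"   // 0.75
              << "automotive:  " << carTops / carWatts << " TOPS/W\n";  // ~1.67
}
```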
Cost control
The trouble is that to achieve Level 3 autonomous driving and above, a substantial set of sensors is required, with the powerful computing platform sitting in the middle. This adds to the price of a vehicle. “If you want to go beyond premium cars, where you can hide those costs more easily, you need to see how you can bring down the cost of such a system. Therefore, it would not be a water-cooled system, and it will be a system that consumes, let’s say, less than 100 watts. That’s a challenge,” Schweiger said.
Price is a very big consideration in the automotive market. “They’re battling against every fraction of a cent,” said Peter Greenhalgh, vice president of technology and fellow at Arm. “Inevitably what will end up happening is they will solder down all the chips onto a PCB, but you can’t just pick one up and drop another one in because that socket would cost them an extra 10 cents, and they can’t justify that.”
Designing a powerful centralized compute system in vehicles is cost-effective, because a design can be developed once and leveraged across multiple vehicle models. “Fundamentally, the microarchitecture of the CPU or the GPU or the machine learning processor doesn’t change massively,” Greenhalgh said. “You put the sprinkles on top of the functional safety if it’s in automotive, or larger physical addressing if it goes into infrastructure markets. And obviously there are some parts inside the micro-architecture that have to cope with larger workloads or smaller workloads. But fundamentally, the micro-architecture of these blocks doesn’t suddenly change.”
So while these designs are expensive, the development costs can be amortized across many vehicles. “The general semiconductor industry is focused on 16/14nm and moving to 7nm, but when you go to the centralized-processing HPC architecture in automotive, that includes a lot of heavy processing, with multiple applications running simultaneously,” said Synopsys’ DiGiuseppe. “Now we’re talking about a 5nm-class device with higher performance and multi-core.”
Still, this technology is well-established in other applications. “The compute needs for any HPC application are similar — i.e., large quantities of data and fine-grained pattern analysis with complex system modeling — and automotive is not different just because it’s on wheels,” said Kevin McDermott, vice president of marketing at Imperas. “The real-time and mixed-criticality aspects are significant. In any multi-array processor configuration, the interaction of the processing elements requires software optimizations that are dependent on the programmer’s tools to debug and analyze complex multicore event sequences. Virtual platforms offer insights and controllability of event scenarios that are not possible with hardware prototypes, allowing extensive verification under failure-mode conditions, which is important for safety-critical and security requirements.”
What’s different
Automotive does add some new twists for HPC design, though. Data center applications have additional security layers, with physical walls and security guards. Automotive, in contrast, shares more similarities with consumer devices. Perpetual connectivity, including over-the-air updates, widens the attack surface for hackers, which requires continuous testing of system robustness. A virtual platform for security testing allows unique access to explore unexpected event scenarios and stress-test the inner defenses, options that are hard to exercise with physical prototypes, McDermott noted.
“Mixed criticality is a common multi-core design challenge in embedded applications,” said McDermott. “As more functionality is combined on a shared hardware resource, the potential for failure in separate areas could compromise high-priority and/or safety features. The automotive supply chain is evolving with increased contributions from IP and software providers, plus new service providers and IT infrastructure. In a multi-layered approach, each stage may add additional software as value-add. This leads to a complex software maintenance and update process. Virtual platforms can be used as digital twins for automotive HPC, to allow the unification of the supply chain with a reference platform for multidimensional continuous integration/continuous deployment (CI/CD), so the range of hardware features and different software versions and updates can be exercised as a virtual regression farm.”
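A minimal sketch of the mixed-criticality idea on a shared core, with hypothetical task names and timing budgets: the safety-critical task always runs first, while best-effort work is shed once its slice of the control frame is exhausted. Real systems enforce this in a hypervisor or a safety-certified RTOS rather than in application code:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

using Clock = std::chrono::steady_clock;
using std::chrono::milliseconds;

void brakeMonitor()   { /* safety-critical work: always executes */ }
void cabinAnalytics() { /* best-effort work: shed under load */ }

int main() {
    const auto frame = milliseconds(10);            // 100 Hz control frame
    const auto bestEffortBudget = milliseconds(3);  // slice left for low criticality

    for (int i = 0; i < 5; ++i) {
        const auto frameStart = Clock::now();

        brakeMonitor();  // high-criticality task runs first, unconditionally

        // Low-criticality work is admitted only while its budget remains.
        if (Clock::now() - frameStart < bestEffortBudget)
            cabinAnalytics();

        std::this_thread::sleep_until(frameStart + frame);  // hold the frame rate
    }
    std::cout << "5 frames completed with criticality-ordered budgets\n";
}
```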
All of this requires some powerful processing capability, though, and the ability to run different operations simultaneously using a multi-core architecture. It’s not uncommon to see a dozen or more independent 64-bit processor cores in a single SoC.
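In practice, that can look like one worker thread per domain task, with the OS or hypervisor mapping threads onto cores. A bare-bones sketch, with hypothetical task names standing in for real pipelines:

```cpp
#include <iostream>
#include <thread>
#include <vector>

void visionPipeline() { /* camera processing */ }
void radarPipeline()  { /* radar processing  */ }
void zonalGateway()   { /* network traffic   */ }

int main() {
    std::cout << "hardware threads available: "
              << std::thread::hardware_concurrency() << '\n';

    // One thread per domain task; the scheduler spreads them across cores.
    std::vector<std::thread> workers;
    for (auto fn : {visionPipeline, radarPipeline, zonalGateway})
        workers.emplace_back(fn);

    for (auto& t : workers) t.join();
}
```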
“For an ADAS domain controller, like a centralized processor in a zonal architecture, it’s a multi-chip implementation,” DiGiuseppe said. “There could be a high-performance 5nm host processor, but even that likely doesn’t handle all the performance, so there could be a co-processor, and potentially an integrated AI accelerator. It’s also very common to have an additional external chip with a deep learning AI accelerator, so even now we see multiple chips implementing ADAS-class domain controllers, and this will start to look a lot more like a more traditional data center HPC-based design, with centralization of applications, multiple applications running in parallel, a virtualized operating system, hypervisors, and distributed heterogeneous architectures, which looks a lot like a server in a data center.”
This is music to the ears of accelerator chip companies, which are just beginning to target the automotive market. “We’re seeing a lot of interest in using RISC-V in AI applications in general,” said Zdenek Prikryl, CTO of Codasip. “That includes the European Processor Initiative, and for automotive in particular, there are a number of options that involve functional safety.”
Accelerators can be used to speed up the processing of different types of data, such as streaming video, vibration, and temperature, which can improve overall system performance and reaction time.
Processing, network, storage differences
HPC for automotive is very similar to HPC in the data center — another area with unique requirements for processing, networking, and storage, DiGiuseppe said. “In a data center, you typically break up the processing, the network, and the storage. In automotive, they have the same capabilities — the central processing, the networking, and the zonal gateway, which includes networking and on-chip storage. The major difference in automotive is that it is considered more like an endpoint device versus a data center. A data center might have 10,000 servers, so everything is much more scaled up. It’s the same type of design, but not quite the same performance. The data center would have 200/400/800 Gbps Ethernet on the networking side, with terabytes of hard drives and SSDs for very low latency storage, whereas the automotive network is maybe 10 Gbps instead of 200 or 400 Gbps. Automotive may be trending toward 25 Gbps. It’s the same challenge, but at a different performance point.”
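Rough arithmetic shows how quickly the in-vehicle numbers get tight. Using illustrative camera parameters (not figures from the article), a single raw 4K stream consumes about 3 Gbps, so a 10 Gbps link carries only three of them uncompressed, which is one reason pre-processing in the zonal controller matters:

```cpp
#include <iostream>

int main() {
    // Illustrative camera parameters, invented for the sketch.
    constexpr double pixels       = 3840.0 * 2160.0;  // one 4K imager
    constexpr double bitsPerPixel = 12.0;
    constexpr double fps          = 30.0;
    constexpr double gbpsPerCam   = pixels * bitsPerPixel * fps / 1e9;  // raw rate

    constexpr double linkGbps = 10.0;  // today's automotive Ethernet class
    std::cout << "raw 4K camera:        " << gbpsPerCam << " Gbps\n"    // ~2.99
              << "cameras per 10G link: "
              << static_cast<int>(linkGbps / gbpsPerCam) << '\n';       // 3
}
```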
It’s a similar story on the processor side. “An Intel Xeon server in a data center might have a 12-core Intel processor with four processors on a motherboard. What’s happened in the data center, similar to automotive, is there are heterogeneous multi-core architectures with maybe 1,000 servers and the applications are all virtualized. It’s the same challenge in automotive just a bit scaled down. There are not thousands of processor cores. You have the same performance challenges, but not at the same scale,” he said.
Software architectures
Just as in a data center, software architectures need to be well defined. Virtualized environments in automotive need to be carefully architected, because with a heterogeneous architecture, virtualized applications must be spread across multiple cores and/or multiple SoCs/ECUs.
Chris Clark, senior manager in Synopsys’ automotive group, explained that in a data center there is high redundancy to ensure the integrity of the workload, while in automotive that redundancy is needed for safety reasons. “Ultimately you still run into the same challenges of high-speed interconnects to ensure that the different domains and compute platforms you are servicing have the necessary compute capability, but also address safety, security, and latency in some cases. They’re very similar, but there’s a tradeoff, too. When we have limited resources, we have to account for that. In a data center, there may be 1,000 servers. Two or three drop out every day. You don’t have that flexibility within the vehicle, especially when you look at something like a centralized compute platform. So concepts and ideas around hypervisors and virtualization are critical to include.”
For the design team, sensitivity to HPC technology requirements needs to be higher for automotive than for the data center.
“If you look at an HPC data center, you can walk around and make technology switches,” Clark said. “Maybe the core network infrastructure doesn’t necessarily have the bandwidth anticipated for the number of cores, but you can physically switch that out. From an automotive perspective, all of that is scaled down to a single SoC that sits inside of a square foot. You have to deal with all of the cooling requirements and all of the reliability components like vibration. Is your SoC going to delaminate because of heat? All these things that you wouldn’t think about in a traditional data center now have to be considered, so there’s a lot more sensitivity as to what components, what that infrastructure looks like, and understanding the requirements coming from that OEM to ensure that we can meet those requirements both from a computational standpoint and power consumption standpoint.”
Reliability in automotive HPC has a much different connotation than in a commercial data center, too. “No failures are allowed in automotive, whereas in data centers you’ve got to expect it, and there is automatic failover to another server if that happens,” DiGiuseppe said. “In automotive, sensitivity is reflected in the design approach for long-term operation, high reliability, functional safety, and zero-defect design. That’s all about design technique.”
The same is true for software. In a traditional data center, bare-metal hypervisors are architected into the system. While extremely lightweight, these hypervisors control all access to, and understanding of, the underlying hardware infrastructure, and they enforce rules for the operating environments that sit on top of the hypervisor. Other virtualization scenarios involve architectures built around a core operating system, or even a full-fledged operating system, which is then split into small computational domains for different virtualized environments on top of that platform. These approaches are utilized in HPC data centers in various configurations, depending on what they are trying to achieve.
Virtualization and hypervisors have different considerations in automotive. “You have to ensure that your SoC design supports both those hypervisor and virtualization environments, and that it supports them in a safe and secure way,” Clark said. “In the vehicle, if there are two servers and one crashes, if one of those servers is actually a virtualized environment, and it’s the brake controller, the amount of time that it would take to restart that virtualization environment may have specific requirements that you wouldn’t see in an HPC environment. The ability to restart those services, and restart them in an appropriate manner, is very critical to the concepts around functional safety that really don’t exist in a traditional data center HPC environment. Here, it says in certain fail conditions, the driver must be notified, or the capability of the vehicle must be reduced to ensure that the vehicle operates in a known safe way. For hypervisors and virtualization in an automotive architecture, some decisions have to be made as to what types of functionality in the vehicle, in that domain architecture, are suitable for being included in a hypervisor or virtualized environment. And that’s typically not a discussion or even really a core focus that happens in an HPC data center environment because you just have a much broader opportunity to make changes and make your data center a little bit more flexible for your needs, depending on what you’re trying to do.”
When these automotive SoCs are designed, there is a very specific envelope of operational requirements covering how the hypervisor is going to operate and the activity it is expected to see. If virtualization is included, the design needs to specify which components are virtualized.
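A minimal sketch of the restart-deadline logic Clark describes, with hypothetical service names and timings: a supervisor restarts a crashed virtualized service only if its worst-case recovery time fits the safety budget; otherwise the system drops to a degraded safe state and notifies the driver.

```cpp
#include <chrono>
#include <iostream>

using std::chrono::milliseconds;

enum class Status { Running, Crashed };

struct Service {
    const char*  name;
    milliseconds restartTime;       // worst-case recovery time for this VM
    Status       status = Status::Crashed;
};

void notifyDriverAndDegrade(const Service& s) {
    std::cout << s.name << ": restart too slow, entering safe/degraded mode\n";
}

void supervise(Service& s, milliseconds deadline) {
    if (s.status != Status::Crashed) return;
    if (s.restartTime <= deadline) {
        s.status = Status::Running;  // recovery fits the safety budget
        std::cout << s.name << ": restarted within deadline\n";
    } else {
        notifyDriverAndDegrade(s);   // fail-safe path the safety concept requires
    }
}

int main() {
    Service brake{"brake-controller VM", milliseconds(50)};
    Service infotainment{"infotainment VM", milliseconds(4000)};

    supervise(brake, milliseconds(100));         // hard real-time budget: OK
    supervise(infotainment, milliseconds(100));  // misses budget: degrade gracefully
}
```

A service whose worst-case restart time cannot meet its deadline is a poor candidate for virtualization in the first place, which is exactly the suitability decision Clark describes.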
One area where it would make sense to include virtualization is in an infotainment center, Clark noted. “If I have new functionality that comes in, I’m using containerization and virtualization, and I want to perform an update. It might be much easier to swap out a container or virtualized environment over the air, versus doing a full software update over the air. It’s really going to depend. This is one of those discussions that’s taking place right now with a number of OEMs. New standards are coming out for how to handle software updates and what the impact is to the zonal architecture.”
These kinds of decisions are essential for vehicle chip architectures. “The limiting factor for addressing computational needs is throughput, which in the data center comes from compute performance in the server and workload optimization for specific tasks,” said Tom Wong, director of marketing for design IP at Cadence. “In automotive, the process node of choice today is 16nm. However, from a compute standpoint, you’ve run out of gas to try to do a lot of these AI optimizations. 16nm silicon simply is not fast enough, and the amount of compute resources in terms of implementation silicon would be so great that you’re going to reach wafer costs that are too high. The die size at 16nm may be too large, and the power consumption profile is not suitable for automotive. These ingredients are driving the move to finer geometries, and this is happening as we speak. But we probably won’t see 7nm or 5nm automotive silicon until 2022 or 2023, depending on the process.”
High-performance memory considerations come into play, as well. Those are still being worked out. And for reliability, the mechanical assembly is expected to last 12 years without failing. Moreover, all of this has to work flawlessly as carmakers add increasing autonomy into vehicles.
“Increasingly, it looks like AI inference will be on the edge, it will be within the car. This adds a huge amount of AI computation for sensor analysis, resulting in pathfinding and path prediction, and those computations will be done in the car,” Wong said.
Considering that 38% of the overall AI market is automotive, and that the most demanding AI applications are within the car, Schweiger noted that it will be interesting to see how the AI pieces are sliced and diced. “Hardware accelerators that can be scaled up in a multi-core configuration are needed for the higher-performance AI operations, while still using one unit for the smaller pieces. This means scalability for AI is super-important. This is also interesting because as people learn how to leverage AI, it will only grow in importance in the future.”
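One way to read Schweiger’s scalability point is as a simple allocation decision: shard large AI workloads across several accelerator units, and keep small ones on a single unit. A sketch with invented unit capacities and workload sizes:

```cpp
#include <algorithm>
#include <iostream>

// How many accelerator units a workload needs, capped by what the SoC has.
int unitsFor(long long ops, long long opsPerUnitPerFrame, int unitsAvailable) {
    long long needed = (ops + opsPerUnitPerFrame - 1) / opsPerUnitPerFrame;  // ceil
    return static_cast<int>(std::min<long long>(needed, unitsAvailable));
}

int main() {
    const long long perUnit = 2'000'000'000;  // hypothetical ops per unit per frame

    std::cout << "keyword spotting: "
              << unitsFor(50'000'000, perUnit, 8) << " unit(s)\n";      // 1
    std::cout << "surround vision:  "
              << unitsFor(12'000'000'000, perUnit, 8) << " unit(s)\n";  // 6
}
```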