Virtualization In The Car

How and why abstraction layers are becoming essential in automotive design.

As the automotive industry grapples with complexity stemming from electrification, increasing vehicle autonomy, the consolidation of ECUs, and more stringent safety and security requirements, automotive ecosystem players are looking to virtualization concepts in a number of ways to realize the vehicles of tomorrow.

One way is with hardware virtualization: the ability of a device such as a GPU to host one or more virtual machines, each of which behaves like an independent machine with its own operating system while running on the same underlying device hardware. This means a single GPU can support multiple concurrently running operating systems, each of which submits workloads to the single graphics hardware device.

Historically, one of the trends that has shaped this landscape is isolation, in which completely separate modules in the car are physically segregated from one another in various configurations.

“We’ve seen variations of this,” said Kristof Beets, senior director of product management at Imagination Technologies. “Some people build an SoC, but they actually put in two graphics cores, one for the dashboard and the other to drive another screen. And these are physically separated. The problem is that if you have performance differences, and if one of those GPUs is smaller than the other and it’s not powerful enough, you can’t just use the other GPU to speed it up. That’s a big problem that people have had. Also, putting too many modules down and over-engineering them is expensive.”

A number of activities happening in the automotive industry are further shaping the automotive landscape, said Dean Drako, CEO of Drako Motors. “First you’ve got electrification coming, and that requires different software, different features, different capabilities. None of the car makers really has that under control. Second is the evolution towards ADAS and autonomous vehicles. None of the OEMs has any clue how they’re going to do that in production. It’s all in the R&D phase. They’re working to figure it out, but they haven’t worked out what OS they’re going to run this on. They’re just trying to figure out how to make it work. Additionally, the automakers have a huge problem coming up because the cost of compute in the car is continuing to rise as a percentage of the cost of the car.”

In 1950, electronics comprised just 1% of the car’s cost. Today, about 40% of the car’s cost is electronics, and that will continue to rise as self-driving capabilities and safety features are added, and as software is custom-developed for these systems. But companies also are beginning to question whether everything needs to be developed independently, particularly in areas where there is little differentiation.

“Since the OEM doesn’t have 1,000 engineers on staff to make their own, test it, and deal with compliance, safety and security, if someone shows up with an OS, and they can use it as a business model to save them money, they’re going to be all over it — especially if it solves the self-driving problem and the ADAS problem and the security problem, and all of the other electrification problems,” said Drako. “Tesla’s kicking their butt in terms of capabilities of the connected car, where they can use their iPhone to turn on the car, check the car, look at the cameras in the car. With all of this, the other automakers are caught flat-footed. They have no idea how to even begin to do this because they’ve got 100 computers in the car and each one does one thing. There’s a computer that does the camera system so that you can see the camera when you’re backing up. That’s nice, but that computer doesn’t talk to anything else. When they installed the beautiful over-the-air cell modem for OnStar so that you could get help over the cell phone if you crash, there’s no way to hook the camera up to that cell phone so that you can watch the video over it because it’s two different computers. OEMs are between a rock and a hard place. There is an industry dynamic that says we need another OS in this industry, because Tesla’s not going to make their own OS available to the OEMs.”

Hardware virtualization is meant to address these issues. “Say you have one GPU, but it has multiple client operating systems — basically multiple protected workloads that can grab some percentage of the GPU,” Beets said. “We put that fully in hardware to have minimal overhead, because we wanted to keep as much of that 100% of the GPU that we have, so that you can very finely distribute it over those different workloads. It’s similar to what people do on CPUs these days. It’s a kind of a time slicing-based system. You basically take the GPU, and you use rules, which is all software-based, to schedule the different workloads and protect them from each other.”


Fig. 1: Automotive virtualization model. Source: Imagination Technologies

Virtualization is especially crucial in automotive right now, given that the automotive industry is rethinking the future of the automotive architecture along the lines of a data center on wheels, noted Frank Schirrmeister, senior group director, solutions marketing at Cadence. “Virtualization is very important, and especially in automotive where you are facing the situation with zonal architectures and what to put where. You really need to be careful to separate the critical aspects from the less-critical aspects, like the audio and video. Some of those may be able to crash, but not the one for cameras that are crucial to autonomous driving and those types of applications.”

Hardware virtualization has been widely deployed since the beginning of the millennium across data centers, primarily to improve utilization of servers because it costs money to power and cool server racks. Through the use of hypervisors, multiple tasks, operating systems and applications can share the same hardware.

“It’s a way to create multiple virtual instantiations of the same hardware, and every instance is virtually dedicated to a specific product or software or application,” said Stefano Lorenzini, functional safety manager at Arteris IP. “The hypervisor is a bare-metal operating system that runs directly on the hardware and creates an intermediate layer with respect to other application or software programs that are running on top. So if you want to look to the architecture from the top to the bottom, you see the application, then you see the hypervisor, and then you see the hardware layer. The hypervisor is the thing that creates this illusion to the application that every resource of the SoC is dedicated to them.”
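
A simplified illustration of that layering appears below. The sketch assumes a static two-guest memory map and shows how a hypervisor can translate guest-physical addresses into machine addresses, so each guest believes the memory starting at address zero is its own. The region table and function names are hypothetical, not any particular hypervisor's API.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* One entry per guest: the window of machine memory it is allowed to use. */
typedef struct {
    uint64_t base;   /* start of the guest's slice of machine memory */
    uint64_t size;   /* size of that slice                           */
} guest_region_t;

static const guest_region_t regions[] = {
    { 0x80000000u, 0x10000000u },   /* guest 0: 256 MB (illustrative) */
    { 0x90000000u, 0x08000000u },   /* guest 1: 128 MB (illustrative) */
};

/*
 * Translate a guest-physical address to a machine address. Each guest sees
 * memory starting at 0, so it believes the hardware is dedicated to it;
 * the hypervisor adds the offset and enforces the boundary.
 */
static bool hv_translate(int guest, uint64_t gpa, uint64_t *ma)
{
    if (gpa >= regions[guest].size)
        return false;                     /* access outside the sandbox */
    *ma = regions[guest].base + gpa;
    return true;
}

int main(void)
{
    uint64_t ma;
    if (hv_translate(1, 0x1000, &ma))
        printf("guest 1 address 0x1000 -> machine 0x%llx\n",
               (unsigned long long)ma);
    if (!hv_translate(1, 0x0A000000, &ma))
        printf("guest 1 access beyond its 128 MB window rejected\n");
    return 0;
}
```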

This also solves a problem in autonomous vehicles, where the trend is to have many different distributed processors in the car, but not every one of those is dedicated to a specific function. In many cases, this is seen as a way of avoiding redundancy, which adds weight and cost. But that approach also limits the capabilities of failover systems, which are required for autonomous vehicles.

“Every provider was going to provide the operating system, the application for that specific processor,” Lorenzini said. “You might have tens and tens of different processors. With the level of complexity of the system increasing a lot, the trend now is to try concentrating all this computational power that you need in a single and centralized computer. However, if you are going to put them together, you at least want to re-use an investment from a past application, operating system, etc. But you’re going to put all of this stuff in, expecting it to work separately from each other on the same piece of hardware. This is where the problem comes from for the automotive OEM, because every application might have a different safety requirement and ASIL level. You might have, for example, a braking system that must be ASIL D, another application that must be ASIL B, another application that has no ASIL rating because it is not safety-critical. The moment that you put all these applications together, you must assure a separation or isolation among these different software tasks. And this is exactly what the virtualization can do, because it can create the illusion to assign tasks. Virtualization separates every task so that if one specific task fails due to a fault in the software, for example, all the other tasks will be not affected.”
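
As a rough sketch of that separation, the hypothetical partition table below assigns an ASIL level to each hosted application and restarts only the faulted partition, leaving the others untouched. The names and fault handling are illustrative and do not reflect a real hypervisor interface.

```c
#include <stdio.h>
#include <stdbool.h>

typedef enum { ASIL_QM, ASIL_A, ASIL_B, ASIL_C, ASIL_D } asil_t;

/* Static partition table a hypervisor might be configured with. */
typedef struct {
    const char *name;
    asil_t      asil;      /* safety level of the hosted application  */
    bool        faulted;   /* set when the partition's software fails */
} partition_t;

static partition_t parts[] = {
    { "braking",      ASIL_D,  false },
    { "instrument",   ASIL_B,  false },
    { "infotainment", ASIL_QM, false },   /* no ASIL: not safety-critical */
};

/*
 * Freedom from interference: a fault in one partition triggers a restart
 * of that partition only; the other partitions keep running untouched.
 */
static void handle_fault(partition_t *p)
{
    p->faulted = true;
    printf("fault in '%s' -> restarting only this partition\n", p->name);
    p->faulted = false;    /* restarted with a clean state */
}

int main(void)
{
    handle_fault(&parts[2]);              /* infotainment crashes...        */
    for (int i = 0; i < 3; i++)           /* ...braking and cluster run on  */
        printf("%-12s ASIL=%d running, faulted=%d\n",
               parts[i].name, (int)parts[i].asil, (int)parts[i].faulted);
    return 0;
}
```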

Types of hardware virtualization
As work in this space continues to evolve, engineering teams have two ways to implement hardware virtualization — para-virtualization and full hardware virtualization.

Para-virtualization works like a big software switch, where there may be one GPU and one piece of software that controls that GPU. On the user side, there may be a big software switch that says there are two applications, the dashboard and the infotainment system, and that software is what allows for the switching between the two.

“The problem with this approach,” Beets contends, “is that you don’t really have real virtualization because you’re basically doing everything in software. Another problem is that typically there’s only one piece of driver software that controls that GPU, so if one of these apps misbehaves, it can crash that piece of software. There’s a lot more risk in a system like that in terms of actual working. There’s also a lot more overhead because it is software and it is doing that manual switching. Usually what happens is you would run a frame of your trusted application, then you would soft reset the hardware to clean it up to make sure that it’s not being polluted. Then you run the other application. These kinds of resets take quite a lot of time on the hardware, but you have to do them because otherwise data from the previous application could impact the next one.”
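
The following is a minimal sketch of the frame-by-frame switching Beets describes, with an explicit soft reset between clients. The function names are placeholders rather than any real driver API.

```c
#include <stdio.h>

/* The two client workloads sharing one GPU through a single driver. */
enum client { CLIENT_DASHBOARD, CLIENT_INFOTAINMENT };

/* Stand-ins for the single driver's entry points (illustrative names). */
static void gpu_render_frame(enum client c)
{
    printf("render frame for client %d\n", (int)c);
}

static void gpu_soft_reset(void)
{
    /* Clears GPU state so data from the previous client cannot leak into
     * or corrupt the next one; on real hardware this costs real time.    */
    printf("soft reset\n");
}

/* Para-virtualized "big software switch": alternate clients every frame. */
int main(void)
{
    for (int frame = 0; frame < 4; frame++) {
        enum client c = (frame % 2) ? CLIENT_INFOTAINMENT : CLIENT_DASHBOARD;
        gpu_render_frame(c);
        gpu_soft_reset();    /* the per-frame overhead Beets describes */
    }
    return 0;
}
```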

Full hardware virtualization builds everything into hardware. There are multiple software interfaces in the hardware design, so many completely independent driver stacks can run. Each believes it has its own GPU and that it is talking directly to the hardware.

Some GPU providers, like Imagination, use a tiny firmware processor inside GPUs to manage those priorities, as well as to serve as a watchdog. If something misbehaves, the firmware can determine what is taking so long or why it is behaving strangely. It also can reject workloads. In addition, a software module uses priority schemes to isolate workloads on specific subparts of the GPU, providing flexibility in how the user can subdivide the GPU to meet requirements, Beets noted.
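
A rough sketch of that watchdog idea is shown below, assuming a simple per-workload time budget. The 16 ms budget and function names are illustrative assumptions, not the actual firmware interface.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define BUDGET_US 16000u   /* assumed per-workload budget (~one 60 Hz frame) */

typedef struct {
    uint8_t  guest_id;     /* which driver stack submitted the work */
    uint32_t elapsed_us;   /* how long it has been running so far   */
} workload_t;

/*
 * Firmware watchdog check: a workload that exceeds its budget is rejected
 * and reported, so one misbehaving guest cannot starve the others.
 */
static bool watchdog_check(const workload_t *w)
{
    if (w->elapsed_us <= BUDGET_US)
        return true;                        /* still within budget */
    printf("guest %u exceeded %u us -> workload rejected\n",
           (unsigned)w->guest_id, BUDGET_US);
    return false;
}

int main(void)
{
    workload_t ok  = { .guest_id = 0, .elapsed_us = 8000  };
    workload_t bad = { .guest_id = 1, .elapsed_us = 45000 };
    watchdog_check(&ok);
    watchdog_check(&bad);
    return 0;
}
```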

Software
Virtualization has been proven to be an effective way to compartmentalize different software stacks and reduce the overall hardware cost. Nevertheless, issues still need to be resolved when it comes to safety and security, particularly in automotive.

“Processor cores for automotive applications evolve slowly,” said Shaun Giebel, director of product management at OneSpin Solutions. “To support virtualization, additional hardware functions are needed. Combined with more software layers, this makes the overall verification and functional safety compliance even more complex. Formal verification of certain low-level software functions is already in use in specific safety spaces. Adding the formal verification of hardware is the only way to be highly confident that the system performs as intended, and that it is free from interferences and critical performance bottlenecks.”

The problem becomes more difficult without standardized solutions, such as an automotive-specific operating system. It’s the job of the OS to handle a number of the unique safety, security and complexity requirements in this space, but it’s much harder to accomplish this with competing proprietary OSes.

“Why don’t cars have their own operating system?” asked Drako. “You’ve got Android and iOS for phones, games and laptops, and servers have their own OSes. Cars are the only other high-volume consumer device in the world that doesn’t have its own OS.”

Beets agreed, and suggested this has to do with certification and functional safety. “Operating systems like Linux and Android are tested to a certain extent. But they are still open-source, and lots of people contribute. They’re very complex, as well. They’re very big, with lots of lines of code, so you can’t quite guarantee that they’re bug-free. They’re kind of the best effort. In automotive, where the dashboard is very critical for the user, it doesn’t require all those rich features because it’s basically just running a single application. So you can get away with a much smaller operating system that is much simpler, but which also can be verified by a third party to say it is written properly, it meets the requirements, and there are tools that do that verification work for you, as well. But if your code base is too big, that’s impossible.”

Among the best-known automotive-focused operating systems are INTEGRITY and QNX. There also are automotive-grade, simplified versions of Linux. All of these can run in a virtualized system.

“You can create all these separated, domain sandboxes, and each sandbox can be running its own OS,” Beets said. “Some of those are functionally safe operating systems like Integrity. Others could be just standard Android or Linux, and that’s fine. And if they crash, they basically just stop submitting work, so the GPU doesn’t get more rendering commands from them, which is fine because the dashboard is running its own little OS on another subset of the resources and will just continue.”

Security and virtual models
Security is another area where virtualization can play an important role.

“Without security, there is no safety, and without safety, there is no security,” Cadence’s Schirrmeister said. “Both sides go hand-in-hand, because if I don’t have security, then somebody might intrude and disable my brakes, and that’s not good.”

On the flip side of the term ‘virtualization’ is the digital twin concept, in which the entire system is represented virtually with models. Security features can be addressed from this perspective, as well, further left in the design flow.

“If an attacker got a hold of the system, what would they do to break in? Virtualization allows you to look at that perspective, because if you are farther to the right, you get too detached because it’s too focused,” said Jason Oberg, CEO of Tortuga Logic. “If you’re in a semiconductor company building a subsystem, it’s really hard to understand how your attacker would break into that subsystem because you’re so far down the path. But from a virtualization standpoint, say it’s an ADAS system and you’re trying to detect whether you’re going to run into an object. You can really think about if someone accesses this part of my system. Maybe it’s some external input to the system like a debug port, or maybe it could be just some way of accessing it from another domain, like in your OS, from user code or another way. But if it gets in and actually invokes something, it can affect the behavior of that ADAS system. You can model that whole behavior, and you can do that at the whole-system level, from the hardware all the way up through the OS and into the application level, which you couldn’t do if you were farther right.”

Virtualization helps on this side, as well. “It’s really around thinking about how this is going to get deployed in this environment, and if there’s an adversary trying to do something malicious, how would they try to break into the system?” Oberg said. “Then it’s about unfolding and unraveling everything from there. With virtualization, you really can have that complete picture. As you get farther down to the real thing, things get more siloed and isolated. It gets harder to reason about that.”

Others agree. “Virtualization enables companies to take full advantage of the performance of processors, optimize architectures, and address the increasing software complexity,” said Marc Serughetti, senior director of the Verification Group at Synopsys. “It also requires on-boarding of new tools to accelerate software development, integration and test. Dependency on hardware availability creates delays, uncertainties and limits productivity. Virtualizing the hardware for development purposes, using virtual prototypes that range from virtual hardware simulation to host-based execution, is a key technology to start development early, deploy more productive debug and test, scale development in server farms, and enable this development from anywhere at any time across collaborating teams.”

More work ahead
The magnitude of the challenges in the evolution of the automobile can be illustrated in many ways, one of which is the overlay of ISO 26262 requirements on the V Model of automotive development. At every point in the development cycle, even development steps that don’t end up in the vehicle must be properly accounted for, traceable, and testable, and virtualization has a role to play here, as well.

“Especially for safety-critical systems, virtualization touches every part of the system engineering process, but the tests are still disconnected,” said Lance Brooks, principal engineer in the Integrated Electrical Solutions group at Mentor, a Siemens Business. “The testing that’s done for safety and things like this in the design stages is so disconnected from the later parts of the process because of the hardware dominance in the design cycle. Especially in automotive, they are so hardware-centric. It’s all about the metal.”

As a result, automotive OEMs are struggling to hone their software expertise. “One of the things that they are literally struggling with is the hardware-centric mindset,” Brooks said. “Virtualization, digitalization, and digital twins can really help them, as it’s physically impossible to verify everything on real hardware. The tests are disconnected from design to verification, and this abstraction with the use of digital twins and virtualization is a way to help them break that barrier. If they embrace that throughout the process, they’re going to start to break down these silos, with the testing on that design side and the other side.”

And that could greatly speed up and improve the development process of more autonomous vehicles.



