Will Hypervisors Protect Us?

They may not be a silver bullet, but they are a good first step when it comes to securing cars and the Internet of Things. Problems start when people believe the job is complete.

Another day, another car hacked and another report of a data breach. The lack of security built into electronic systems has made them a playground for the criminal world, and the industry must start becoming more responsive by adding increasingly sophisticated layers of protection. In this, the first of a two-part series, Semiconductor Engineering examines how hypervisors are entering the embedded world.

Simon Davidmann, CEO of Imperas, frames the reason for considering hypervisors in terms of changes in hardware design. “In the past, if I wanted to have separate tasks running, I would probably design it so that I would have one on the left, one on the right, each running on different processor subsystems and the two would never touch. I would pipeline data from one to the other. They were inherently separated except for the information that they shared. The move to modern hardware, where you have multi-core processors or a farm of machines, means that everything is connected. And yet, you still want to be sure that they do not touch each other – that the jobs don’t infringe upon each other.”

It is the role of the hypervisor to achieve exactly that separation. Its main function is to create and manage virtual machines where the software believes it is running on its own dedicated machine. It is completely unaware of other software that may be running in another virtual machine, even though both are running on the same hardware.
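As a rough mental model, a type-1 hypervisor keeps one descriptor per guest and gives each its own private memory window and virtual CPU state, none of which the guest ever sees. The C sketch below is purely illustrative; the structures and names are hypothetical and do not reflect any real hypervisor's internals.

```c
/* Minimal, illustrative sketch of how a type-1 hypervisor might model
 * its guests. All names are hypothetical; real hypervisors differ. */
#include <stdint.h>
#include <stdio.h>

#define MAX_VCPUS 4

typedef struct {
    uint64_t regs[31];       /* saved general-purpose registers */
    uint64_t pc;             /* saved program counter           */
    uint64_t ttbr;           /* saved page-table base           */
} vcpu_context_t;

typedef struct {
    const char     *name;        /* e.g. "rtos" or "linux"       */
    vcpu_context_t  vcpu[MAX_VCPUS];
    uint64_t        mem_base;    /* guest-private memory window  */
    uint64_t        mem_size;
} vm_t;

/* Two isolated guests on the same hardware: each believes it owns
 * its own machine and is unaware of the other. */
static vm_t guests[] = {
    { .name = "rtos",  .mem_base = 0x40000000, .mem_size = 0x04000000 },
    { .name = "linux", .mem_base = 0x80000000, .mem_size = 0x40000000 },
};

int main(void)
{
    for (unsigned i = 0; i < sizeof(guests) / sizeof(guests[0]); i++)
        printf("VM '%s': %llu MB of private guest memory\n",
               guests[i].name,
               (unsigned long long)(guests[i].mem_size >> 20));
    return 0;
}
```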

Virtualization has become a staple in the data center and provides many advantages, such as CPU consolidation, fault tolerance and job isolation. But deeply embedded systems are not as regular as server farms and the priorities are different. Embedded systems tend to be heterogeneous and contain different memory architectures. In addition they contain multiple types of processing engines, including CPUs, GPUs and possibly FPGAs.

Hypervisors have seen adoption where the need is the most critical. “The usage of hypervisors is a trend but not a revolution,” says Vicent Brocal, general manager for FentISS. “We have been working with aircraft manufacturers and hypervisors are a key technology for them. The technology has gone through a natural evolution. It is an enabling technology. It provides an opportunity to different sectors in the industry, and most recently in automotive where they are looking to see how it could be applied to their specific needs.”

All of the sectors looking at hypervisors share one thing in common. “The interest in hypervisors has been caused by the explosion in connected devices,” says Cesare Garlati, chief security strategist for the Prpl Foundation. “This makes the security issue mainstream.”

It is security that is changing the game. “The hypervisor market was primarily for factory automation or automotive markets,” says Shoi Egawa, president and CEO of Seltech Corp. “A few years ago we started to see interest from the IoT because they needed security. This is a change because factory and automotive are not about security. They are functional safety-based. Control systems are often implemented using a real-time operating system (RTOS), but then they want to run graphics-rich content on top of Linux or Android. Factory automation was similar, where there is real-time control and either Windows or Linux on top of that. In automotive, they want to separate the infotainment from the control systems. The hypervisor can do that.”

But there are other important changes. “Most embedded systems are connected,” points out Majid Bemanian, director of segment marketing for Imagination Technologies. “The majority also have third-party applications running on them as well. With this kind of complexity, most of the players are concerned about how to protect themselves from all sorts of challenges. A TV set can now have multiple different streams. You have 4K content, you can make purchases on-screen, you can check your front door security monitor, play games – all of them may happen at the same time.”

Those concerns are amplified in other markets. “The media display could be a large TV in the home or the back seat of your car,” Bemanian says. “They all now have this notion of diverse and often orthogonal capabilities. The challenge is that they have different requirements from the standpoint of certification or compliance and having to put them all under the same roof is becoming a challenge. They need multiple environments that can be isolated and can be enforced.”

And while there are differences, there are also a lot of similarities. “Many-core systems in cars, for example, may look like different animals but in reality they turn out to be very similar,” says Colin Walls, embedded software technologist in Mentor Graphics’ Embedded Software Division. “The priorities are different for various reasons but you have the same need for security, for user interfaces, communications between modules etc.”

Mixed OSes
A common characteristic of embedded systems that run hypervisors is the combination of a real-time function and the need to run a legacy stack of software that is available within a specific operating environment. This has to be done in a safe and productive manner. “Many times, the critical components are real time and have strict timing constraints,” says FentISS’ Brocal. “In the hypervisor, we have a fixed allocation of resources, so we can guarantee that the critical application gets the appropriate allocation of CPU processing while other, less critical functions run within a Linux environment.”
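One common way to achieve the fixed allocation Brocal describes is a static cyclic schedule: a repeating major frame is carved into fixed slots, so the critical partition's CPU share is guaranteed no matter what the Linux partition does. The sketch below is generic and hypothetical, not any vendor's configuration format.

```c
/* Hypothetical static cyclic schedule: a repeating "major frame" is
 * divided into fixed slots, so the critical partition's CPU share is
 * guaranteed regardless of what the Linux partition does. */
#include <stdio.h>

typedef struct {
    const char *partition;   /* which guest runs in this slot    */
    unsigned    start_ms;    /* offset from start of major frame */
    unsigned    length_ms;   /* duration of the slot             */
} slot_t;

#define MAJOR_FRAME_MS 100

static const slot_t schedule[] = {
    { "control-rtos", 0,  20 },   /* critical control loop, every frame */
    { "linux-infot",  20, 70 },   /* infotainment gets the bulk         */
    { "control-rtos", 90, 10 },   /* second guaranteed control window   */
};

/* Return the partition that owns the CPU at a given time. */
static const char *partition_at(unsigned now_ms)
{
    unsigned t = now_ms % MAJOR_FRAME_MS;
    for (unsigned i = 0; i < sizeof(schedule) / sizeof(schedule[0]); i++)
        if (t >= schedule[i].start_ms &&
            t <  schedule[i].start_ms + schedule[i].length_ms)
            return schedule[i].partition;
    return "idle";
}

int main(void)
{
    for (unsigned t = 0; t < 200; t += 25)
        printf("t=%3ums -> %s\n", t, partition_at(t));
    return 0;
}
```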

Imagination’s Bemanian paints a similar picture. “Running in the GPU you may have the dashboard and infotainment all using the same subsystem. The problem is that you need to ensure a 60ms tick time for the dashboard. You have to respond to the speedometer when going from 60 to 80 miles an hour, or make sure that the speed is reflected properly. That is a safety issue. After that you can go back to navigation or entertainment.”

But the notions of real time are changing. “In the past we talked about hard real time and soft real time, and there are applications where that is still the case, but commonly we now have so much CPU power available that we are more focused on providing enough power to get the job done rather than the fine detail about how the time is allocated,” points out Mentor’s Walls. “Now we provide products that enable you to keep control of how your time is used, but in reality what were once considered real-time applications are reliably implemented using products such as Linux, which does not intrinsically have any real-time characteristics. But the system is so darn fast it doesn’t really matter.”

It is also important that there is almost no impact on silicon area or power. “There is a balance between the two and it is hard to say how much,” says Bemanian. “If you take a CPU that does not provide a lot of support for the hypervisor, then you will see an overhead around 10% to 15%, but that will drop to less than 1% to 2% with hardware support, depending on workload. In terms of silicon impact, it is noise level. We are talking about a hundred thousand gates in millions of gates.”

Hardware support
None of this can happen without some hardware support. “If hardware support is not provided, the overhead of a hypervisor becomes quite large and in general it just doesn’t make sense,” says Egawa. “Hardware virtualization, TrustZone, and several other ideas that are coming up each accelerate hypervisor performance. We only use 1% of the CPU performance. The target is not only the big CPUs but the IoT market, and that requires the usage of microcontrollers. These have very limited memory, so we have to make the hypervisor small and compact.”

Not all agree with this sentiment. “People often ask about the overheads of an RTOS, and this is a bit like asking the price of a Rolls Royce,” says Walls. “If you need to ask, you can’t afford it. If the overhead is a big issue then you are probably sailing so close to the wind that you should look at a system redesign. A modern RTOS is low overhead and a hypervisor is a pretty low overhead – not zero, but you would not use it in a system that does not have at least a sensible amount of capacity to take up the slack.”

All systems require some basic services. “For the hypervisor to work, we need a few capabilities from the hardware, including memory protection and some timers to be able to enforce temporal isolation,” says Brocal. “We also need mechanisms to protect certain processor instructions that can only be used by the hypervisor.”
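Those three ingredients are exactly what the hypervisor exercises on every partition switch. In the hedged sketch below, the mpu_*, timer_* and cpu_* functions are hypothetical stand-ins for platform-specific privileged operations, not a real driver API.

```c
/* Illustrative partition switch using the three hardware services
 * Brocal lists. The helper functions are hypothetical stand-ins. */
#include <stdint.h>
#include <stdio.h>

static void mpu_program_window(uint64_t base, uint64_t size)
{
    printf("MPU: window 0x%llx..0x%llx\n",
           (unsigned long long)base, (unsigned long long)(base + size - 1));
}
static void timer_arm_preemption(unsigned us)
{
    printf("timer: preempt in %u us\n", us);
}
static void cpu_enter_guest(const char *name)
{
    printf("entering guest '%s' in de-privileged mode\n", name);
}

typedef struct {
    const char *name;
    uint64_t    mem_base, mem_size;  /* guest's private memory window */
    unsigned    slot_us;             /* how long it may run           */
} partition_t;

static void switch_to_partition(const partition_t *p)
{
    mpu_program_window(p->mem_base, p->mem_size); /* spatial isolation  */
    timer_arm_preemption(p->slot_us);             /* temporal isolation */
    cpu_enter_guest(p->name);                     /* privilege boundary */
}

int main(void)
{
    partition_t rtos = { "control-rtos", 0x40000000, 0x04000000, 20000 };
    switch_to_partition(&rtos);
    return 0;
}
```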

But beyond that things can get tricky. “What you separate is the core with its registers and stack, and these switch each time you switch context,” says Garlati. “The same applies to the MMU. You switch to different memory management and there is no way for one process to cause damage to or bridge the division in terms of memory or processor. But the tough things are I/O, including DMA and dealing with heterogeneous environments. That is why you need something specific at the hardware level.”
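The "something specific at the hardware level" for DMA is typically an I/O MMU, which lets the hypervisor give each DMA-capable device the same restricted view of memory as the guest that owns it. The sketch below is illustrative only; iommu_bind and the stream IDs are hypothetical.

```c
/* Illustrative only: iommu_bind() is a hypothetical stand-in for a
 * real I/O MMU programming sequence. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *guest;      /* which VM owns the device             */
    unsigned    stream_id;  /* hardware ID the device's DMA carries */
    uint64_t    mem_base;   /* only window its DMA may reach        */
    uint64_t    mem_size;
} dma_assignment_t;

static void iommu_bind(const dma_assignment_t *a)
{
    /* In real hardware this would install a per-device translation
     * table; a stray DMA outside the window then faults instead of
     * corrupting another guest's memory. */
    printf("stream %u -> guest '%s', window 0x%llx (+%llu MB)\n",
           a->stream_id, a->guest,
           (unsigned long long)a->mem_base,
           (unsigned long long)(a->mem_size >> 20));
}

int main(void)
{
    dma_assignment_t ethernet = { "linux-infot",  0x10, 0x80000000, 0x40000000 };
    dma_assignment_t can_ctrl = { "control-rtos", 0x11, 0x40000000, 0x04000000 };
    iommu_bind(&ethernet);
    iommu_bind(&can_ctrl);
    return 0;
}
```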

Bemanian takes this a little further. “We do not slice the CPU, create a trusted environment, and then say I am secure by definition. That becomes very complex in implementation and only considers CPU-centric isolation, not the rest of the SoC. If there is a GPU, then let’s make sure it can support hardware virtualization and a hypervisor, along with the fabric, the connectivity devices, and so on. You have to take a holistic approach, and as a result we sliced it based on the flows. So you have a DRM flow, a payment flow, a gaming flow. A flow can touch every element of a system: the decoder, encoder, compression engine, CPU, GPU. All of these elements are being touched, and concurrently with multiple streams. So just securing the CPU is not enough.”
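One way to picture slicing by flow is a descriptor that records which isolation domain a flow belongs to and which blocks it may touch, with every virtualization-aware block policing transactions against it. The structure below is purely illustrative and not any vendor's interface.

```c
/* Purely illustrative "flow" descriptor -- not any vendor's interface.
 * Each end-to-end flow (DRM playback, payment, gaming) is assigned an
 * isolation domain, and every block it touches checks transactions
 * against that domain, not just the CPU. */
#include <stdio.h>

enum block { CPU = 1 << 0, GPU = 1 << 1, DECODER = 1 << 2,
             ENCODER = 1 << 3, CRYPTO = 1 << 4, DISPLAY = 1 << 5 };

typedef struct {
    const char *name;
    unsigned    domain;         /* tag carried with the flow's traffic */
    unsigned    allowed_blocks; /* bitmask of blocks it may touch      */
} flow_t;

static const flow_t flows[] = {
    { "drm-playback", 1, CPU | DECODER | CRYPTO | DISPLAY },
    { "payment",      2, CPU | CRYPTO },
    { "gaming",       3, CPU | GPU | DISPLAY },
};

static int flow_may_use(const flow_t *f, enum block b)
{
    return (f->allowed_blocks & b) != 0;
}

int main(void)
{
    /* A payment flow trying to reach the GPU would be rejected. */
    printf("payment -> GPU allowed? %s\n",
           flow_may_use(&flows[1], GPU) ? "yes" : "no");
    return 0;
}
```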

Egawa adds another area that needs hardware support. “Machine learning is becoming an area that is interested in hypervisors. We see the need to construct secure domains for systems that contain artificial intelligence components.”

When a system becomes multi-core, it adds a layer of complication. “When there is only one core, we can control everything through the system bus, and therefore we can control how each process is using the memory,” says Brocal. “But when there are many cores using the same memory and the same bus, there is interference between the cores. Multiple cores may try to access the bus at the same time, and thus we get bus contention. That impacts isolation in the temporal domain. It would be useful if the hardware could provide additional functions to limit or dynamically adjust the bandwidth of certain CPUs. This should be under the control of the hypervisor, based on criticality.”
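What Brocal describes amounts to memory-bandwidth regulation: give each core a per-period budget of memory transactions sized by criticality, and throttle any core that exhausts its budget until the period ends. A simplified, hypothetical sketch:

```c
/* Simplified, hypothetical bandwidth regulation: each core gets a
 * per-period budget of memory transactions, sized by criticality.
 * A core that exhausts its budget is throttled until the period ends,
 * so a greedy core cannot starve the critical one on the shared bus. */
#include <stdio.h>

#define PERIOD_US 1000

typedef struct {
    const char *owner;       /* e.g. "control-rtos", "linux-infot" */
    unsigned    budget;      /* allowed transactions per period    */
    unsigned    used;        /* consumed so far in this period     */
    int         throttled;
} core_budget_t;

static core_budget_t cores[] = {
    { "control-rtos", 5000, 0, 0 },   /* critical: generous guarantee */
    { "linux-infot",  2000, 0, 0 },   /* best effort: capped          */
};

/* Called (conceptually) on every counted memory transaction. */
static void account_transaction(core_budget_t *c)
{
    if (c->throttled)
        return;
    if (++c->used >= c->budget) {
        c->throttled = 1;    /* stall this core until the next period */
        printf("core '%s' throttled after %u transactions\n",
               c->owner, c->used);
    }
}

/* Called by the hypervisor's periodic tick: refill all budgets. */
static void new_period(void)
{
    for (unsigned i = 0; i < sizeof(cores) / sizeof(cores[0]); i++) {
        cores[i].used = 0;
        cores[i].throttled = 0;
    }
}

int main(void)
{
    for (unsigned t = 0; t < 3000; t++)   /* simulate heavy infotainment traffic */
        account_transaction(&cores[1]);
    new_period();                          /* budgets refilled each PERIOD_US */
    return 0;
}
```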

When considering safety and security, you have to look at all possible ways that a system could get compromised. “What happens if an application is attacking the DDR?” asks Bemanian. “It could launch a denial of service attack against the DDR. It is not attacking the critical function directly, but attacking the resources that it needs. You can get to the point where it violates safety requirements. So we have to look for quality of service policing in the GPU today. Then we can ensure that things do not go beyond what is expected.”

The demands on embedded hypervisors are evolving. Not only do we demand isolation, but also guaranteed robustness and security. At the same time, the hardware is getting more diverse and complex.

In part two of this article, we examine the impact of hypervisors on certification, security, debug and test, and the issues surrounding the choice of open-source or proprietary hypervisors.



2 comments

Derek Corcoran says:

This is a very interesting concept and something that automotive really needs to work on to convince customers that the vehicles will be safe, whether with driver assistance or, even more so, for driverless cars. I would think that the cost of having a more capable processor, or even multiple systems to deal with failures, would be a small addition to the overall cost. Safety and security should be paramount (fewer accidents and deaths are one of the key advertised advantages); otherwise, what is the benefit?

Hellmut Kohlsdorf says:

I am just starting to deal with hypervisors, focusing on type 1. My impression is that there is no single source for understanding hypervisors and virtualization, why they matter and how they work. This two-part article is the first I have found. Thanks.
