Almost everything in engineering is a tradeoff, and which way the scales tip is usually determined by the goals of the end product. Hypervisors involve several such tradeoffs.
Hypervisors are seeing an increased level of adoption, but do they help or hinder the development and verification process? The answer may depend on your perspective.
In the hardware world, system-level integration is rapidly becoming a roadblock in the development process. While each of the pieces may be known to work separately, as soon as they are put together, the interactions between them can create a number of problems. The industry is working to come up with some tools and methodologies that constrain this problem.
The software world is taking a different approach, using a hypervisor to create well-defined interfaces between the individual software blocks and ensuring that one cannot disturb another. This enables applications that are more robust, significantly more secure, and amenable to staged development, and it allows the attributes of a real-time environment to be mixed, in a controlled way, with a more flexible operating system environment such as Linux.
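The separation described above can be sketched in miniature: isolated software blocks hold private state and interact only through an explicit message channel. This is an illustrative model, not a real hypervisor API; all names here are assumptions.

```python
# Hypothetical sketch: two isolated "partitions" that may interact only
# through a well-defined message channel, mimicking hypervisor-enforced
# separation. Purely illustrative; not a real hypervisor interface.
from queue import Queue

class Partition:
    """An isolated software block: private state, explicit channel only."""
    def __init__(self, name):
        self.name = name
        self._state = {}      # private; no other partition can reach it
        self.inbox = Queue()  # the only sanctioned interface

    def send(self, other, message):
        # The hypervisor analog: cross-partition traffic goes through
        # one audited path rather than shared memory.
        other.inbox.put((self.name, message))

    def receive(self):
        return self.inbox.get()

rtos = Partition("rtos")    # e.g. a real-time control partition
linux = Partition("linux")  # e.g. a rich-OS partition such as Linux
rtos.send(linux, "sensor reading: 42")
print(linux.receive())      # ('rtos', 'sensor reading: 42')
```

Because `_state` is never exposed, a fault in one partition cannot corrupt the other; all interaction is confined to the inspectable channel.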
But this problem is multi-faceted, and what helps in one area can cause problems for another. Balancing all of them may depend on what you are attempting to create and the value you place on certain attributes of the development process. “Safety critical applications are becoming increasingly competitive, and there is always a push for more functionality in these systems,” says Vicent Brocal, general manager for Fentiss. “We need a way to handle the complexity. We do that through abstraction. By using the hypervisor you can isolate things. That provides a form of abstraction, so now the components are isolated and interact through well-defined communications.”
Brocal explains this is just a natural evolution of the processes that have been in use for a long time in some industries. “It enables separated development, where each application can be developed independently. This is how things are developed within avionics systems. In the past, it was different computers performing different functions. The components were networked, so each component was developed by different companies using well-defined interfaces. Now we have the capability to do that for the software, and this is functionality enabled by hypervisors.”
The biggest draw for adoption of hypervisors is security. “There is a big game going on between the baddies and the goodies,” says Simon Davidmann, chief executive officer for Imperas. “The hypervisor is trying to provide good solutions to allow software to be run independently. The hardware allows it to be done quite efficiently. Consider the automotive industry. They have added all of the infotainment systems, but they didn’t put them in separate environments, in different virtual machines. We saw that in the Jeep last year where someone could get into the infotainment systems and then drive the thing off the road. If they had been using a hypervisor, then you would not have been able to do that.”
Davidmann contrasts this example with a better design. "The Tesla has been hacked, as well. It takes about four minutes to hack the Jeep, but this guy had a Tesla for three months and attempted to hack into it. He had to modify the hardware to monitor things, and in the end he did get in. But the most dangerous thing he found he could do was honk the horn. They had done a good job of separating the different pieces."
The problem with security is that you do not always know what the attack vector will be. "A hypervisor enforces isolation, and that is all it does," says Majid Bemanian, director of segment marketing for Imagination Technologies. "Now if there is a physical attack, it is a different type of attack vector. What if I need to enforce trust? For that you need a root of trust that is anchored to the silicon. When the platform is anchored, using a key or a physical connection to that device, the anchor allows you to authenticate. That is important so that the device is brought up in a known state, as the manufacturer expected. I can guarantee that the code that is running is what was intended. Now I can trust the platform. After power up, can someone get into the platform and compromise it? This could be a physical attack, a cyber-attack, or a network attack. You have to decide what measures you want to take to keep it in a stable, known state."
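Bemanian's chain of trust reduces to each boot stage being measured against a reference anchored in the root of trust before it is allowed to run. The sketch below simulates that anchor with plain hashes; a real root of trust lives in silicon (keys in fuses, an immutable boot ROM), and all names and the flow here are assumptions.

```python
# Illustrative sketch of a verified boot chain. SHA-256 measurements
# stand in for values fused into a silicon root of trust.
import hashlib

# Firmware images as the manufacturer shipped them.
bootloader = b"bootloader v1.0"
kernel = b"kernel v4.4"

# "Fused" reference measurements held by the root of trust.
TRUSTED = {
    "bootloader": hashlib.sha256(bootloader).hexdigest(),
    "kernel": hashlib.sha256(kernel).hexdigest(),
}

def verify(stage, image):
    """Refuse to hand control to a stage whose measurement differs."""
    return hashlib.sha256(image).hexdigest() == TRUSTED[stage]

assert verify("bootloader", bootloader)                 # untampered: runs
assert not verify("kernel", b"kernel v4.4 + implant")   # tampered: refused
```

Only if every stage verifies the next does the platform come up in the known state the manufacturer expected.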
“Security is not about stopping things, it is about risk management,” says Cesare Garlati, chief security strategist for the prpl foundation. “Risk management means that you add as many layers of security as you feel comfortable with. Nothing guarantees you 100% security. It just doesn’t exist. This is the result of human brains at work. We make mistakes, some of them deliberately, and vulnerabilities will always be found.”
Impact on debug and test
Separation can create problems for debug, and traditional solutions for that can create vulnerabilities if not managed carefully. "Consider the challenge of debugging such a system," says Davidmann. "The way you debug deeply embedded systems is to add debug logic into the hardware, such as JTAG. This is where things start to come unstuck. You really don't want to put stuff into the hardware that allows you to play with the hypervisor. Now, anyone who can access the physical device can break in through the JTAG and they have control of your system."
Garlati agrees. “JTAG is a security threat, but this is really an SoC story. The JTAG controller is one of many controllers in an SoC. The same is true for secure boot or root of trust. You need to add the necessary authentication at the hypervisor level. The JTAG controller must be the one that authenticates the request and that will enable you to debug one of the secure domains and not others.”
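An authenticating JTAG controller of the kind Garlati describes, one that unlocks debug for a single secure domain and no others, might look schematically like a per-domain challenge-response check. The scheme, key handling, and names below are illustrative assumptions, not a description of any real debug controller.

```python
# Schematic sketch: a debug gatekeeper where a vendor proves knowledge
# of a per-domain key before its domain alone is opened for debug.
import hmac, hashlib, os

DOMAIN_KEYS = {                 # provisioned per tenant/vendor
    "infotainment": b"key-A",
    "powertrain": b"key-B",
}

class DebugGatekeeper:
    def __init__(self):
        self.unlocked = set()

    def challenge(self):
        # Fresh nonce so a captured response cannot be replayed.
        self.nonce = os.urandom(16)
        return self.nonce

    def respond(self, domain, response):
        expected = hmac.new(DOMAIN_KEYS[domain], self.nonce,
                            hashlib.sha256).digest()
        if hmac.compare_digest(expected, response):
            self.unlocked.add(domain)   # JTAG enabled for this domain only
        return domain in self.unlocked

gate = DebugGatekeeper()
nonce = gate.challenge()
# The infotainment vendor can unlock only its own domain.
resp = hmac.new(b"key-A", nonce, hashlib.sha256).digest()
assert gate.respond("infotainment", resp)
assert "powertrain" not in gate.unlocked
```

The essential property is that unlocking is scoped: authenticating for one domain leaves every other tenant's VM closed to the debugger.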
However, Garlati believes this is only a problem for some product types. “If you have a vendor that provides the whole thing, they would debug at the hypervisor level itself and so there is no need for anything else. You have complete control of the system. When there are multi-tenant situations, one vendor may provide the hardware platform and other vendors may provide their own services and applications. You do not want one of these vendors to be able to gain access through JTAG to things that don’t involve their VM.”
So how do you provide secure debug? "The JTAG port can stop a processor and provide access to the memory," says Colin Walls, embedded software technologist at Mentor Graphics' Embedded Software Division. "The hardware can actually protect you from intrusion via the JTAG port so that it can control access. But JTAG-style debugging is fairly primitive when it comes to multi-core because it tends to be stop-start debugging, which doesn't work well when you have multiple cores working in harmony. Using a debugging agent that runs alongside the application code tends to be a more useful and realistic way to perform debug."
Are there better solutions? “In the perfect world we would be able to do non-intrusive debug of a multi-core system, but nobody has figured out a way to do that in a comprehensive way,” adds Walls. “If we could create Instruction Set Simulators that ran at full speed, then that would be the answer. But by definition, the cores we want to simulate are not much slower than the most powerful computers we have available. So we are shooting at a moving target and will never quite get there.”
However, there are some in the industry that believe this is possible. “More people are looking toward simulators because adding the hypervisor adds another layer of indirection,” says Davidmann. “In a simulator you can see everything. If you make it easy to debug the hypervisor, there is a fear that you will have made it easy to break in.”
This is clearly something the system development team has to consider. "Do you allow your platform to have access to physical connectivity?" asks Imagination's Bemanian. "You have to identify how you want to access the device. We do have mechanisms where there is a trusted element, a root of trust, which can control the JTAG as a gatekeeper. It may allow access or disallow access. So you can shut it down and prohibit access once it is in production. If you do allow someone to gain access and the signals are exposed, then you could explicitly say that certain areas of memory are prohibited. So enforcement mechanisms through the debug port can be done as well."
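Bemanian's last point, prohibiting certain memory areas even for an authorized debugger, amounts to a range check in the debug-access path. The addresses and names below are illustrative assumptions.

```python
# Minimal sketch: a debug-port read that enforces prohibited address
# ranges even after debug access has been granted. Toy values throughout.
PROHIBITED = [(0x1000, 0x1FFF)]   # e.g. a region holding key material

memory = {0x0800: 0xAB, 0x1234: 0x5E}   # toy memory map

def debug_read(addr):
    for lo, hi in PROHIBITED:
        if lo <= addr <= hi:
            raise PermissionError(f"debug access to {addr:#x} prohibited")
    return memory.get(addr, 0)

print(hex(debug_read(0x0800)))   # allowed region
try:
    debug_read(0x1234)           # falls inside the protected range
except PermissionError as e:
    print(e)
```

In hardware this filter would sit between the debug port and the bus, so even exposed JTAG signals cannot reach the protected regions.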
Open source or proprietary?
One question that comes up a lot is whether hypervisors should be open source, or whether proprietary solutions are more secure. "The usual arguments for open source are all there," says Walls. "People think that open source is more economical, but the need for solid technical support with a product involving security is very apparent. With the best will in the world, any open source product will never quite have the same level of support that handing over money tends to elicit. There are plenty of companies that have set themselves up to support open-source products, as well, and thus have a commercial interest. But making it more solid in terms of security makes more sense."
Shoi Egawa, president and CEO of Seltech, makes the case for proprietary hypervisors. "An open source hypervisor does not have clear maintenance. There are a variety of CPU vendors and SoC providers. The SoC vendors want to concentrate on the hardware because they are not software companies. Performance is often key, and open source performance so far is not very good. If you have an application with very critical access latency and someone wants to modify the source code, who is going to do that? The hypervisor is not like an OS or a driver. It is very difficult to do well. So who is going to help you with those kinds of changes?"
The open-source hypervisor model is similar to that of Linux. “There is a community that generates the critical mass for anything to succeed,” says prpl Foundation’s Garlati. “Then you add specific individual vendors that add commercial versions of it, with their own ideas, that may target specific verticals or achieve specific certifications such as required in automotive. They support that version. So open source is important for the basic building blocks and security is an example of that. Companies do not want to compete on these things. They want to compete on their value-add, and this has nothing to do with the hypervisor.”
Perhaps the key is that an SoC is a multi-core system, probably with an asymmetric architecture. Each core may be running a different operating system, or no operating system at all, and a hypervisor can help manage the complexity of resource allocation. But if you put all of your eggs into one basket, then once that basket is broken you have no protection.
To view part one of this report, click here.