The Limits Of Virtualization

Virtualization is king in the data center, but it has struggled for a role on mobile devices where more energy-efficient alternatives are available.


By Ed Sperling
The future of virtualization in the corporate data center is firmly established, but questions about the value of virtualization beyond that world remain as fuzzy as the future of many-core systems.

While there is no theoretical limit to how many cores can be added to an SoC, there has been very little progress in developing software outside of commercial applications that can take advantage of all of those cores. In fact, the accepted practical limit for most processors is holding steady at four cores for mobile and personal use—with most processing still occurring on one core—while in the corporate enterprise and in scientific number crunching there is talk of hundreds of cores.

One of the reasons that many-core systems garnered so much attention in the first place is that it was impossible to continue cranking up the clock frequency of processors after 90nm without cooking the chip. They simply run too hot, and the idea was that if chipmakers could just spread out the processing among multiple lower-frequency cores then everything would be okay. But despite decades of attention to the problem, parallelism in software has stalled, leaving virtualization as the only viable option.

So why hasn’t virtualization taken off in mobile devices? A key reason is power. Virtualization works like a sophisticated scheduler, allowing applications to take advantage of whatever processing resources are available. But using more cores isn’t necessarily the most efficient way to run applications. It often is more efficient to increase the number of processors rather than the number of cores, each with its own specific job, as long as they don’t have to be cache-coherent with each other.

This stands in stark contrast to data centers, where virtualization is well accepted. But the main reason it has been so popular there is that servers were running at between 5% and 20% utilization before virtualization was implemented, and most of them were fully powered on all the time. In battery-powered applications, most of the system is powered down most of the time. Keeping more parts powered on to take advantage of virtualization would significantly reduce battery life.
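The arithmetic behind that data-center win is straightforward. As a rough sketch—the utilization figures and the 80% consolidation target below are illustrative assumptions, not numbers from this article:

```python
import math

def hosts_needed(utilizations, target=0.80):
    """Estimate how many physical hosts can carry a set of lightly
    loaded servers once their workloads run as virtual machines,
    capping each host at a target utilization."""
    total_load = sum(utilizations)        # combined demand, in host-equivalents
    return max(1, math.ceil(total_load / target))

# Ten servers idling at 5%-20% utilization, per the consolidation argument.
servers = [0.05, 0.10, 0.20, 0.15, 0.05, 0.10, 0.10, 0.20, 0.05, 0.10]
print(hosts_needed(servers))  # two hosts instead of ten fully powered machines
```

Most of the power savings comes from the eight machines that can be switched off entirely—exactly the knob a battery-powered device has already turned.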

The one big exception appears to be security, which has always been the poster child for virtualization on multiple cores. Intel was talking about running security in the background with virtualization when it introduced its first dual-core chips in 2005, and the chipmaker subsequently invested $218 million in VMware in 2007.

“In the embedded world, virtualization is being used for security,” said Rao Gattupalli, principal architect for networking at MIPS Technologies. “For multiple cores, you can optimize resources by just shutting one core off. But in the embedded world, hypervisors are much leaner and meaner. So in the avionics industry, they’re using hypervisors to actually isolate things. There’s also a lot of development going on in this space in Europe with L4 (microkernels).”

There has been talk for several years about smartphones that can work in both corporate and home environments without mixing data. That approach can be based on a virtualization scheme, where data is kept separate but able to use the same hardware. So far, however, no devices have appeared on the market using this mechanism.

Microkernels, virtual machines and hypervisors
Microkernels have been around almost as long as virtualization. They provide a software layer underneath the operating system that can work across multiple cores to provide basic features such as threading and scheduling.
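The "basic features" a microkernel exports can be as small as a thread list and a scheduler. A toy cooperative round-robin scheduler illustrates the idea—Python generators stand in for kernel threads here, purely as an illustrative sketch:

```python
from collections import deque

def schedule(threads):
    """Round-robin over cooperative 'threads' (generators that yield
    to give up the CPU), recording the work each step produces."""
    ready = deque(threads)
    trace = []
    while ready:
        thread = ready.popleft()
        try:
            trace.append(next(thread))   # run until the thread yields
            ready.append(thread)         # still runnable: back of the queue
        except StopIteration:
            pass                         # thread finished; drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"

print(schedule([worker("a", 2), worker("b", 3)]))
# ['a:0', 'b:0', 'a:1', 'b:1', 'b:2']
```

A real microkernel such as L4 does this preemptively in privileged mode, with message passing between address spaces, but the scheduling loop is conceptually this small.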

Microkernels gained widespread attention in 1991 with IBM's Workplace OS, which was supposed to allow software portability across both client and server in the workplace. The company reportedly spent $2 billion on the effort before abandoning it as unworkable. Since then the concept has gone through multiple iterations, the most recent being the L4 family of microkernels out of Germany.

Virtualization takes a similar approach to utilizing available hardware, but adds even more flexibility with virtual machines and virtual machine managers, or hypervisors. Recent development has focused on bare-metal (Type 1) hypervisors as well as the more traditional hosted (Type 2) hypervisors, which run atop an operating system. But even virtualization isn't the most efficient use of resources, and in mobile devices every extra layer of software can affect battery life.

“The main challenge of virtualization is putting in the hypervisor,” said Gene Matter, senior applications manager at Docea Power. “Most virtual machines with a hypervisor are not constructed in a way that is OS power-management friendly, because to abstract the underlying layer you have to leave it running all the time. Until the hypervisor is power-aware, virtualization is not the best way to do nothing efficiently.”

Even then, the more efficient approach is to add more right-sized processors on a chip rather than to use fewer processors and more cores per processor.

“The challenge is to pick the proper operating point,” said Matter. “If you can run all the cores at normal speed instead of turbo, that’s better. You’re better off using a multithreaded operation through a scheduler than virtualization.”
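Matter's point about operating points follows from the dynamic-power relation P ≈ C·V²·f: turbo frequencies require higher voltage, so throughput per watt falls. A rough sketch—the voltage/frequency pairs below are made-up illustrative assumptions, not vendor figures:

```python
def dynamic_power(volts, f_ghz, c=1.0):
    """Dynamic CPU power scales roughly with C * V^2 * f (arbitrary units)."""
    return c * volts * volts * f_ghz

# Illustrative operating points.
NOMINAL = (1.0, 2.0)   # (volts, GHz)
TURBO   = (1.2, 3.0)   # turbo needs both higher frequency and higher voltage

# Four cores at nominal speed vs. one core in turbo.
four_nominal_power = 4 * dynamic_power(*NOMINAL)   # 8.0 units for 8 GHz of work
one_turbo_power    = dynamic_power(*TURBO)         # 4.32 units for only 3 GHz

print(8.0 / four_nominal_power)   # 1.0 GHz per power unit at nominal
print(3.0 / one_turbo_power)      # ~0.69 GHz per power unit in turbo
```

Under these (assumed) numbers, spreading work across cores at nominal voltage delivers roughly 40% more throughput per unit of power than racing a single core in turbo—which is why a power-aware scheduler can beat a hypervisor that keeps everything lit.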

Future options
The one exception is offloading to the cloud the parts of a task that are not latency-sensitive, he noted. “But that’s more of a cloud-based approach where the boundary of the machine is blurred.”

That mixing of data center and client machines may be the best opportunity for virtualization in the consumer space. Mark Throndsen, director of product marketing at MIPS, said the key is ramping up to peak performance only when it is needed, in order to save power.

“Throwing a whole core at a problem may be inefficient, but it is an effective way to dial up security requirements when necessary,” he said. “It has a role in multi-core and many-core systems, but for consumer devices the role is limited.”