Making Software More Efficient

The next big opportunity for saving power is clearly in software, but what to change, how to do it and who’s responsible are far from clear.


By Ed Sperling
Software is being targeted by most of the major chip vendors and EDA companies as the next big opportunity for saving power, but exactly which software should be modified and by whom isn’t always clear.

To some extent those answers depend upon which part of the software stack vendors or engineers believe can be adjusted most easily, and so far there is no widespread agreement. As with extremely dense semiconductor designs at advanced process nodes, nothing is simple and every decision has ramifications elsewhere in a design. In software, there also is no compiler that can optimize for both execution speed and power consumption. And to complicate things further, there is a strong sense of territoriality about who is responsible for actually writing more efficient code, and a strong aversion to the cost and risk associated with writing code where power consumption is a priority.

Nevertheless, there is agreement that the best way to save power will be in the software. Let’s take a closer look at what’s being done and what can be done.

Applications in control or under control?
One approach that seems to be gaining attention is to give applications more control over the basic functionality of the hardware, down to the ability to turn on and off power islands, increase voltage and ramp up performance. The idea is that if software is the driving force in why businesses and consumers choose hardware, then it also should have a major influence on performance and power. Using this argument, hardware should be modified to work more closely with some of the very popular applications instead of trying to run everything through the operating system. But which applications and which part of the hardware?

“The challenge today is to determine what dictates when you turn hardware on and off,” said John Bruggeman, chief marketing officer at Cadence. “If the application can’t control the hardware then you can’t optimize power. The application knows when it needs to amp up the power and turn on a video accelerator or use 3D. The issue isn’t software. It’s how the software controls the hardware.”
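To make that concrete, here is a minimal sketch in C of what application-directed power control could look like. The power_domain_on/power_domain_off calls and the domain names are hypothetical, not an actual platform API; the point is simply that the application, which knows it is about to decode video, turns on the accelerator it needs and sheds the audio path it does not.

    /* A minimal sketch of application-level power control. The power_domain_*
       calls and domain names are hypothetical; no real platform API is implied. */

    #include <stdio.h>

    typedef enum { DOMAIN_VIDEO_ACCEL, DOMAIN_AUDIO, DOMAIN_3D } power_domain_t;

    /* Hypothetical platform hooks: in practice these would land in a driver or
       power-management firmware, not in the application itself. */
    static void power_domain_on(power_domain_t d)  { printf("domain %d on\n", d); }
    static void power_domain_off(power_domain_t d) { printf("domain %d off\n", d); }

    static void play_video_clip(void)
    {
        power_domain_on(DOMAIN_VIDEO_ACCEL);   /* amp up only what the use case needs */
        power_domain_off(DOMAIN_AUDIO);        /* ...and shed what it does not */

        /* decode and display frames here */

        power_domain_off(DOMAIN_VIDEO_ACCEL);  /* return to the low-power baseline */
    }

    int main(void) { play_video_clip(); return 0; }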

He noted that real-time operating system vendor Wind River dabbled in this area prior to its acquisition by Intel last year, but that the work was never really taken seriously by the company.

“The challenge is that this is all part of a connected chain,” he said. “When an application says go to YouTube it sends a signal down in an optimized fashion to get a video resource. But it should be turning off the audio resource at the same time. That problem couldn’t be solved by Wind River, and it couldn’t be solved by Intel. The reason is it has to be solved by everyone working together.”

The layers in the middle
For the past six decades, control of the hardware has been ceded to the operating system. Application developers write to an application programming interface (API) in the operating system, and then the operating system connects into the available hooks in the hardware.

That works fine from a connectivity and performance standpoint, and it provides backward compatibility for application developers so they don’t have to rewrite their application every time a change is made to the operating system or even the hardware. But it provides absolutely no insight for application developers into how their software utilizes available resources.

The general thinking, at least among hardware companies, is that the operating system as we know it needs to change. Rather than a full-service, general-purpose operating system, the alternative is a slimmed-down scheduler, which is what “Type 1” hypervisor and ultra-thin RTOS developers have been proposing for the past couple of years.

“There are a number of areas where you can impact low-power services put into the operating system,” said Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys. “You can change the number of interrupt requests, but to do that the software needs to be very aware of where the data comes from. What happens when applications wake up to update data? You get power consumption from the applications checking data.”
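A back-of-the-envelope illustration of that last point: the per-wakeup energy number and polling interval below are assumptions chosen only to show the shape of the problem, comparing an application that polls for new data on a timer against one that is woken only when data actually arrives.

    /* A rough sketch (all numbers assumed for illustration) of why applications
       that wake up to check for new data cost power: a timer-driven poll every
       2 seconds versus an interrupt-driven design that wakes only on new data. */

    #include <stdio.h>

    int main(void)
    {
        const double wake_energy_uj   = 500.0;   /* assumed energy per wakeup, microjoules */
        const double poll_interval_s  = 2.0;     /* app polls every 2 seconds */
        const double updates_per_hour = 12.0;    /* data actually changes 12 times an hour */

        double polling_wakeups = 3600.0 / poll_interval_s;          /* 1,800 wakeups per hour */
        double polling_mj      = polling_wakeups * wake_energy_uj / 1000.0;
        double event_mj        = updates_per_hour * wake_energy_uj / 1000.0;

        printf("polling: %.0f wakeups, %.1f mJ per hour\n", polling_wakeups, polling_mj);
        printf("event-driven: %.0f wakeups, %.1f mJ per hour\n", updates_per_hour, event_mj);
        return 0;
    }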

One of the advantages of operating systems is that they can make the application independent of the hardware to speed the applications development process. While that helps with coding, it isn’t the most energy-efficient approach. ARM’s acquisition of the engineering team at PowerEscape in 2006 is a testament to this type of change. PowerEscape’s mission was to optimize the efficiency of software and hardware.

“If you can characterize each transaction with a power number then over time you can figure out how much energy you can save,” Schirrmeister said. “So if you take a virtual prototype of an LP versus a GP process, how much does the power usage drop?”
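The arithmetic behind that idea is straightforward. In the sketch below the transaction mix and the per-transaction energy numbers for a general-purpose (GP) versus a low-power (LP) process are invented for illustration; in practice they would come from a characterized virtual prototype.

    /* A minimal sketch of per-transaction energy characterization. The trace and
       energy numbers are made-up examples, not characterized silicon data. */

    #include <stdio.h>

    struct txn { const char *name; double energy_gp_nj; double energy_lp_nj; int count; };

    int main(void)
    {
        struct txn trace[] = {
            { "mem_read",  1.8, 1.2, 100000 },
            { "mem_write", 2.4, 1.6,  40000 },
            { "dma_burst", 9.0, 6.5,    500 },
        };
        double gp = 0.0, lp = 0.0;
        for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
            gp += trace[i].energy_gp_nj * trace[i].count;   /* total energy, GP process */
            lp += trace[i].energy_lp_nj * trace[i].count;   /* total energy, LP process */
        }
        printf("GP: %.1f uJ, LP: %.1f uJ, saving %.1f%%\n",
               gp / 1000.0, lp / 1000.0, 100.0 * (gp - lp) / gp);
        return 0;
    }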

Operating systems also can be thinned out or replaced entirely. The strategy behind real-time operating systems is to more closely match the hardware resources available to the software with the services needed to utilize that hardware. Virtualization can do roughly the same thing, but there is an overhead for portability that custom-written RTOSes can avoid. This presents some interesting options for multicore machines where RTOSes can be written for specific heterogeneous cores while virtualization can ramp up the number of cores and ramp them down as needed.

Finally, companies such as IBM have been working for several decades on more robust middleware that can do everything from corralling compute resources as needed to tagging data so it can be organized and searched. More recently there has been attention on how the power in those compute resources is used.

The dominance of hardware
By far, the biggest gains in power efficiency have been at the hardware level. Creating chips with multiple power islands, adding clock gating to cut dynamic power and power gating to reduce leakage, and turning down power when chips are not being used have allowed even the most advanced smartphones to get far more out of a single battery charge than ever before.

“When you step outside of the CPU, you save power by turning off devices that are not being used,” said Stephen Olsen, technical marketing engineer at Mentor Graphics. “Ten years ago we used to turn on an Ethernet controller and leave it on. Now some controllers power up only when it’s important. A small portion of the controller stays awake and then wakes up the rest of the controller if necessary. We’ve got devices hooked to the bus for USB, but if nothing is reading or writing you can power down the USBs.”
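The pattern Olsen describes boils down to an idle timeout with a small always-on wake-up block. The controller interface in this sketch is hypothetical, not any particular driver framework, but it shows the basic trade: suspend after a few idle periods, and pay a resume latency the next time traffic appears.

    /* A simplified sketch of idle-timeout power management for a peripheral
       controller. The structures and tick-based driver hook are assumptions
       for illustration, not a real driver API. */

    #include <stdbool.h>
    #include <stdio.h>

    #define IDLE_TICKS_BEFORE_SUSPEND 5

    struct controller { bool powered; int idle_ticks; };

    /* Called once per tick by the driver; 'activity' is true if any transfer
       touched the controller during that tick. */
    static void controller_tick(struct controller *c, bool activity)
    {
        if (activity) {
            if (!c->powered) {              /* wake-up path: pay a latency cost */
                c->powered = true;
                printf("resume controller\n");
            }
            c->idle_ticks = 0;
        } else if (c->powered && ++c->idle_ticks >= IDLE_TICKS_BEFORE_SUSPEND) {
            c->powered = false;             /* nothing reading or writing: power down */
            printf("suspend controller\n");
        }
    }

    int main(void)
    {
        struct controller usb = { true, 0 };
        bool traffic[] = { true, false, false, false, false, false, true, false };
        for (unsigned i = 0; i < sizeof traffic / sizeof traffic[0]; i++)
            controller_tick(&usb, traffic[i]);
        return 0;
    }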

The emphasis is on smart use of what’s needed. “No one has gone to the extreme of powering down and caching everything,” said Olsen. “If you power down you introduce latency. But that may be something to consider in the future.”

Virtualization inside of data centers has allowed corporations to turn off entire portions of the data center and run applications more efficiently on servers whose utilization has risen from an average of 5% to 15% up to as high as 85%, which is considered the practical upper limit.
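A rough consolidation example using those utilization figures (the server count and per-server power draw below are assumptions for illustration only):

    /* Back-of-the-envelope server consolidation math. The fleet size and
       per-server power draw are assumed values, not measured data. */

    #include <stdio.h>

    int main(void)
    {
        const double before_util = 0.10;   /* roughly 10% average utilization per server */
        const double after_util  = 0.85;   /* consolidated target utilization */
        const double servers     = 100.0;  /* assumed physical servers before virtualization */
        const double watts_each  = 400.0;  /* assumed draw per server */

        double needed = servers * before_util / after_util;   /* hosts that carry the same load */
        printf("hosts needed: %.0f, power: %.0f kW -> %.1f kW\n",
               needed, servers * watts_each / 1000.0, needed * watts_each / 1000.0);
        return 0;
    }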

What needs to happen next, however, is to bridge the gains at the hardware level with added efficiency at the software level, along with full characterization of the IP being used in designs at the hardware, software and firmware/driver levels. And at least some of it has to be capable of being modified by the end user.

“This is like the old Star Trek episode where Capt. Kirk says, ‘Divert all power to the shields,’” said Cary Chin, director of technical marketing for low-power solutions at Synopsys. “From a top level you need to be able to divert power. The question is whether we are headed toward a model where the user can say what they call important and shut everything else down. In the past that was up to the middle software layer. In the future it may be up to the user to decide how to change the power settings. If you’re on an important phone call and you’re running out of battery you should be able to say what dies and what stays on.”


