Rationalization For Power

Why power budgets are forcing some of the most significant changes in the history of electronics design; rethinking every component and process.


By Ed Sperling
Power budgets are becoming almost universally problematic. What used to be a unique headache for the cell-phone market has evolved into an ugly migraine that now includes everything with a battery—and increasingly even those devices that rely on a plug.

The result is a cascade of effects that are widespread and growing. And while the drivers of this effort vary widely from one market to the next, the push toward greater efficiency in everything electronic—and the migration from mechanical to electrical—is creating some radical changes in the design process.

What’s changing
The primary change underway is greater granularity and the elimination of waste. While advanced mobile designs are among the most efficient on the planet, there is still plenty of efficiency to be wrung out of these devices, and over the next several years those additional savings will be required. A smartphone that used to handle just voice and e-mail is evolving into what some are labeling a “superphone,” with the ability to process high-definition streaming video and audio, connect to Wi-Fi, LTE and LTE-Advanced, and run new and much more graphical applications.

There is plenty of capability to add processing power to these devices to boost performance, but just throwing cores at a problem will eat up the power budget, overheat the device and burn through batteries in a matter of minutes. Turning on and off portions of the chip as needed helps, but even that won’t solve the entire problem.

“There’s a lot more that has to be done,” said Eric Dewannain, vice president and general manager of Tensilica’s baseband business unit. “We need to reduce the frequency of certain parts, throw away gates you don’t need, and tailor performance, power and area. We’ve gone from performance to performance, power and area. To reduce the power you need the smallest power and area.”

To do that requires a deeper understanding of exactly what various components, such as processors, are actually doing. A general-purpose processor with four cores provides an enormous amount of processing power, but it’s often overkill for a particular function, and most software can’t take advantage of all those cores.

When multi-core chips were first rolled out, they were more a recognition that classical scaling of a single core was doomed than a well-thought-out plan to utilize multiple cores at lower frequencies in unison. The argument at the time was that software would make great strides in parallelism, something it had never been able to achieve in 50 years of programming. And while it’s true that some applications have been threaded onto two, and in some cases four, cores, the reality is that most applications people use still don’t utilize all those cores—and most don’t need all the performance those cores provide.

This has led to renewed interest in heterogeneous cores—each sized according to the needs of the device and the applications that will run on them—with the ability to power down cores when they’re not needed. In some cases that involves deep sleep modes, and in others it means turning them off completely.
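The scheduling idea behind heterogeneous cores can be sketched in a few lines of Python. This is purely illustrative: the core names, clock rates and power figures below are invented for the example and do not describe any real SoC. The policy is simply to run each task on the lowest-power core that can still meet its deadline, leaving the bigger cores powered down.

```python
# Illustrative sketch of a heterogeneous-core scheduling policy:
# assign each task to the smallest core that meets its deadline,
# so larger cores can stay powered down. All figures are made up.

CORES = [
    {"name": "little", "mhz": 500,  "active_mw": 80},
    {"name": "mid",    "mhz": 1000, "active_mw": 250},
    {"name": "big",    "mhz": 1500, "active_mw": 600},
]

def pick_core(cycles_needed, deadline_ms):
    """Choose the lowest-power core that finishes within the deadline."""
    for core in sorted(CORES, key=lambda c: c["active_mw"]):
        # mhz * 1000 = clock cycles available per millisecond
        runtime_ms = cycles_needed / (core["mhz"] * 1000.0)
        if runtime_ms <= deadline_ms:
            return core["name"]
    return "big"  # nothing meets the deadline; fall back to the fastest core

print(pick_core(2_000_000, 10))   # light task lands on the smallest core
print(pick_core(12_000_000, 10))  # heavy task needs the big core
```

Real schedulers weigh far more than this (thermal state, wake-up latency, migration cost), but the shape of the decision is the same: right-size the compute, then gate off the rest.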

“You need to tailor the processor for the task,” said Dewannain. “You may be able to reduce the size of the processor by a factor of two or three that way. To do that you need to know what actually needs to run, what in the architecture you don’t really need, and how to tweak various functions to lower the megahertz. If you have something running in 5 to 10 cycles, you may be able to reduce that to 1.”

Drivers of change
What’s clear is this problem isn’t getting any easier. Roger Smullen, director of strategic marketing at AnalogicTech, said the demand for efficiency is a “constant battle.”

“Our customers are looking for smaller chips and more efficient battery life,” he said. “That’s forcing improvements everywhere. We are implementing light modes, depending on the load required by the system. There’s also a huge push in the industry to move from linear battery chargers to switching power chargers, which are much more efficient. They produce less heat and speed up the charging process. And there’s a need for power management inside of systems where you need less energy to do the same job.”

But shrinking geometries and die area cause other problems, too. “The multicore side means you can get better performance,” Smullen said. “But the other, non-sexy part is that you have to divvy up power domains depending on noise. That requires a lot more communication between power management and components.”

It also requires a lot more up-front planning about exactly what will be used and how it will be used in devices. One of the baseline comparisons used frequently by power experts in the IC industry is the International Technology Roadmap for Semiconductors, which shows a double-digit increase in power along Moore’s Law if nothing is done to deal with power.

“There is a widening gulf between what will happen with power consumption vs. what is needed,” said Aveek Sarkar, vice president of product engineering and support at Apache Design Solutions. “If you look at the recent OMAP 5 announcement by Texas Instruments, there are two ARM processors running at 1.5GHz, two cores running at lower frequencies, and graphics. Qualcomm’s Snapdragon has a quad-core processor, a DSP and a modem, plus high-speed I/O and memory. All of this is creating a power budget gap.”

Dealing with that gap between what’s available and what’s needed is forcing changes in how systems are designed, what’s used on an SoC, how it’s used—and whether there is a better way to achieve the same goal.

“The first step is predicting power early in the design,” said Sarkar. “RTL is a good starting point. You want to make sure the number is consistent, so you need to define up front the number you want to have. That will help guide you to where reductions can be achieved. If you have a 1.5GHz chip you can’t use all the functions at once, so you need to develop a power-budgeting flow with an analysis-based approach. Right now power is not part of the design process. It may be functionally correct, but it does not tell you how to shut off what’s not needed. If you have a multi-core design you may shut off the clock to the other cores, for example, but what often happens is you still have data going to these other cores. You need to shut off the data in these cores, too.”

New capabilities, new tradeoffs
The key to effectively managing power budgets is dealing with power very early in the design—and understanding what works best from a system level rather than from a design-expertise or functionality level.

“The one lesson from the ITRS report is that you need to place more emphasis on power decisions at the architectural level,” said Barry Pangrle, solutions architect for low-power design and verification at Mentor Graphics. “The earlier in the design process, the more options that are available. The key is to provide designers with the capability to do more tradeoffs.”

Those tradeoffs aren’t always intuitive, however. While more and more functions—particularly power management—are relegated to software, one large chipmaker discovered that using that approach for one of its mixed-state processors would require a power budget of 50 watts. By doing everything in hardware with custom-designed accelerators, the power budget of the chip dropped to 15 watts.

“The challenge is looking at this from a system standpoint,” said Pangrle. “Mike Muller (ARM’s chief technology officer) said that if you go from 40/45nm down to 10/11nm, you’ve got 16 times the amount of integration that has to be done. If you’re using the same power budget, that’s 16 times as many devices and each needs to consume 1/16th as much energy. The reality is those devices will probably use about one-third as much. We’re seeing more and more functionality moving to software, but you really need to think about what runs better in hardware.”

This puts added pressure on IP vendors—particularly those creating soft PHYs—to allow their offerings to be customized for specific designs. Highly characterized and tested IP, backed by a deep understanding of the context in which it will be used, is still important, but that IP also needs to be flexible for power reasons.

“The key here is flexibility and configurability,” said Vishal Kapoor, vice president and general manager of the SoC realization group at Cadence. “With hard PHY you have to pick one. But with soft PHY the customer can tailor the IP to the needs of power or performance. We’re seeing more and more effort focused on that in an SoC, and we’re having a lot more discussion with customers and requests in that area.”

Spillover into other markets
The mobile markets have embraced this approach to varying degrees, with smartphone chipmakers and increasingly tablet chipmakers leading the way, followed by a variety of other devices. But it’s also starting to become an issue inside of datacenters, where the cost of powering servers and cooling them can reach millions of dollars each year.

Virtualization was a first step in improving server utilization. Cloud computing is a second. But what cloud services also do is allow companies to rethink what they run and whether they actually need everything in-house—or whether it can be replaced by more efficient applications and server strategies.

This is already an important consideration for whether chipmakers use simulation or emulation to verify their designs using their own datacenters, but increasingly it’s also driving some non-technology companies to consider splitting server functions between Intel’s most powerful Xeon chips and its power-saving Atom processors. Even ARM has made inroads into datacenters with Linux-based applications. And considering that IT departments are extremely cautious about making any changes because of the potential for disaster inside of major companies, this is an almost startling pace of change.

In the mil/aero markets, these kinds of decisions are playing a role in reducing the weight that soldiers have to carry and in extending the range of a variety of equipment ranging from drones to ships. And in the automotive market, the rising price of gasoline is pushing the market to consider a variety of new options.

In all cases, power is now the driver. The race is on to re-think and rationalize every decision in the design process based upon a combination of how devices will be used, how much performance is needed and when, and how to best achieve that goal using as little power as possible.
