The concept makes perfect sense for conserving power in sleep mode, but reality is quite different.
By Ed Sperling
Put a fully charged smart phone in a bad reception area and the battery will run out in a fraction of the time it normally lasts in a good reception area. While this may be an annoyance to consumers, who need to recharge their phones more often, it’s a serious problem in SoC design.
Minimum power should be a simple number, but the reality is it’s more like a distribution than something finite. Anything connected wirelessly to an outside communications infrastructure has fluctuations. Throw in dynamic and static leakage, electromigration and a half-dozen other possible problems and it’s uncertain exactly what minimum power really means for each device at any point in time. In fact, the best that can be achieved is to minimize the amount of power being used at any point.
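To see why minimum power behaves like a distribution rather than a single number, consider a toy model (a sketch with purely illustrative numbers, not measurements from any real device): a sleeping phone's floor is the sum of leakage, which varies with process and temperature, and modem power, which rises sharply as signal strength falls.

```python
import random

# Illustrative numbers only -- not measurements from any real device.
BASE_LEAKAGE_MW = 5.0   # static leakage with all other blocks power-gated
RADIO_FLOOR_MW = 10.0   # modem idle power with a strong signal
RADIO_WORST_MW = 120.0  # modem power when boosting output to reach a weak cell

def min_power_sample(rng):
    """One sample of the 'minimum' power of a sleeping phone.

    Leakage varies with process and temperature; radio power varies
    with signal strength, so the floor is a spread, not one number.
    """
    leakage = BASE_LEAKAGE_MW * rng.uniform(0.7, 1.5)  # process/temp spread
    signal = rng.random()                              # 0 = no bars, 1 = full bars
    radio = RADIO_FLOOR_MW + (RADIO_WORST_MW - RADIO_FLOOR_MW) * (1 - signal) ** 2
    return leakage + radio

rng = random.Random(42)
samples = [min_power_sample(rng) for _ in range(10_000)]
print(f"min power spans {min(samples):.1f} mW to {max(samples):.1f} mW")
```

Even this crude model shows the floor spanning roughly an order of magnitude, which matches the dead-battery-in-a-bad-reception-area experience described above.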
“Min power all depends on the operation mode,” said Jay Chiang, Min Power marketing manager at Synopsys. “If the designers of a phone designed it right, you should be able to put it in your pocket and shut down everything except the wireless receiver, so only the 3G modem is running. But that isn’t as clear-cut as area and timing. There are different kinds of circuits, and the techniques needed to save power are different.”
The tradeoffs in synthesis are particularly tricky. If one circuit used to switch a signal differs from another, switching in one direction may consume a different amount of power than switching in the other.
“In the past, customers would implement low-power techniques outside the flow and everything would work,” Chiang said. “That’s no longer easy. There are a lot of tradeoffs involving intricate interactions. Although the Min Power idea is easy enough to understand, it’s hard to build into an existing flow. But we do see it as something that will grow in importance.”
Different angles
One solution being used more frequently is to offload certain functions to highly specialized chips with their own embedded drivers, code and software. These kinds of approaches were in ready supply at the Consumer Electronics Show last week.
“The potential is for people to look at more specialized hardware or specialized functionality instead of putting everything on one processor,” said Barry Pangrle, solutions architect for low-power design and verification at Mentor Graphics. “That way you can put one part to sleep and wake it up instead of the whole thing.”
This approach is at the opposite end of the spectrum from a general-purpose processor running virtualized software. Rather than specific functions running on specific cores or low-power chips, virtualization takes advantage of whatever resources are available. As a result it will always have higher overhead, because more resources have to be available and managed.
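The arithmetic behind the specialized-hardware argument is straightforward. A rough sketch (all power figures and the 2% duty cycle are hypothetical, chosen only to illustrate the tradeoff): a dedicated block can be power-gated to near zero when idle, while a general-purpose core under a virtualized scheduler both burns more power when active and cannot be fully gated, since other tasks may land on it.

```python
def avg_power_mw(active_mw, idle_mw, duty_cycle):
    """Time-weighted average power for a block that runs at `active_mw`
    for `duty_cycle` of the time and idles at `idle_mw` otherwise."""
    return active_mw * duty_cycle + idle_mw * (1.0 - duty_cycle)

# Hypothetical figures for a function that is active 2% of the time.
DUTY = 0.02

# Dedicated block: low active power, power-gated (~0.5 mW) when asleep.
dedicated = avg_power_mw(active_mw=30.0, idle_mw=0.5, duty_cycle=DUTY)

# General-purpose core: higher active power, and it idles hot because
# the virtualization layer may schedule other work onto it at any time.
general = avg_power_mw(active_mw=400.0, idle_mw=40.0, duty_cycle=DUTY)

print(f"dedicated block: {dedicated:.2f} mW average")
print(f"general-purpose core: {general:.2f} mW average")
```

With these assumed numbers the dedicated block averages about 1 mW against roughly 47 mW for the always-available core; the exact figures matter less than the structure, where the idle term dominates at low duty cycles.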
Also important to this equation is an understanding of the effect of switching and performance on battery life. Not all switching is necessary, and not all functions require a computation to finish a nanosecond sooner.
“If you back off on performance you get a big win from an energy standpoint,” said Pangrle. “The amount of energy required to do one 32-bit addition is significant in terms of overall battery life. There are a lot of applications, such as medical, that don’t have to run at 1GHz. And in a cell phone, some pieces only occasionally have to run at maximum performance.”
All of this affects the upper limit of power and performance, but it also affects min power. The lower the processor frequency, and the less frequent the switching, the lower the min power. And when it comes to battery life, more companies are beginning to think about just how to keep that min power at the bare minimum.
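The energy win from backing off on performance follows from the dynamic power equation: switching energy per operation scales as C·V², and because the voltage can usually be lowered along with the clock, slowing down saves energy superlinearly. A sketch with hypothetical operating points (the 1.2 V / 0.8 V pair and the unit capacitance are assumptions for illustration):

```python
def energy_per_op_nj(c_nf, v_volts):
    """Dynamic switching energy for one operation: E = C * V^2."""
    return c_nf * v_volts ** 2

C_NF = 1.0  # effective switched capacitance, illustrative unit value

# Assume voltage can scale down roughly with frequency (DVFS).
fast = energy_per_op_nj(C_NF, 1.2)  # e.g. 1 GHz at 1.2 V
slow = energy_per_op_nj(C_NF, 0.8)  # e.g. ~660 MHz at 0.8 V

savings = 1.0 - slow / fast
print(f"per-op energy: {fast:.2f} nJ fast vs {slow:.2f} nJ slow "
      f"({savings:.0%} less)")
```

Dropping the voltage by a third cuts the energy per operation by more than half, even though each operation takes longer, which is exactly why a medical device or a phone subsystem that doesn't need 1GHz should not run at it.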