New Incentives For Lowering Power

Economics, increased functional density and a variety of new approaches are increasing the demand for lower-power designs.

By Ed Sperling
Despite all the focus by design teams on lowering power over the past few years, power is still the last consideration in the power-performance-area equation for many companies and applications. That’s beginning to change, however, even for applications that in the past have not been particularly power-sensitive.

There are several reasons for this shift. No. 1 on the list is the business opportunity, and it’s becoming evident across a variety of markets. In the mobile market, the clamor for longer battery life already has been heard and accepted. In the enterprise computing market, that opportunity is only now being explored.

“The enterprise and the mobile are the same,” said Simon Segars, executive vice president at ARM. “Everything is power constrained. As we go forward, all problems will look like mobile problems.”

ARM has been talking about the need for more efficiency in data centers for the past couple of years, cementing deals with AMD and other chipmakers in the past few months that could help drive sales there. What’s changed inside of IT departments to make this happen is a drive to improve energy efficiency even further. The first phase of that effort was to consolidate servers through virtualization. The next phase will be to improve the efficiency of the servers themselves, with lower-power chips and some of the same dark-silicon approaches that have worked so well inside of mobile devices.

ARM isn’t alone in recognizing this opportunity. Intel and IBM have been developing far more power-efficient implementations of their chips, and IBM is developing system-based power savings capabilities that go well beyond just the chip.

New packaging, increased density
A second reason why power is becoming more important has to do with increasing density. This is more than just the effect of Moore’s Law and shrinking features, though. It has to do with adding more functionality onto an SoC, including more processors, more memory and more wires.

It’s the wires and interconnects, in particular, that are causing problems. While it’s theoretically possible to keep shrinking transistors down toward angstrom-scale dimensions, wires run up against the laws of physics. Pushing electrons through an extremely thin wire raises the resistance, and with it the heat. The interconnects, meanwhile, suffer from electron scattering in narrow copper lines, which pushes their effective resistivity even higher.
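That resistance effect shows up even in a back-of-envelope calculation. The sketch below is purely illustrative: it uses bulk copper resistivity and invented wire dimensions and currents, and it ignores the additional resistivity increase from scattering at narrow linewidths, so real numbers would be worse.

```python
# Illustrative only: how wire resistance, and the I^2*R heat that goes with it, grows as the
# cross-section shrinks. All dimensions and currents are assumptions, not process data.

RHO_CU = 1.7e-8  # bulk copper resistivity, ohm*m (effective resistivity is higher in narrow lines)

def wire_resistance(length_m, width_m, thickness_m, resistivity=RHO_CU):
    """R = rho * L / A for a rectangular wire cross-section."""
    return resistivity * length_m / (width_m * thickness_m)

LENGTH_M  = 100e-6   # a 100 um route (assumed)
CURRENT_A = 50e-6    # 50 uA of signal current (assumed)

for width_nm in (90, 45, 22):
    w = width_nm * 1e-9
    r = wire_resistance(LENGTH_M, w, w)  # assume a square cross-section
    heat_nw = CURRENT_A**2 * r * 1e9
    print(f"{width_nm:>3} nm wide: R = {r:7.0f} ohm, I^2*R = {heat_nw:7.1f} nW")
```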

Just the routing congestion around a processor core or memory is enough to make most SoC designers wince. But add in a stacked die architecture and the thermal side effects of power can kill a chip—or more accurately, several of them at once. The problem is that computational density is increasing even if there is more room on a piece of silicon. There are more IP cores, those cores are running faster, and operating them generates more heat.

That demands system-level solutions, and one of the alternative approaches being considered is stacked die. Gregg Bartlett, CTO of GlobalFoundries, called stacked die the “third knob to turn” after new materials and more intelligent power distribution. “This will appeal to certain market segments such as mobile, although there are still a lot of integration challenges.”

New technology, new approaches
A third reason for this shift in perception has to do with an array of different ways of viewing power and the possible solutions.

One approach involves the on-off cycle for processors, and there are a couple of different aspects to this. One involves turning blocks on and off quickly, which actually can consume less energy than running them slowly at a low voltage.

“Each gate consumes less power if you run it slowly, but you need more cycles to complete a task,” said Will Ruby, senior director of technical sales at Apache Design. “You either run at just enough power and dynamically lower the voltage to just meet timing, or you raise it to the point where the chip is functional. But you need to do these kinds of fundamental architectural tradeoffs early.”

One permutation of this approach involves near-threshold computing, which Intel has been working on for advanced process nodes. Rather than running the logic at its full nominal supply voltage, it operates just above the transistors’ threshold voltage, trading clock speed for a sharp drop in energy per operation. That’s a more complex scheme for accomplishing the same goal, and it can have a significant impact on the overall energy consumption.
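A rough sketch of these tradeoffs is below. It approximates dynamic energy per cycle as C·V² and adds a leakage term for as long as the block stays powered; the capacitance, leakage, voltage and frequency figures are invented for illustration and are not Apache or Intel data.

```python
# Illustrative sketch of the voltage/frequency tradeoffs described above. Dynamic energy per
# cycle is approximated as C*V^2; leakage accrues until the block is power-gated. Leakage is
# held constant here for simplicity, although in practice it also falls at lower voltages.
# Every number is an assumption chosen for illustration.

C_EFF_F   = 1e-9   # effective switched capacitance per cycle (assumed)
LEAKAGE_W = 0.5    # leakage power while the block is on (assumed, deliberately high)
CYCLES    = 1e9    # amount of work in the task

def task_energy(voltage_v, freq_hz):
    """Energy to finish the task, assuming the block is power-gated the moment it is done."""
    runtime_s = CYCLES / freq_hz
    return C_EFF_F * voltage_v**2 * CYCLES + LEAKAGE_W * runtime_s

scenarios = {
    "race to idle      (1.0 V, 2.0 GHz)": (1.0, 2.0e9),
    "just meets timing (0.8 V, 0.5 GHz)": (0.8, 0.5e9),
    "near-threshold    (0.5 V, 0.1 GHz)": (0.5, 0.1e9),
}

for name, (v, f) in scenarios.items():
    print(f"{name}: {task_energy(v, f):5.2f} J in {CYCLES / f:4.1f} s")

# With heavy leakage and limited voltage headroom, racing to idle comes out ahead; with light
# leakage, or aggressive gating of idle logic, the lower-voltage operating points win instead.
```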

But there also are studies under way for some applications to rethink how computing is actually done and what the ultimate goal is. Stanford University and the University of California at Berkeley both are focusing on arrays of low-power chips that are not 100% accurate individually, but which together, as a mesh network, provide an acceptable level of accuracy. In something like search or social media, that should be enough to satisfy most users, and it could save enormous amounts of money in data center operating costs.
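As a toy model of that idea (assumed numbers, not the Stanford or Berkeley work itself), the sketch below shows how a small mesh of individually unreliable nodes can still produce an acceptable answer by majority vote.

```python
# Toy model: each low-power node answers a query correctly only 80% of the time (assumed),
# but a majority vote across a small mesh of such nodes recovers much higher accuracy.

import random

random.seed(0)

def voted_accuracy(num_nodes, p_correct, trials=100_000):
    """Fraction of trials in which more than half of the unreliable nodes answer correctly."""
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p_correct for _ in range(num_nodes))
        wins += correct > num_nodes // 2
    return wins / trials

for nodes in (1, 3, 5, 9):
    print(f"{nodes} node(s): effective accuracy ~ {voted_accuracy(nodes, 0.8):.3f}")
```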

A second permutation has to do with more intelligent storage. Cary Chin, director of marketing for low-power solutions at Synopsys, said the big challenge right now is where to store data.

“We’re in the middle of a shift right now,” he said. “We may end up with a model where a lot of the storage is online in rented space, but the economics are not clear about what people are going to want to archive online. This started with cloud-based e-mail.”

The tradeoff from a design standpoint in this equation is storage and access versus communications. In some cases it’s more efficient to store data locally, while in others it’s more efficient to use a communications network. Chin cited Apple Maps as an example, where not everything is streamed to the mobile device. “A lot of this will also depend on connectivity coupled with enough local processing power to make it work,” he said.
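A back-of-envelope version of that tradeoff is sketched below; the per-megabyte energy figures are assumptions made up for illustration, not measurements from any device or network.

```python
# Back-of-envelope comparison: is it cheaper in energy to keep data on the device, or to
# re-fetch it over the network each time it is needed? All figures below are assumptions.

LOCAL_READ_J_PER_MB     = 0.01    # reading from local flash (assumed)
RADIO_FETCH_J_PER_MB    = 0.5     # pulling the same data over a cellular radio (assumed)
LOCAL_KEEP_J_PER_MB_DAY = 0.0005  # overhead of retaining and managing the data locally (assumed)

def local_cost_j(size_mb, days_retained, reads):
    return size_mb * (reads * LOCAL_READ_J_PER_MB + days_retained * LOCAL_KEEP_J_PER_MB_DAY)

def network_cost_j(size_mb, fetches):
    return size_mb * fetches * RADIO_FETCH_J_PER_MB

SIZE_MB = 20  # e.g., map tiles for a neighborhood (assumed)
print("cache locally, 30 days, 5 reads:", round(local_cost_j(SIZE_MB, 30, 5), 2), "J")
print("stream over the radio 5 times  :", round(network_cost_j(SIZE_MB, 5), 2), "J")
```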


