Low Power Drives New Architectures

Changes in how power is addressed are lowering overall costs, improving reliability, and having a significant effect on the design process.


By Pallab Chatterjee
Power became the driving discussion at several major events last month.

The global calls for energy reduction, which have been mainstream at the political level since the early 1970s, have now become economic realities for component and systems suppliers. Chipmakers are finding that lower power makes good economic sense: lower packaging cost, lower cost of ownership for the products, and higher reliability. Most importantly, differentiation through power-reduction methods is lowering the cost of sales while increasing customer retention.

Once a power methodology is selected for the chips, it is carried through to the board, then the system, and eventually the software that runs on it. This makes changing the power method very expensive and typically keeps the customer on multiple generations of hardware and components from the same suppliers, under the same software umbrella.

The Hot Chips conference featured several dramatic network and multicore server products, all with enhanced power management. Power management formerly meant multiple rails (I/Os and cores) and sometimes a thermal shutdown. In the new systems it is pervasive, to the point that architectures are created with equal attention paid to power management and data throughput. The features shown included multiple power supplies, variable supply voltages, block-based shutdown and turn-on, new circuits to minimize turn-on/turn-off transients, alternate clock tree distribution systems, lower-power PLLs and clocks, and even new logic methods.

Fulcrum presented a 1 billion packet/second frame processor, which ended up being a case study for the applicability of non-synthesized sequential logic, or asynchronous design. The logic structure, though known in the past, had never been implemented in such a large-scale application before, and the results included not only better performance but a power envelope acceptable for the task.

Similarly, IBM, Intel, Tilera and Cavium presented next-generation many-core designs with performance targeted at application needs over the next 5 to 10 years, but with power profiles at levels similar to chips of many generations back. The general rule is that power per transistor in these designs is less than 1/100th of what it was five years ago.

On the system side, data centers are the driver. Dell addressed power reduction for its servers not just by swapping components, but also by re-qualifying the systems to work at extended temperature ranges. This means peak air temperature can be as high as 113 degrees Fahrenheit (45C) for its servers without sacrificing performance or warranty. This increase from 80 degrees Fahrenheit means there is no need to provide chilled air to cool the machines. The cost of the environmental air is generally equal to or greater than the cost of the energy to run the servers.

To keep component power down, these servers use new 30nm DDR3 DRAM from Samsung, which now operates at 1.35V, down from 1.8V. The reduced supply voltage, along with the smaller geometry used to make the devices, provides higher performance, higher density and an overall reduced power envelope. Google has found that by using virtualized machines and large amounts of DRAM on its servers, it can eliminate the power drawn by rotating media and move to mostly high-memory machines. This architecture drops power at the data center level by double-digit percentages and provides an increase in performance. The performance increase allows for new features such as "instant search" while a user is still typing the search field.
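The impact of the DRAM supply-voltage drop can be estimated from the standard CMOS dynamic-power relation P = C * V^2 * f. A minimal sketch, where only the two voltages come from the article (the capacitance and frequency are normalized placeholders, not Samsung's figures):

```python
# Estimate dynamic-power savings from dropping the DDR3 supply
# from 1.8V to 1.35V, using P = C * V^2 * f.
# C and f are normalized to 1.0; only the voltages are from the article.

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Switching power of a CMOS node: P = C * V^2 * f."""
    return capacitance_f * voltage_v**2 * frequency_hz

p_old = dynamic_power(1.0, 1.8, 1.0)
p_new = dynamic_power(1.0, 1.35, 1.0)

savings = 1 - p_new / p_old
print(f"Power at 1.35V is {p_new / p_old:.1%} of power at 1.8V "
      f"(a {savings:.1%} reduction)")
```

Because power scales with the square of the voltage, the 25% voltage reduction alone cuts switching power by roughly 44%, before counting any gains from the smaller process geometry.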

Facebook, which is new to the hardware game, took a fresh look at power and started not with the chips, the memory or even the board, but with the question, "How is the power getting to the computers?" It was able to cut power by 12% to 15% by redesigning the power supply input (the 408V to 24V signal path) and eliminating the UPS in its servers. This is a new area of high-power, high-current design that companies need to examine. Facebook also ended up changing the board designs for the base compute server modules. Information on the Facebook approach and other ways to address power can be found at OpenCompute.

Power as defined by the EDA community, which is "dynamic peak power in active mode," as well as power in idle mode, multi-mode operation, transitions, and even infrastructure, will all play key roles in next-generation low-power design.


