The Tough Metric: Energy-Efficiency

Sloshing power, dark silicon, system-level concerns, and some painful lessons about standards.


By Barry Pangrle
Jem Davies, fellow and vice president of technology at ARM, gave a keynote address on Computing Power and Energy-Efficiency Tuesday morning at the AMD Fusion Developer Summit in Bellevue, Washington. His scheduled appearance at the summit led to much speculation and rumor a while back, especially within the context of the ARM versus x86 battle for market share in the tablet arena and expectations that the battle will slosh over to encompass everything from smart phones to servers. He again showed that the expectation going forward is for energy per device scaling to lag behind the actual feature size scaling thus leading to a “dark silicon” phenomenon which I’ve previously blogged about here last September and Ed Sperling wrote about more recently here in May. Mr. Davies also referred to a term “power-sloshing” that was used by Phil Rogers, AMD Corporate Fellow during the first keynote. The idea here is that power (more appropriately, energy) is “sloshed” around on the chip and directed towards areas where it is most needed at any given time.

Two important points that Mr. Davies emphasized during his talk were that 1) it is all about the system, and 2) energy-efficiency is a key metric. It is critical that all parts of the system are considered when optimizing for power and energy-efficiency; overlooking any one aspect can wipe out much of the benefit built into the others. For example, if the hardware includes all sorts of hooks for power management but the software never uses them effectively, most of the work on the hardware side goes for naught. For the second point, he traced how the industry’s key requirement evolved: from functionality, to functionality per dollar, then functionality per (Watt * dollar), and finally functionality per (Joule * dollar). Mr. Davies stated that this last metric is very hard to optimize, but if you “crack” it you own the simpler metrics as well.
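To make that progression concrete, here is a minimal sketch in Python, using made-up numbers for two hypothetical chips, of how each successive figure of merit would be computed; none of these values come from the keynote.

# Illustrative only: hypothetical performance, cost, power, and energy numbers
# for two made-up chips, used to show how each successive metric is computed.
chips = {
    "chip_a": {"ops_per_s": 1.0e9, "dollars": 20.0, "watts": 5.0, "joules_per_task": 2.0},
    "chip_b": {"ops_per_s": 0.8e9, "dollars": 15.0, "watts": 2.0, "joules_per_task": 0.9},
}

for name, c in chips.items():
    per_dollar = c["ops_per_s"] / c["dollars"]                      # functionality / $
    per_watt_dollar = c["ops_per_s"] / (c["watts"] * c["dollars"])  # functionality / (W * $)
    per_joule_dollar = 1.0 / (c["joules_per_task"] * c["dollars"])  # tasks / (J * $)
    print(f"{name}: {per_dollar:.2e} ops/$, "
          f"{per_watt_dollar:.2e} ops/s/(W*$), "
          f"{per_joule_dollar:.4f} tasks/(J*$)")

One reading of why the last metric is the hardest: the Watt-based figure rewards throughput per unit of power drawn, while the Joule-based figure charges each completed task for all of the energy it actually consumed.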

ARM is squarely aimed at “cracking” that tough metric. He also pointed out the importance of the energy a system spends moving data. This reminded me in part of Bill Dally’s keynote at DAC in 2009, where he analyzed how much compute could be placed on a chip and broke down the energy used per computation versus the energy needed to move data to and from the computational units.
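As a rough sketch of that compute-versus-data-movement argument, the snippet below uses assumed, ballpark per-operation energies (not figures from either keynote) to show how quickly data movement can dominate the energy budget.

# Rough illustration of compute energy versus data-movement energy.
# The per-operation energies are assumed ballpark values, not quoted figures.
PJ = 1e-12  # one picojoule, in joules

energy_per_fma = 50 * PJ     # assumed: one floating-point multiply-add
energy_per_dram = 5000 * PJ  # assumed: streaming one operand from off-chip DRAM

n = 1_000_000  # multiply-adds, each needing one operand fetched from DRAM
compute = n * energy_per_fma
movement = n * energy_per_dram
total = compute + movement

print(f"compute:  {compute:.2e} J ({100 * compute / total:.0f}%)")
print(f"movement: {movement:.2e} J ({100 * movement / total:.0f}%)")

Under those assumptions, roughly 99% of the energy goes to moving the data rather than computing on it, which is why keeping data close to the computation matters so much.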

One last point that Mr. Davies touched on was standards. His stated philosophy is that it is better to have a small piece of a really big pie than a big piece of a really small pie, and that, in the end, open standards will win. This seems to be a painful lesson that, for some reason, we have to keep relearning.

–Barry Pangrle is a solutions architect for low-power design and verification at Mentor Graphics.


