Low-Power Crisis = Danger & Opportunity


If you’re a student of these things, you’ve no doubt heard the claim that in Japanese, the word “crisis” is written with the characters for “danger” and “opportunity.”

The biggest opportunity for electronics designers is also their biggest challenge: power management. Ask anyone today and they’ll tell you that minding and managing power consumption and leakage is a big concern.

How big?
At DAC this year, Sanjive Agarwala, TI Fellow and director of worldwide silicon development at Texas Instruments, and Scott Runner, vice president of advanced methodologies and low-power design at Qualcomm, said it’s not just big. It’s huge.

Illustrating the power challenge, Runner showed a plot of the relative performance increase of the CPU, GPU, and memory bandwidth over time, compared to the power savings provided by process node shrinks.

“Process scaling is insufficient to support the increase in performance that’s required to enable all these exciting new applications,” he said.

Brian Bailey, writing on this site last month, put it bluntly: “Power is now the No. 1 target in developing chips.”

Recently I interviewed Rainer Holzhaider, project manager for technology development and Technical Board member at ams AG (formerly Austria Mikro Systeme). Holzhaider has been with ams for more than 30 years and, as such, has had a ringside seat to watch, and participate in, the evolution of electronics design.

He recalled the so-called “Gigahertz Race” of the 1990s, during which each new technology generation enabled a doubling of microprocessor speed.

He said: “This suddenly stopped around 2005 at a process speed of 3-4 GHz. The reason? Power. Significant processor speed improvements at the expense of power simply became unfeasible. So we accelerated the concepts of parallelization, which of course requires software partitioning as well.”

But perspectives are changing in important ways; at least that was my take-away from last week’s keynote sessions at ARM TechCon.

What do I mean by that?
Five or more years ago, the conversation got noisy over power management on silicon. Finer and finer geometries were driving up both static leakage and dynamic power consumption. The onus rested squarely on the chip architect’s shoulders.

But today the conversation has become—in my humble opinion—a little more realistic: It’s the village’s problem.

And so we begin to work together to solve the problem, because there is no silver bullet aimed at a single point of failure.

That occurred to me while listening to Martin Fink, CTO of HP Labs, and Simon Segars, ARM CEO, during their separate keynotes at ARM TechCon. They each talked about the Internet of Things, and their individual messages were in sync: The growth of data in the mobile network is unsustainable at north of 50% a year—and that doesn’t even take into consideration the expected explosion of IoT data in the coming years.

Given that computing and networking architectures are 50 or more years old, patching problems is a non-starter. But rethinking the network isn’t. To help the network scale, we need to rethink memory, interconnect, power and other issues right now, they said.

So solving the power consumption problem isn’t one team’s problem. As both Runner and Agarwala agreed at DAC, it’s a system-design problem—not a block, IC, or board problem.

It’s human nature to point fingers and deflect blame when a problem arises. But it’s also human nature to work together on big problems when everybody agrees it’s no one individual’s fault and that doing nothing would be the bigger problem.


