A Broad, Effective Approach to Optimizing for Power

There are more ways to improve power than just adding finFETs


As an industry, we talk a lot about the challenges of power-aware design and the accompanying issues at leading-edge nodes. There’s no denying that some of these challenges are tough, but if we’re honest, there are plenty of opportunities we can exploit right now to improve power in our designs.

You’ve heard the saying “death by a thousand cuts”? Well, when it comes to grappling with power in today’s SoC designs, it’s optimization by a thousand methods.

In many ways, our collective situation was made much better by the introduction of finFETs over the past couple of years, which has helped push the explosion in leakage out of the headlines, at least for a little while.

But there are additional ways to improve power, ways that in some instances require engineers to step out of their comfort zones and in other instances to rethink old ways of designing.

Take advanced process nodes. There is a lot of granularity in today’s finFET processes, including multiple Vt options, channel lengths, bit cells and sometimes even poly pitches. When you combine that with the higher performance and lower leakage of finFETs, you get the best of both worlds.
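To make the idea of exploiting that granularity concrete, here is a minimal sketch of the classic multi-Vt swap: for each path, pick the highest-Vt (lowest-leakage) flavor that still meets the timing target. The flavor names and the relative leakage/delay numbers are purely illustrative placeholders, not data from any real library.

```python
# Hypothetical Vt flavors with made-up relative leakage and delay figures.
# Real libraries publish these numbers per cell, per corner, in Liberty (.lib) files.
VT_FLAVORS = [
    # (name, relative_leakage, relative_delay) -- placeholder values only
    ("HVT",  1.0,  1.30),
    ("SVT",  3.0,  1.10),
    ("LVT",  8.0,  1.00),
    ("ULVT", 20.0, 0.90),
]

def pick_flavor(max_rel_delay):
    """Return the lowest-leakage flavor whose delay meets the target.

    max_rel_delay: largest acceptable relative delay for the path.
    """
    for name, leakage, delay in VT_FLAVORS:  # ordered lowest leakage first
        if delay <= max_rel_delay:
            return name, leakage
    raise ValueError("no flavor meets the timing target")

# A relaxed path tolerates slower cells and burns far less leakage than a
# critical path forced onto the fastest flavor.
print(pick_flavor(1.35))   # -> ('HVT', 1.0)
print(pick_flavor(0.95))   # -> ('ULVT', 20.0)
```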

The beauty of the finFET process is that you can reduce your usable operating voltage to the 0.5V range, which gives you a powerful knob to turn to reduce overall power. This also helps to counter the higher “fin” capacitance of finFET devices, which is the major reason that the dynamic power scaling we got node after node with planar technologies isn’t the same with finFETs. But if your design can take advantage of the lower usable voltage, then, because dynamic power is proportional to the square of the supply voltage, even a 100mV supply reduction results in a considerable reduction in dynamic power without the big impact on delay seen in older nodes running at lowered voltages.
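As a quick back-of-the-envelope check of that square law, the sketch below evaluates the standard switching-power estimate P = α·C·V²·f before and after a 100mV supply drop. The activity, capacitance and frequency values are arbitrary placeholders; only the ratio between the two cases matters.

```python
def dynamic_power(alpha, c_eff, vdd, freq):
    """Classic switching-power estimate: P = alpha * C * Vdd^2 * f."""
    return alpha * c_eff * vdd ** 2 * freq

# Arbitrary illustrative numbers -- only the before/after ratio matters.
ALPHA = 0.15        # switching activity factor
C_EFF = 1.0e-9      # effective switched capacitance (F), placeholder
FREQ  = 1.0e9       # clock frequency (Hz), placeholder

p_nominal = dynamic_power(ALPHA, C_EFF, 0.75, FREQ)   # assumed 0.75V nominal supply
p_reduced = dynamic_power(ALPHA, C_EFF, 0.65, FREQ)   # 100mV lower supply

print(f"dynamic power saving: {100 * (1 - p_reduced / p_nominal):.1f}%")
# -> roughly a 25% reduction from a 13% drop in Vdd, thanks to the V^2 term
```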

On the IP side, we traditionally dealt with 7-, 9- or 12-track standard cells. Now, we determine track height by the number of p and n fins we can fit into a cell, typically with options of 3/3, 4/4 or 5/5 usable fins (each pair indicating the number of p and n fins). This results in three unique track-height libraries that continue the power and performance granularity on the standard cell side. On the memory side, you can use the different bit cells available to generate instances for high performance or for low power, and combine them with different memory periphery options selectable in the compiler GUI.
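One simple way to picture those three track-height options is as a lookup from fin count to a rough drive-versus-leakage trade-off. The relative figures below are illustrative assumptions, not characterization data from any real library; the real trade-offs come from the library characterization itself.

```python
# Illustrative mapping from usable p/n fin count per cell to library intent.
# rel_drive and rel_leakage are placeholder ratios for the sketch only.
TRACK_LIBRARIES = {
    (3, 3): {"name": "low-power",        "rel_drive": 0.7, "rel_leakage": 0.6},
    (4, 4): {"name": "balanced",         "rel_drive": 1.0, "rel_leakage": 1.0},
    (5, 5): {"name": "high-performance", "rel_drive": 1.3, "rel_leakage": 1.5},
}

def choose_library(min_rel_drive):
    """Pick the lowest-leakage library that still meets a drive-strength floor."""
    candidates = [
        lib for lib in TRACK_LIBRARIES.values() if lib["rel_drive"] >= min_rel_drive
    ]
    return min(candidates, key=lambda lib: lib["rel_leakage"])["name"]

print(choose_library(0.6))   # -> 'low-power'
print(choose_library(1.2))   # -> 'high-performance'
```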

Then there are voltage domains. At 65nm, we had one voltage domain with perhaps a couple of odd corners. At 28nm there were three or four voltage domains, while in finFETs there are five or more voltage domains. This puts the onus on IP providers to spend a lot of compute resources and characterization effort on their IP, but it gives designers a huge amount of freedom.
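To see why that characterization effort grows so quickly, here is a rough count of library characterization points as voltage domains multiply. The process-corner and temperature counts are assumptions chosen only to illustrate the multiplicative growth, not figures from any particular foundry flow.

```python
def corner_count(voltage_domains, process_corners=5, temperatures=3):
    """Rough count of PVT characterization points: V x P x T.

    The default process-corner and temperature counts are assumptions
    used purely to show how the total multiplies out.
    """
    return voltage_domains * process_corners * temperatures

# Voltage-domain counts follow the rough progression described above.
for node, domains in [("65nm", 1), ("28nm", 4), ("finFET", 5)]:
    print(f"{node}: {domains} domain(s) -> {corner_count(domains)} PVT corners")
# 65nm: 15 corners, 28nm: 60, finFET: 75 -- before adding Vt flavors or track heights
```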

One issue that gets framed as a challenge is the “process bifurcation” we’re experiencing today. On the one hand there is the traditional demand for leading-edge process development; on the other, there is enormous demand for more established processes such as 55nm, where IoT designs can exploit cost benefits and process maturity. Can we as an industry optimize effectively for power on both sides of that process development?

This is actually less of an issue than some make it out to be. The lessons of advanced node development have, if you will, been “back annotated” to established nodes to optimize for power. Techniques that once were exotic or costly—for example, read-write assist for bit cells at low-voltage operation—are more standard now. At the same time, EDA vendors have done a great job of delivering better tools for power optimization in recent years.

Then there is work on the physical IP side, particularly at ARM, to build a closer link between the processor and physical IP for better power optimization. This includes developing custom memories and custom logic to improve performance at a given power or to reduce power at a given performance.

From where I sit, however, the biggest challenges we face now as designers are, in a way, human. Do we, as engineers, think enough about using all the power knobs available to us to optimize each of our designs?

Power saving doesn’t come automatically or for free. Designers need to incorporate it throughout their system design. The complexity of today’s leading-edge designs has to do not only with billions of transistors, but also with how to use all the features at our disposal. This is a big challenge.

The other human issue is engineering culture. We’ve gotten far more specialized as engineers and engineering teams over the past three decades. But today’s power-optimization efforts require a holistic way of thinking about what parts of the design affect what. We need to cultivate more “tall engineers” who can think about and address all these areas of potential power optimization.


