Moving Targets

The future success of DVFS and near-threshold computing looks dubious—at least for SoCs.


There is a very close correlation between power and complexity in an SoC. The more functionality that is required to meet market demands, the greater the need to push to the next process node in order to fit it all onto a single die. The result is more power density, and more attempts to limit the effects of that density with power islands, different voltages, gating, and a variety of other techniques.

Two of those techniques have received a fair amount of attention. One is dynamic voltage and frequency scaling (DVFS). Rather than running at a fixed voltage, a chip can raise and lower its supply voltage and clock frequency as necessary, depending upon the compute task, to maximize efficiency. This already has been proven in processors, where the technique is relatively well established.
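The idea can be sketched in a few lines. This is a minimal, hypothetical DVFS policy loop—the operating points and numbers are illustrative assumptions, not any vendor's governor—showing why scaling voltage along with frequency pays off: dynamic power scales roughly with V²·f, so dropping both at light load saves far more than dropping frequency alone.

```python
# Hypothetical voltage/frequency operating points: (frequency in MHz, voltage in V).
# Values are illustrative, not taken from any real part.
OPERATING_POINTS = [
    (500, 0.80),
    (1000, 0.95),
    (1500, 1.05),
    (2000, 1.15),
]

def select_operating_point(utilization):
    """Pick the slowest operating point that can still service the load.

    utilization: fraction (0.0-1.0) of the highest frequency the
    workload currently needs.
    """
    max_freq = OPERATING_POINTS[-1][0]
    required = utilization * max_freq
    for freq, volt in OPERATING_POINTS:
        if freq >= required:
            return freq, volt
    return OPERATING_POINTS[-1]

def relative_dynamic_power(freq, volt):
    """Dynamic power relative to the top operating point (P is proportional to V^2 * f)."""
    top_f, top_v = OPERATING_POINTS[-1]
    return (volt ** 2 * freq) / (top_v ** 2 * top_f)
```

At 20% utilization this policy selects the 500 MHz / 0.80 V point, which under the V²·f model draws roughly 12% of peak dynamic power—far better than the 25% that frequency scaling alone would deliver.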

A second technique is near-threshold computing, in which a processor operates at supply voltages just above the transistor threshold—performing compute operations before it is fully powered up, and possibly shutting down before it is fully operational. Intel executives have talked about this on multiple occasions, and it may be a useful tool in very regular layouts.
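Why near-threshold operation is attractive at all can be shown with a toy energy model. The constants below are illustrative assumptions, not silicon data: dynamic energy per operation falls as V², but circuits slow down sharply near the threshold voltage, so leakage energy per operation grows—producing a minimum-energy supply point somewhat above threshold and well below nominal.

```python
# Toy model (illustrative constants, not measured silicon) of the
# near-threshold minimum-energy point.
VTH = 0.3      # assumed threshold voltage, V
C = 1.0        # normalized switched capacitance
I_LEAK = 0.05  # normalized leakage current

def energy_per_op(v):
    """Total energy per operation at supply voltage v (normalized units)."""
    e_dyn = C * v ** 2  # dynamic energy scales with V^2
    # Delay blows up as v approaches VTH (rough alpha-power-law shape),
    # so each operation takes longer and leaks for longer.
    delay = v / (v - VTH) ** 2 if v > VTH else float("inf")
    e_leak = I_LEAK * v * delay  # leakage power x time per operation
    return e_dyn + e_leak

# Scan supply voltages from 0.35 V to 1.20 V for the minimum-energy point.
best_v = min((v / 100 for v in range(35, 121)), key=energy_per_op)
```

With these assumed constants the minimum lands near 0.55 V—above the 0.3 V threshold but roughly half the nominal supply—which is the basic bet near-threshold computing makes, and also hints at why it works best in very regular layouts where every block can tolerate the slowdown.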

But these approaches are becoming far less interesting for SoC developers, including those at the advanced process nodes. The reason is the growing complexity of power-related issues. It’s hard enough to come up with power estimates and power budgets for components in an SoC without having to build in sliding scales on top of those numbers, and it’s almost impossible to achieve good coverage and signoff on a chip whose power numbers form a distribution rather than a set of hard values tied to well-defined modes of operation—on, off, and various stages of sleep.

The solutions that seem to be gaining the most traction for energy efficiency focus more on partitioning of processing and functions into independent subsystems—or at least quasi-subsystems—so there is less contention for memory, and less difficulty in verifying and characterizing them. That also makes it simpler to divide and conquer the engineering for a complex SoC, and it provides more manageable fixed numbers for dynamic power, leakage, and thermal modeling.

The focus on power clearly isn’t decreasing. In fact, quite the opposite is true. But the best choices for bringing a working chip to market quickly, on budget, and within acceptable power limits are shifting. While some of the most advanced techniques make sense at 40nm, or with certain types of chips, the advent of fully depleted SOI at 28nm and finFETs at 20nm renders them far less attractive in comparison. They can severely impact time to market and greatly increase the number of unknowns that make verification more difficult.

It’s not that engineers can’t make these miraculous advanced power-saving techniques work. It’s that they can’t make them all mesh on a densely packed and incredibly complex SoC in the amount of time they’re allotted. And if something has to go, the logical choice is whatever adds the most uncertainty.

—Ed Sperling


