Design For Variability

Averaging is complex and requires a different approach, but the savings in margin, power and performance can be significant.


By Ed Sperling
Faced with shrinking margins, manufacturing process fluctuations that could mean one more or one less atom in a transistor, and proximity issues in layout, the most advanced chipmakers have begun designing for variability.

Rather than working with fixed numbers for voltage, power and area, DFV treats them as statistical averages. While this approach still includes some level of margin, it actually can reduce the total margin in an SoC. But it also requires a whole different way of approaching a design, which is why the companies that are working with DFV have developed their own tools. So far there are no commercially available tools for DFV.

The law of averages
DFV is not a new concept, even though it has not been commercialized in tools. In fact, it’s been talked about for the better part of two decades. But at 32/28nm and beyond it may be the most cost-effective approach for chip development. Companies like Intel and STMicroelectronics already are developing chips using this approach, with the likelihood that it will become more standardized.

Several technologies do exist in this area, although much of this work is considered deep research at the moment. The most prominent of the tools that have surfaced is Razor, which is an open-source version of dynamic voltage scaling. The stated purpose of Razor is dynamic detection and correction of circuit timing errors. Other approaches that have begun filtering into the market over the past couple of years include adaptive timing, and some IP is now being offered with defined parameters for acceptable power.
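
A minimal sketch of how a Razor-style loop behaves, with an invented error-rate model and invented voltage numbers (nothing here reflects the actual Razor circuits): timing-error detection is abstracted into a probability that rises as the supply voltage drops, and the controller raises the voltage when the observed error rate exceeds a target and lowers it when there is headroom.

# Toy Razor-style adaptive voltage loop (illustrative only, not the actual
# Razor implementation). Error detection is modeled as a probability that
# grows as vdd falls; the controller holds the error rate near a target.
import random

V_MIN, V_MAX, V_STEP = 0.70, 1.10, 0.01     # assumed voltage range, in volts
TARGET_ERROR_RATE = 0.001                   # assumed acceptable error rate

def timing_error_probability(vdd):
    # Hypothetical model: errors appear once vdd drops below about 0.85 V.
    return max(0.0, 0.85 - vdd) * 0.5

def razor_step(vdd, cycles=10_000):
    # Count "detected" timing errors over a window of cycles, then adjust vdd.
    errors = sum(random.random() < timing_error_probability(vdd) for _ in range(cycles))
    rate = errors / cycles
    if rate > TARGET_ERROR_RATE:
        return min(V_MAX, vdd + V_STEP), rate   # too many errors: raise voltage
    return max(V_MIN, vdd - V_STEP), rate       # headroom available: save power

vdd, rate = 1.10, 0.0
for _ in range(50):
    vdd, rate = razor_step(vdd)
print(f"settled supply: {vdd:.2f} V, last observed error rate: {rate:.4f}")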

But while this may be the subject of research, at least some of this technology is showing up in everyday products. Intel now offers burst mode on its processors, for example, which requires sensors to manage the variable power output to make sure the chip doesn’t overheat when additional current is applied to a single core. And as power becomes more of a consideration, so will DFV.
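
As a purely illustrative sketch (not Intel's actual control algorithm, and with invented numbers), the code below shows the basic idea of sensor-guided burst behavior: a loaded core is allowed to climb above its nominal frequency while an on-die thermal sensor reports headroom, and is stepped back down as the junction temperature approaches its limit.

# Hypothetical burst-mode controller: boost frequency while the thermal
# sensor shows headroom, throttle back once the temperature limit nears.
T_LIMIT_C = 95.0          # assumed junction temperature limit
F_NOMINAL_GHZ = 2.0       # assumed base frequency
F_BURST_GHZ = 2.6         # assumed maximum burst frequency
F_STEP_GHZ = 0.1

def next_frequency(freq_ghz, sensor_temp_c):
    # Step frequency up while thermal headroom exists, down when it does not.
    if sensor_temp_c < T_LIMIT_C - 5.0 and freq_ghz < F_BURST_GHZ:
        return freq_ghz + F_STEP_GHZ
    if sensor_temp_c >= T_LIMIT_C and freq_ghz > F_NOMINAL_GHZ:
        return freq_ghz - F_STEP_GHZ
    return freq_ghz

# Example run: the single loaded core bursts, overheats, then throttles.
freq, temp = F_NOMINAL_GHZ, 70.0
for _ in range(20):
    freq = next_frequency(freq, temp)
    temp += (freq - F_NOMINAL_GHZ) * 8.0 - 1.0   # crude thermal model
print(f"final frequency: {freq:.1f} GHz, temperature: {temp:.1f} C")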

“DFV and design for low power are the same thing,” said Jan Rabaey, professor at the University of California at Berkeley and head of the Berkeley Wireless Research Center. “The goal is to try to let the chip address a lot of these issues and you measure and adjust accordingly.”

One of the interesting side notes of this approach is that it negates the use of corner cases. Designs built on statistical averages have their own built-in margin, which has to be characterized much more accurately than in corner-based designs that work off a worst-case scenario. But even though those margins exist, in total they consume less overhead in terms of power and performance than a corner-based design.
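
A minimal numerical sketch of that margin argument, using made-up per-stage delay figures: a corner-based bound assumes every stage of a path sits at its slow corner at once, while a statistical bound takes the mean path delay plus three standard deviations. Because independent stage-to-stage variations partially cancel, the statistical bound, and therefore the margin that has to be budgeted, comes out noticeably tighter.

# Compare a worst-case corner bound against a Monte Carlo statistical bound
# for one timing path. All numbers are illustrative assumptions.
import random, statistics

N_STAGES = 50                        # assumed logic depth of the path
NOMINAL_PS, SIGMA_PS = 20.0, 2.0     # assumed per-stage delay and variation

corner_bound = N_STAGES * (NOMINAL_PS + 3 * SIGMA_PS)   # slow corner everywhere

samples = []
for _ in range(10_000):              # Monte Carlo over independent stage variations
    path = sum(random.gauss(NOMINAL_PS, SIGMA_PS) for _ in range(N_STAGES))
    samples.append(path)
statistical_bound = statistics.mean(samples) + 3 * statistics.pstdev(samples)

print(f"corner-based bound:    {corner_bound:.0f} ps")
print(f"statistical (3-sigma): {statistical_bound:.0f} ps")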

“We’re starting to use variability for average performance,” said Rabaey. “It’s a natural trend to go that way.”

Natural, perhaps. But also very difficult, particularly without the kind of automation that has made digital design so efficient for the past four decades. At the moment, most of the work in this area is being done by large IDMs and researchers. Big EDA companies are watching the trend, but a dozen interviews held at DAC show that so far they either have not seen a profit opportunity or they’re not talking about it other than to concede that it’s difficult stuff. That leaves companies using this approach on their own.

Indavong Vongsavady, digital design solutions and pilot project director at STMicroelectronics in Crolles, France, said the big problem his company has been grappling with is process variation at advanced nodes. A transistor at 22nm, for example, will behave differently with one extra atom of metal—or one less. And with millions of transistors on a chip, it’s almost impossible to ensure every atom will be where it’s supposed to be and that power and performance will not suffer or cause problems.

“Process variation has pushed us to design for variability,” said Vongsavady. “We take that into consideration with our IP.”

ST has begun using this approach at 32nm, which it expects to begin ramping in the second half of 2011. And numerous sources say all of the major IDMs are now experimenting with this model.

New math
Designing chips has always been about math, but that math is shifting from geometry to a combination of geometry and purely mathematical models, where values are floating averages within a precisely defined set of parameters rather than fixed numbers with a wider range of accepted limits.

This is increasingly true for IP, place and route, synthesis and software, but it also is true at the manufacturing level, where computational scaling will likely supplement restrictive design rules with acceptable limits on variation. In the future the parameters will be narrower, but there will be far more of them to contend with, much further up in the design cycle.
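
One way to read "acceptable limits on variation" is as a capability-style check: a parameter is described by a distribution rather than a single fixed value, and the rule becomes a test that the distribution fits inside precisely defined limits. The sketch below is purely illustrative; the threshold-voltage window and sample values are invented.

# Check that a parameter's measured distribution stays inside defined limits,
# rather than comparing a single fixed value against a design rule.
import statistics

LOWER_V, UPPER_V = 0.28, 0.42        # assumed acceptable threshold-voltage window
vth_samples = [0.33, 0.35, 0.36, 0.34, 0.37, 0.35, 0.36, 0.34, 0.35, 0.36]

mean = statistics.mean(vth_samples)
sigma = statistics.pstdev(vth_samples)

# Require the +/-3-sigma spread of the floating average to fit inside the window.
within_limits = (mean - 3 * sigma) >= LOWER_V and (mean + 3 * sigma) <= UPPER_V
print(f"mean={mean:.3f} V, sigma={sigma:.3f} V, within limits: {within_limits}")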


