Dealing With Variability

Near-threshold and sub-threshold operation will require lots of work and ingenuity on the design side.


By Barry Pangrle
Process, voltage and temperature, a.k.a. PVT, are well known to designers who are working to complete “signoff” for their designs. In order for a design to be production-ready, it’s necessary to ensure that the design is going to yield parts at a sufficiently high percentage for profitability and that it will still operate within the expected variation of the process and environment.

In the quest for ever-more energy-efficient designs, lowering operating voltages is one of the most promising approaches. Since dynamic power is proportional to the square of the supply voltage, it may be surprising that Vdd has not really dropped very much for most designs since we crossed the 100nm threshold almost a decade ago. For performance reasons, Vdd has generally been set at roughly 3x-4x the threshold voltage. To keep leakage in check, threshold voltages have stopped scaling, so the question is: how much performance do we lose by continuing to scale down Vdd with relatively unchanged threshold voltages, and what other factors come into play?
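Since dynamic power scales as CV²f, a quick back-of-the-envelope calculation shows why the pull toward lower supplies is so strong. The short Python sketch below compares dynamic power at a nominal ~1.1 V supply against a near-threshold 0.32 V supply; the capacitance, activity factor and frequencies are purely illustrative placeholders, not numbers from any of the designs discussed here.

```python
# Back-of-the-envelope dynamic power comparison: P_dyn = alpha * C * Vdd^2 * f
# All numbers below are illustrative placeholders, not data from the Intel paper.

def dynamic_power(c_switched, vdd, freq_hz, activity=0.2):
    """Dynamic power (W) for switched capacitance (F), supply (V) and clock (Hz)."""
    return activity * c_switched * vdd**2 * freq_hz

C = 1e-9  # 1 nF of total switched capacitance (assumed)

nominal      = dynamic_power(C, 1.10, 1.0e9)   # ~1.1 V at 1 GHz
near_vt      = dynamic_power(C, 0.32, 1.0e9)   # 0.32 V, same frequency
near_vt_slow = dynamic_power(C, 0.32, 100e6)   # 0.32 V at a reduced 100 MHz

print(f"Nominal supply:    {nominal * 1e3:6.1f} mW")
print(f"0.32 V, same f:    {near_vt * 1e3:6.2f} mW  (~{nominal / near_vt:.0f}x lower)")
print(f"0.32 V, 100 MHz:   {near_vt_slow * 1e3:6.3f} mW (~{nominal / near_vt_slow:.0f}x lower)")
```

Even before any frequency scaling, the voltage term alone buys roughly an order of magnitude in dynamic power, which is why the performance and robustness questions below matter so much.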

In a paper published in the January 2009 IEEE Journal of Solid-State Circuits, Himanshu Kaul et al. from Intel described a 320 mV implementation of a motion estimation accelerator in 65nm CMOS. Some of the material was also presented by Greg Taylor at the 2009 EPEPS and can be found here.

Figure 1. Frequency variation with temperature. Source: Intel

Figure 1 shows how performance is affected by voltage and temperature. Note that when operating at 1.2 V, the difference in Fmax from its 50°C value is only ±5% as the temperature varies from 0°C to 110°C, whereas at 0.320 V the variation is ±2x. That is a huge difference to have to compensate for during the design process, and far too large to simply “margin” into the design. Another point to take into account here is the slope of the curves and how much steeper they are at lower voltages. Eyeballing the graph for a rough estimate, a ±0.05 V shift around 0.320 V at 50°C also leads to roughly the same ±2x variation in performance. These designs are incredibly sensitive to any voltage fluctuations.
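To see why the curves steepen so dramatically, it helps to remember that near and below threshold the drive current, and with it Fmax, depends roughly exponentially on the gate overdrive. The toy Python model below uses an effective slope parameter chosen purely for illustration (about 72 mV per factor of two, not a value fitted to Figure 1) along with an assumed threshold voltage, just to reproduce the kind of ±2x swing for a ±0.05 V supply shift that the eyeball estimate suggests.

```python
import math

# Toy near/sub-threshold frequency model: Fmax ~ exp((Vdd - VT) / S)
# S and VT are illustrative assumptions, not values extracted from Figure 1.
S = 0.05 / math.log(2)   # effective slope: a 50 mV shift changes Fmax by 2x
VT = 0.30                # assumed effective threshold voltage

def fmax_rel(vdd):
    """Relative Fmax (arbitrary units) under the toy exponential model."""
    return math.exp((vdd - VT) / S)

base = fmax_rel(0.32)
for vdd in (0.27, 0.32, 0.37):
    print(f"Vdd = {vdd:.2f} V -> Fmax vs. 0.32 V: {fmax_rel(vdd) / base:.2f}x")

# At a nominal 1.2 V the same 50 mV shift moves Fmax by only a few percent,
# because drive current there grows roughly linearly, not exponentially, with Vdd.
```

The same exponential dependence is what makes temperature hurt so much at low voltage: a modest shift in threshold voltage with temperature gets amplified into a large swing in Fmax.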

Figure 2. Frequency variations across fast-slow process skews. Source: Intel

So Figure 1 gives us an indication of the voltage and temperature impact, but we haven’t yet looked at process variation. Figure 2 shows how process variation affects performance. Again we see that the impact of variation on performance is “magnified” at lower voltages. Process variation when operating at 1.2 V accounts for a ±18% change in performance, whereas at 0.320 V it accounts for another ±2x difference in Fmax. It should be clearer now why everyone wasn’t immediately rushing to run at ultra-low voltages. Designing complex chips is hard, and designing them to run at really low voltages is harder.
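Stacking these observations gives a feel for how quickly the corner-to-corner window opens up at low voltage. The sketch below simply multiplies the rough variation factors read off Figures 1 and 2, treating temperature and process as independent, worst-on-worst multipliers; that is a simplification made only to show the scale of the problem, not how signoff corners are actually constructed.

```python
# Rough worst-on-worst corner window using the approximate factors quoted above.
# Treating the temperature and process effects as independent multipliers is a
# simplification intended only to illustrate the scale of the problem.

def corner_window(temp_factor, process_factor):
    """Ratio of best-case to worst-case Fmax if both effects stack."""
    fast = temp_factor * process_factor
    slow = 1.0 / (temp_factor * process_factor)
    return fast / slow

window_1p2v  = corner_window(1.05, 1.18)  # ~±5% temperature, ~±18% process at 1.2 V
window_0p32v = corner_window(2.0, 2.0)    # ~±2x temperature, ~±2x process at 0.320 V

print(f"Fmax spread across corners at 1.2 V:    ~{window_1p2v:.1f}x")
print(f"Fmax spread across corners at 0.320 V:  ~{window_0p32v:.0f}x")
```

A design that would have to be statically margined for a spread on the order of 10x in achievable frequency is effectively unusable, which is one more reason adaptive techniques get so much attention at these voltages.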

How about the promise of new process technologies? The most radically different process technology in high-volume production today is Intel’s 22nm Tri-Gate CMOS, and I’ve referenced the figure below in earlier articles (here and here).

Figure 3. 22nm Tri-Gate vs. 32nm Planar. Source: Intel

Figure 3 certainly offers hope for reduced voltage levels in the newer process technology. Perhaps unsurprisingly for such a radically new process, though, there are questions about variability. A blog on the GSS site includes diagrams and simulations based in part on the TEMs here from Dick James’ Chipworks blog.

Figure 4 below, from Chipworks, clearly shows the process variation between transistors, with some being more rectangular and others more triangular. According to the GSS simulations and Professor Asen Asenov, the rectangular transistors perform better and, to my mind, look much more like a true “Tri-Gate” transistor. Professor Asenov is also quoted here as saying, “I think Intel just survived at 22nm. I think bulk FinFETs will be difficult to scale to 16nm or 14nm. I think that SOI will help the task of scaling FinFETs to 16nm and 11nm.” So before victory is declared, it appears there is plenty of work to keep the process engineers busy going forward. The additional complexity will certainly impact the economics of these newer nodes as well.

Figure 4. TEM Image of NMOS Gate and Fin Structure. Source: Chipworks

The road to near-threshold and sub-threshold operation also requires a lot of work and ingenuity on the design side. Circuits can be designed to better withstand variation, but often at a cost in area, performance, or both. Of course, if the design techniques reduce the need for margining, then the practical usable performance should improve. There will be a lot more study in these areas to help bring near- and sub-threshold designs to market and to deal with variability. From a process standpoint, it appears that variability will continue to be an important issue for some time.
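One design-side direction hinted at above is to close the loop at runtime instead of margining statically, for example by tracking the silicon with an on-die delay monitor and nudging Vdd just enough to hold a target frequency. The sketch below is a purely conceptual adaptive voltage scaling loop; read_delay_monitor and set_supply are hypothetical placeholders standing in for a sensor and a regulator interface, not any specific published scheme.

```python
# Conceptual adaptive voltage scaling (AVS) loop -- illustrative only.
# read_delay_monitor() stands in for an on-die sensor such as a ring oscillator
# or a critical-path replica; set_supply() stands in for a regulator interface.
# Both are hypothetical placeholders, not real APIs.

TARGET_SLACK = 0.05    # keep ~5% timing slack instead of a worst-case static guardband
V_STEP = 0.005         # assumed 5 mV regulator step
V_MIN, V_MAX = 0.28, 0.40

def avs_step(vdd, read_delay_monitor, set_supply):
    """One control-loop iteration: raise Vdd when slack is tight, lower it when generous."""
    slack = read_delay_monitor()            # fraction of the clock period left over
    if slack < TARGET_SLACK:
        vdd = min(vdd + V_STEP, V_MAX)      # losing margin: bump the supply
    elif slack > 2 * TARGET_SLACK:
        vdd = max(vdd - V_STEP, V_MIN)      # comfortable margin: claw back some power
    set_supply(vdd)
    return vdd
```

The appeal is exactly the point made above: the less of the variation that has to be absorbed by a fixed guardband, the more of the nominal performance and energy savings actually reaches the product.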

—Barry Pangrle is a solutions architect for low power design and verification at Mentor Graphics.


