Dealing With Sub-Threshold Variation

The value and challenges of circuits being not quite on or off.

Chipmakers are pushing into sub-threshold operation in an effort to prolong battery life and reduce energy costs, adding a whole new set of challenges for design teams.

While process and environmental variation long have been concerns for advanced silicon process nodes, most designs operate in the standard “super-threshold” regime. Sub-threshold designs, in contrast, have unique variation considerations that require close attention — and even architectural differences — in order to manage that variation.

“Sub-threshold analysis is very challenging, principally because of the variation component itself,” said Brandon Bautz, senior product management group director, digital and signoff group at Cadence.

Sub-threshold design has value both for battery-powered devices and the data center. Dan Cermak, vice president of architecture and product planning at Ambiq, spoke to the battery-powered applications: “We’re seeing new form factors where they’re trying to make them sleeker and more elegant, and that has an impact on reducing battery capacity,” he said.

Meanwhile, Priyank Shukla, staff product marketing manager at Synopsys, has a data-center focus: “We are saving megawatts,” he said.

Sub-threshold designs may involve a mix of sub-threshold, near-threshold, and super-threshold domains within the same design. They have high sensitivity to variations in voltage, temperature, and process, although some are easier to manage than others. Because sub-threshold designs are far less common, organizations may need to develop their own methodologies to ensure silicon that works across the broad range of possible environments and processes.

That extra effort can have a significant payoff, however. Shukla claims that power can be cut to anywhere from one-fifth to one-fifteenth of what a standard super-threshold design would consume. Along with the power reduction, however, comes a performance reduction. If performance degrades too far, the extra time spent completing a task will outweigh the lower power, resulting in a net increase in energy consumed.
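A back-of-the-envelope calculation makes that tradeoff concrete. The sketch below uses purely illustrative numbers; the power levels and runtimes are assumptions, not measured figures from any vendor:

```python
# Energy = power x time for a fixed task. All numbers are hypothetical,
# chosen only to show when slower-but-lower-power stops paying off.

def task_energy_joules(power_w: float, runtime_s: float) -> float:
    """Energy consumed to complete one task."""
    return power_w * runtime_s

baseline = task_energy_joules(power_w=1.0e-3, runtime_s=1.0)    # super-threshold: 1 mJ
win = task_energy_joules(power_w=1.0e-4, runtime_s=5.0)         # 10x less power, 5x slower: 0.5 mJ
loss = task_energy_joules(power_w=1.0e-4, runtime_s=15.0)       # 10x less power, 15x slower: 1.5 mJ

print(baseline, win, loss)   # 0.001 0.0005 0.0015 -> the last case costs more energy, not less
```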

Staying below or near the threshold
“Standard” digital chip design involves transistors that in theory can exist in one of two states, off or on. There’s a threshold voltage, usually somewhere in the vicinity of 0.5V (but dropping with advancing nodes), that determines where the transition between on and off lies. Above the threshold means on, below the threshold means off.

It’s not quite so simple, of course, because below the threshold the transistor isn’t completely off. Depending on where the voltage is, leakage still occurs, and it’s long been a process and device goal to minimize that leakage.

But that not-quite-off behavior can be leveraged, because there are degrees of not-quite-off. Compared with standard design, the currents are much smaller — you’re playing with what would otherwise be considered leakage. But there are exponential differences in the amounts of leakage, and those differences allow designs to be created that use more or less leakage to establish the 0 and 1 states instead of simpler notions of on and off.

This regime is called “sub-threshold” because all voltages fall below the transistor’s threshold voltage. To sub-threshold designers, standard circuits are “super-threshold,” because their operating voltages sit well above that threshold.

Fig. 1: Simplified I/V curves for transistors, with variation. On the left, the sub-threshold regime all indicates a logic low for digital designs. On the right, both the logic high and low are in the sub-threshold regime. Source: Bryon Moyer/Semiconductor Engineering

There’s yet one more regime where voltages do go above threshold, but not by much. These are referred to as near-threshold designs. Sub-threshold and near-threshold designs, because they deal with much lower currents, can be used to reduce the energy consumption of devices. But complete devices often employ a mix of sub-, near-, and super-threshold circuits.

Sub-threshold chips often must interface with super-threshold chips, so part of the need for super-threshold circuits is to allow that interface. Ambiq, for example, known for its sub-threshold designs, uses 1.8V both for I/O and for wake-up logic that needs to be alive before the various power regulators in the chip stabilize. In addition, in some modes, some of its regulators work in the super-threshold regime.

Analog circuits also may use sub-threshold circuits in select areas in order to reduce overall power. In fact, sub-threshold analog circuits appear to have been around longer than their digital counterparts. “I started analog design 13 years back, and we have been doing this since then,” said Shukla. He said that Synopsys’ SerDes designs, in particular, have portions of sub-threshold circuitry to keep the power in check.

Exponential sensitivities
While numerous effects reflect process or environmental variation, they all boil down to one thing — they affect the threshold voltage. And when you’re operating around that threshold, that can have a dramatic effect on behavior. “The transistor current depends exponentially on the gate and source voltage,” said Shukla. “A slight variation in threshold voltage leads to an exponential increase in the current.”
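As a rough illustration of that exponential sensitivity, a simple textbook sub-threshold current model can be plugged with numbers. The reference current and ideality factor below are placeholders, not values from any foundry model:

```python
import math

# Simplified textbook sub-threshold drain-current model:
#   I_D ~ I0 * exp((Vgs - Vt) / (n * kT/q))
# I0 and n are illustrative placeholders, not foundry data.

KT_Q = 0.0259        # thermal voltage at room temperature (~26 mV)
N = 1.5              # assumed sub-threshold ideality factor
I0 = 1.0e-7          # arbitrary reference current (A)

def sub_vt_current(vgs: float, vt: float) -> float:
    return I0 * math.exp((vgs - vt) / (N * KT_Q))

nominal = sub_vt_current(vgs=0.30, vt=0.45)
shifted = sub_vt_current(vgs=0.30, vt=0.42)   # threshold 30 mV lower due to variation

print(shifted / nominal)   # ~2.2x more current from a 30 mV threshold shift
```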

One of the biggest challenges to embarking on sub-threshold design is the fact that standard transistor models provided by foundries have, until recently, focused almost exclusively on super-threshold operation. They acknowledge leakage below threshold, but not with the precision required to design with them. So the few companies that have taken up the sub-threshold challenge have had to characterize these devices to create their own sub-threshold models and standard cells. “The in-house standard-cell design team will come up with a new library that supports sub-threshold operation,” said Shukla.

Even now, designers may find that foundries will proceed cautiously. “There’s a risk that some foundries won’t support it unless it’s a customer whom they recognize and who can handle and manage what potentially could be really bad yield,” said Josefina Hobbs, product marketing manager for logic libraries at Synopsys.

Part of the sub-threshold characterization must include how behavior changes as different parameters are varied. Process variation has been a huge consideration for some time, but the nodes where sub-threshold has been taken up, such as 40nm in the case of Ambiq (which now uses 22nm, as well), lie at the edge of where variation has been a concern. Even at these nodes, characterizing the “off” state of a super-threshold design, meaning below threshold with leakage current, requires understanding how that leakage current might vary.

If the entire operation of the design is below threshold, it’s not just a matter of understanding the range of possible leakages below threshold. Now one needs to know the range of leakages both at the 1 and 0 states that can occur with process variation. “Super-threshold conditions are more classical in terms of optimization,” said Cadence’s Bautz. “Their variation is less relative to the nominal delay.”

Then there’s environmental variation — voltage and temperature, in particular. Sub-threshold designs are highly sensitive to both. “Most voltage variations tend to scale down with voltage,” said Scott Hanson, CTO and founder of Ambiq, suggesting that this makes voltage variation easier to manage. “But we are hypersensitive to temperature variations and to process variation.”

Chips often play a part in their own operating temperature due to self-heating. Standard higher-powered chips must be able to work when turned on at -40°C, but once they start operating, they generate their own heat, raising the internal junction temperature above the cold ambient temperature.

Sub-threshold designs, by contrast, use so little power that self-heating can’t be counted on to raise the temperature. So these designs will need to be able to operate at cold extremes for far longer periods of time than super-threshold designs would. “We’re not self-heating to the point where we’re running at 85°C all the time,” said Hanson. “But it does mean that we can run at -40° for an extended period.”

Mitigating variation
Dealing with variation in a manner that hasn’t explicitly been addressed through standard EDA tools and flows is difficult, but essential. “Without management, you end up with a chip that might run at 1 MHz in one corner and run at 1 kHz in a different corner,” said Hanson.

The benefit of figuring it out is that it becomes part of your competitive advantage. That means that companies specializing in sub-threshold design are unlikely to talk openly about their specific methodologies. Still, some aspects can be broadly addressed.

With digital logic, the real work lies in the creation of standard cells. “Challenge number one is to verify and improve the accuracy of nominal models for sub-threshold operation,” said André Lange, group manager quality and reliability at Fraunhofer IIS’s Engineering of Adaptive Systems Division. “Challenge number two is adding variability such that the models predict process variations well.”

Characterization must be carefully managed to ensure that those cells work across all of the possible variations. “The tool to characterize the variation is completely different,” said Synopsys’ Hobbs. “The timing and power analysis are completely different.” Library sign-off is more exacting for sub-threshold versions.

One of the critical developments making this possible is the augmentation of the LVF (Liberty variation format) file to address what are called “moments” in the statistics of variation. “We’ve had this evolution of the modeling of variation,” said Hobbs. “And the latest one is not just LVF, but moment-based LVF.”

Because the distributions tend not to be Gaussian, moment-based LVF adds three parameters to the format to describe them: mean shift, variance, and skewness (the first, second, and third statistical moments, respectively). Others could be added as well. The fourth moment would be kurtosis, which deals with the distribution's tails.

Fig. 2: A non-Gaussian distribution with the first three moments indicated. Source: Synopsys

These enhanced models let EDA tools do a better job of taking real variation distributions into account when predicting signal delays and power. Characterization, however, must populate that new data in order for the models to be effective.
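To see what those extra parameters capture, the sketch below computes the first three moments from a set of hypothetical Monte Carlo delay samples. The lognormal shape is only a stand-in for the skewed distributions real characterization produces:

```python
import numpy as np
from scipy import stats

# Hypothetical Monte Carlo delay samples for one timing arc, in picoseconds.
# A lognormal is used only to mimic a skewed, non-Gaussian distribution;
# real values come from library characterization runs.
rng = np.random.default_rng(seed=0)
delays_ps = rng.lognormal(mean=np.log(120.0), sigma=0.35, size=10_000)

nominal_ps = 120.0
mean_shift = delays_ps.mean() - nominal_ps   # first moment, relative to nominal delay
variance = delays_ps.var()                   # second moment
skewness = stats.skew(delays_ps)             # third moment (asymmetry of the tail)

print(f"mean shift {mean_shift:.1f} ps, variance {variance:.1f} ps^2, skew {skewness:.2f}")
```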

Clock and power domains
The existence of “domains” or “islands” on a chip can help with design closure. “In a typical chip, we might have 100 or 200 different clock domains, and you have to be careful about which ones are synchronous and which ones are asynchronous to one another,” said Hanson. The main driver for this is better granularity in reducing power. But it also provides both a benefit and a challenge.

The benefit is that, the smaller the extent of a clock domain, the easier it may be to achieve variation-aware timing closure within that domain, particularly if the domain is largely localized into an area on the chip. The challenge is that domain crossings will need to be proven out across variations.

Even more of a challenge are power domains, and this is where the mix of sub-, near-, and super-threshold circuits comes into play. “In some modes, the regulators run sub-threshold, but in most modes, the regulators are using fully saturated transistors that are in the super-threshold domain,” said Hanson.

Each domain, and the regulators that define the operating voltages, must operate and inter-operate in the presence of variation. A regulator should, almost by definition, deal with voltage variation, but process and temperature variation must still be addressed both in the regulator design itself and in the power domain that the regulator controls.

“We’re doing adaptive voltage scaling, much like a high-performance processor might do in the face of temperature variations, although our sensitivities are different and the way we manage it is different,” said Hanson. “You start thinking about all the combinations of power domains and clock domains, and it gets very complex very quickly. Each of those domains has different timing sensitivities. And when you pass clocks that communicate across those voltage boundaries, it can lead to really challenging closure. The 1.8-volt domain is implemented with thick-oxide devices. They’re relatively insensitive to process variations. At the same time, you’ve got low-voltage sub-threshold or near-threshold domains that are implemented using these tiny fast transistors, and they’re highly variable. And so how to get those two domains to communicate with one another in a resilient way can be quite challenging.”

Given a sub-threshold-friendly standard-cell library, the digital designer can specify those cells through RTL in combination with a UPF (unified power format) file to define any power islands. The conditions specified in creating those cells must be validated in the design. Those conditions often involve rise and fall times. “Slight variations in VT change the rise time and fall time,” said Synopsys’ Shukla.

Added Hobbs: “You’re going to have a certain set of assumptions when you model these library cells. And so the designer’s job is to make sure that the design adheres to all of those assumptions.” That aside, digital design proceeds as it would with super-threshold design.

Those rise and fall times also can affect the performance of the chip. That means a given task may take significantly longer, and that delay may vary widely. If the delay gets too long, the extra time it takes to perform a task can more than offset the lower power, increasing the net energy consumption. Synopsys ran a number of simulations to demonstrate this effect.

The impact of variation also can be reduced by scaling the transistor gate. Dimensional variation is relatively constant on a given layer, so using a wider gate makes the variation a smaller percentage of the size, reducing its effect.
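A commonly cited rule of thumb for this effect is Pelgrom's law, which says random threshold-voltage mismatch falls with the square root of gate area. The matching coefficient and device sizes below are assumptions, used purely for illustration:

```python
import math

# Pelgrom's law: sigma(delta Vt) ~ A_VT / sqrt(W * L)
# A_VT and the device sizes are illustrative assumptions, not foundry values.
A_VT_MV_UM = 3.5   # matching coefficient, mV*um

def vt_mismatch_mv(width_um: float, length_um: float) -> float:
    return A_VT_MV_UM / math.sqrt(width_um * length_um)

narrow = vt_mismatch_mv(width_um=0.2, length_um=0.04)   # minimum-width device: ~39 mV
wide = vt_mismatch_mv(width_um=0.8, length_um=0.04)     # 4x wider gate: ~20 mV

print(narrow, wide)   # quadrupling the width halves the random mismatch
```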

Using high-VT transistors can help by moving the “low” and “high” states farther apart. “If you can use high-VT devices, then you’re in better shape overall,” said Bautz. “If you have to meet a performance target simultaneously and you go to low VT or even ultra-low VT, that’s where you’re really seeing the maximum amount of variation. If you’re not operating at the super-threshold regime, I would certainly stay away from ultra-low-VT or low-VT cells.”

FD-SOI designs have the additional benefit of back bias, which adds another “knob” for countering the effects of variation.

Compensation circuits also may be used, particularly for analog circuits. Because analog design already deals with establishing operating current/voltage points, the threshold is just another point on that curve, so even with the greater variation, moving below it is more of an evolutionary step than it is for digital design.

FinFET nodes and beyond
Process variation becomes a bigger problem at advanced nodes, so tools for handling variation have improved even for super-threshold design at the most aggressive silicon nodes. It therefore would be natural to assume that doing sub-threshold design in the finFET realm would prove even more challenging.

That’s not necessarily correct, though, because of the steep sub-threshold slope of finFETs. That slope defines how much margin exists between a 1 and a 0. The steeper the slope, the greater the margin, making it easier to ensure that, even with variation, the 1 and 0 ranges don’t ever collapse together.
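That steepness is usually quoted as the sub-threshold swing, the gate-voltage change needed to move the current by a factor of 10. The ideality factors below are assumptions, used only to contrast a leakier planar device with a steeper finFET:

```python
import math

# Sub-threshold swing: SS = n * (kT/q) * ln(10), in volts per decade of current.
# The ideality factors n are illustrative assumptions, not measured values.
KT_Q = 0.0259   # thermal voltage at room temperature (V)

def swing_v_per_decade(n: float) -> float:
    return n * KT_Q * math.log(10)

def on_off_ratio(gate_swing_v: float, n: float) -> float:
    """Current ratio between the high and low logic levels for a given gate swing."""
    return 10 ** (gate_swing_v / swing_v_per_decade(n))

planar = on_off_ratio(gate_swing_v=0.3, n=1.5)   # ~89 mV/decade -> roughly 2,300x
finfet = on_off_ratio(gate_swing_v=0.3, n=1.1)   # ~65 mV/decade -> roughly 37,000x

print(planar, finfet)   # a steeper slope means wider separation between 1 and 0
```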

Analog design — needed at the very least for regulators — is said to be more difficult to do with finFETs because the gate width is quantized. You can have an integer number of fins to widen the gate, but nothing in between.

Ambiq noted, however, that its planar designs already use “fingers” on gates, with “wider” gates involving more fingers. “On the analog side, we’re using really big transistors,” said Hanson. “We’re already using multiple fingers there anyway.” Since this is already a quantized approach, fin quantization becomes less of a novelty.

Synopsys tries to use two or even three fins on its standard-cell transistors to average out some of the variation. “The problem with finFETs is that the smaller you get, the less you are able to keep them straight and clean,” said Hobbs. “We try to stick to at least two fins — and, in some cases, three fins — to mitigate the impact of variation.”
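A simple statistical picture suggests why multiple fins help. If each fin's drive-current variation were independent and identically distributed (an approximation, and the 10% figure below is an assumption), the relative sigma of the combined current would shrink roughly as one over the square root of the fin count:

```python
import math

# Assuming each fin contributes independent, identically distributed variation,
# the relative sigma of the summed drive current scales as 1/sqrt(N).
SIGMA_PER_FIN = 0.10   # assumed 10% relative sigma for a single fin

for n_fins in (1, 2, 3):
    combined = SIGMA_PER_FIN / math.sqrt(n_fins)
    print(f"{n_fins} fin(s): ~{combined * 100:.1f}% relative variation")
# 1 fin: ~10.0%, 2 fins: ~7.1%, 3 fins: ~5.8%
```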

Moving from finFETs to gate-all-around (GAA) transistors may improve matters. “Gate-all-around is less susceptible to variation than the traditional finFET,” said Hobbs. “It has more [channel contact] real estate to work with.”

Embedded memory
Yet another challenge for any design is on-chip memory. Memories, perhaps more than any other circuit, are extensively optimized and characterized. A bit cell for any type of memory undergoes exhaustive study to ensure that it operates properly throughout the lifetime of the chip. Such optimization includes ensuring that the bit cells (and all of the supporting circuits like the sense amps) can tolerate the process and environmental variation that the chip will see.

That development and verification work includes refining the algorithms for writing into the memory. It’s less of an issue for SRAM, but non-volatile memories have significant issues related to the level and timing of voltage pulses to ensure satisfactory programming while guarding against over-programming. And for memories with multi-bit cells, the needed programming precision makes this even more challenging.

So for a sub-threshold design, attempting to create memories that themselves operate in the sub-threshold regime would be an enormous undertaking. And the additional variation sensitivity below threshold would compound the problem.

Ambiq said it worked on creating sub-threshold memories, and found that in general there was little benefit conferred by the custom memories over standard ones. “Memory has extremely low switching activity,” said Hanson. “You might access one word, but most of the words are just sitting there idly leaking while you’re accessing that one word. And so it makes a whole lot of sense for these to run at somewhat higher voltages and with higher threshold voltages.

“We realized that, by re-architecting the way we do memories — even these big ones — we’re not paying a big penalty by using standard off-the-shelf bit cells,” he said. On its older devices, Ambiq uses flash memory for NVM, while on 22nm devices it uses MRAM.

SRAM is also of concern. “SRAM is indeed a big energy user on our chips, and we find that a hybrid approach gives us the best balance between performance and energy efficiency,” said Hanson. “Small memories tend to be entirely implemented in near-threshold, while larger memories use a mix of super-threshold and near-threshold.”

In general, then, sub-threshold designs can provide substantial power savings where speed is less critical. But the design process, especially in the presence of significant process and environmental variation, can be challenging. There are few companies doing this, and those that have succeeded guard their secrets closely.

If sub-threshold design moves more into the mainstream, we may see the kinds of standard flows that designers of super-threshold chips use. Until then, anyone wanting to take up the sub-threshold cause will need to wrestle both with building their own design infrastructure and with heightened sensitivity to variation.

“I do think that, long term, sub-threshold has a lot of potential,” said Hobbs. “But there are a lot of challenges that have to be overcome before it becomes anything close to mainstream.”


