FinFET Scaling Reaches Thermal Limit

Advancing to the next process nodes will not produce the same performance improvements as in the past.


In 1974, Robert H. Dennard, then a researcher at IBM, introduced the idea that MOSFETs would continue to work as voltage-controlled switches as features shrank, provided that doping levels, the chip’s geometry, and voltages were scaled along with those size reductions. This became known as Dennard’s Law even though, just like Moore’s Law, it was anything but a law.

There were two problems with his idea. The first is that it totally ignored leakage current. For the first couple of decades, this was not a serious problem because leakage continued to be small relative to switching power. Today, leakage can dominate total power unless significant power management is built into the device so logic not being used is powered down.

The second problem was caused by attempting to control the first. Reducing the voltage required a thinner gate dielectric, but controlling leakage required the opposite. Advances such as high-k dielectric materials helped, but nothing allowed Dennard scaling to continue.

One important aspect of Dennard scaling is that the power consumed by a transistor scaled down quadratically, matching the quadratic reduction in device area. That meant power density remained constant, and thus heating was not a major issue.
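
A back-of-the-envelope calculation illustrates the constant-power-density argument. This is only a sketch of the idealized recipe; the scaling factor is an assumed value, not a figure from the article:

```python
# Idealized Dennard scaling: shrink linear dimensions, voltage, and current by 1/k.
k = 1.4  # assumed linear scaling factor per node (roughly a 0.7x shrink)

area_scale    = 1 / k**2            # transistor area falls quadratically
power_scale   = 1 / k**2            # P ~ V * I, and both V and I fall by 1/k
density_scale = power_scale / area_scale

print(f"power per transistor: x{power_scale:.2f}")
print(f"area per transistor:  x{area_scale:.2f}")
print(f"power density:        x{density_scale:.2f}")  # 1.00 -> constant heat flux
```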

All of this started to become a problem around 2005, when threshold-voltage scaling began to flatten and power density began to increase. Now the industry has to worry about both leakage, to control power, and activity, to control heat. The two are tightly coupled in a destructive way: an increase in heat causes an increase in leakage, which creates more heat.
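
A toy electro-thermal loop illustrates that feedback. All coefficients below are assumed, illustrative values, not measured silicon data:

```python
# Toy feedback: leakage rises with temperature, temperature rises with total power.
P_dynamic   = 5.0     # W, activity-related power (assumed)
P_leak_ref  = 1.0     # W, leakage at the reference temperature (assumed)
T_ambient   = 45.0    # deg C
R_theta     = 2.0     # deg C per W, junction-to-ambient thermal resistance (assumed)
LEAK_DOUBLE = 25.0    # deg C per doubling of leakage (rough rule of thumb)

T = T_ambient
for _ in range(50):                                  # iterate to a fixed point (or runaway)
    P_leak = P_leak_ref * 2 ** ((T - T_ambient) / LEAK_DOUBLE)
    T_new  = T_ambient + R_theta * (P_dynamic + P_leak)
    if abs(T_new - T) < 0.01:
        break
    T = T_new

print(f"equilibrium junction temperature ~ {T:.1f} C, leakage ~ {P_leak:.2f} W")
```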

Fig. 1. Source: “Scaling with Design Constraints: Predicting the Future of Big Chips”

Until now, this problem has been addressed by adding multiple cores so that tasks can be moved around to the cooler areas of a chip, but we may be nearing the point at which this will not work.

Some in the industry are beginning to wake up to the problem. A paper by IBM and the University of Virginia states, “While increasing parallelism rather than frequency is a more power-friendly approach to performance growth, the stricter requirement from chip power still forces processor vendors to keep the chip power at a constant thermal design power (TDP) in order to hold down cooling cost.”

IBM says that so far the problem has been dealt with by increasing the size of the die. But the company questions whether this is “a sustainable approach to realize desired performance growth under power constraints, and when alternative approaches may be necessary.”

The conclusion: “Neither reining in chip area nor the multi-core trend can rein in chip power by themselves when seeking a 2X chip performance growth per technology node. So if ever there was a time for new architectural innovations to increase chip efficiency (performance at constant power), it is now.”

The introduction of the finFET significantly reduced leakage current, and thus total power consumption, but does the finFET get the industry back on track with Dennard’s Law? If power density continues to rise, it means that as devices get smaller, the total power consumed per unit area will increase. At some point, no more activity will be possible.

Fig. 2. 22nm finFET advantage over 32nm planar. (Source: Intel, Bohr & Mistry)

So far, Intel is the only company with a track record of two generations of finFET devices. With its first generation the company did show an increase in power density over planar, even though power per transistor went down.

In late 2014, Intel announced its 14nm second-generation process. It attained a 0.7X density shrink and also claimed an equal reduction in power. Much of the density and power improvement came from a redesigned fin structure and a reduction in the number of fins per transistor. Intel also believes the scaling will continue down to 10nm, although at this point there is no indication of what power reduction per transistor can be expected.

The road ahead
What can be expected from the finFET roadmap? “The newer technology gives me less leakage, faster delay and smaller density,” says Vassilios Gerousis, distinguished engineer at Cadence. “That is the roadmap. The real question is how much of that ideal are you going to get? There is definitely leakage improvement, and people are trying to add new materials or add special processes such as stress to improve performance, as well as using different materials to help with leakage and performance.”

But technology continues to work against those improvements. "The thing that limits the supply voltage is the threshold voltage, and as we turn that down, leakage goes up and we end up in a bad place," says Drew Wingard, chief technology officer at Sonics. "What changes with finFET is that they have better control over the channel and so they are able to achieve a lower threshold voltage with less leakage. There was almost an order of magnitude reduction in leakage for a nominal gate-sized transistor at a given threshold voltage. As we get smaller gates, the leakage starts to creep back up again, so that was a one-time gain. We have moved onto a different set of curves, but the curves still have the same shape. As the rest of Moore’s Law attempts to push forward and we get toward 10nm and 7nm, we are right back in the thick of it."
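
Wingard’s trade-off can be sketched with the usual subthreshold-leakage rule of thumb: leakage falls roughly one decade for every "swing" millivolts of threshold voltage, and better channel control steepens that swing. The swing values and reference current below are assumptions for illustration, not foundry data:

```python
def off_current_nA(vth_mV, swing_mV_per_decade, i0_nA=100.0):
    """Subthreshold leakage falls roughly one decade for every 'swing' mV of Vth."""
    return i0_nA * 10 ** (-vth_mV / swing_mV_per_decade)

# Better channel control (finFET) means a steeper subthreshold swing, so the same
# leakage target can be met at a lower threshold voltage.
for label, swing in (("planar, ~100 mV/dec", 100.0), ("finFET, ~70 mV/dec", 70.0)):
    for vth in (450, 350, 250):
        print(f"{label:20s} Vth={vth} mV: Ioff ~ {off_current_nA(vth, swing):.3g} nA")
```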

There are additional factors to start worrying about, too. "When we go from 10 to 7nm there are additional thermal side effects," says Norman Chang, vice president and senior product strategist at Ansys. "The finFET structure is less efficient in terms of heat dissipation, and it is easier to accumulate heat on the fingers. This is worse than the planar architecture."

Chang also points to another new factor. “The wires also have a dual heating effect. Heat is induced in the wire and there is thermal coupling between the wires. If many wires are in close proximity with each other and they all carry a significant amount of current, they will impact each other. This thermal problem is getting serious.”
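
The dual effect Chang describes can be sketched as Joule heating in each wire plus a fraction of heat coupled in from its neighbors. The resistances, currents, coupling factor, and thermal resistance below are all assumed values chosen only to make the mechanism visible:

```python
# Toy model: each wire heats itself (P = I^2 * R) and picks up a fraction of the
# heat dissipated in neighboring wires. All numbers are illustrative assumptions.
wires = [
    {"name": "net_a", "i_mA": 1.2, "r_ohm": 80.0},
    {"name": "net_b", "i_mA": 0.9, "r_ohm": 80.0},
    {"name": "net_c", "i_mA": 1.5, "r_ohm": 80.0},
]
COUPLING = 0.3     # fraction of a neighbor's heat seen by this wire (assumed)
K_SELF   = 8000.0  # deg C per W for a thin, poorly cooled wire segment (assumed)

self_heat_W = [(w["i_mA"] * 1e-3) ** 2 * w["r_ohm"] for w in wires]
for idx, w in enumerate(wires):
    neighbor_W = sum(p for j, p in enumerate(self_heat_W) if j != idx)
    delta_t = K_SELF * (self_heat_W[idx] + COUPLING * neighbor_W)
    print(f"{w['name']}: self {self_heat_W[idx]*1e3:.2f} mW, temperature rise ~{delta_t:.1f} C")
```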

Wingard sees additional problems. “For a circuit of the same complexity implemented in a smaller geometry, the capacitance gets smaller because the distances between things got closer. But that advantage has largely gone away because so much of the capacitance terms are dominated by side-wall components instead of the area component. That means that we no longer have this scaling benefit from shrinking process. We used to get a squared relationship because both the length of the conductor and the width went down, and so the area went down by the square of the linear shrink. Now it is only the length that goes down.”
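
A simple parallel-plate model of one wire shows why the sidewall term breaks the old quadratic benefit. All dimensions and the permittivity are illustrative assumptions:

```python
EPS = 1.0  # arbitrary units

def wire_cap(length, width, thickness, spacing, ild_height):
    c_area = EPS * width * length / ild_height       # plate to the layer below: scales with W and L
    c_side = 2 * EPS * thickness * length / spacing  # sidewalls to both neighbors
    return c_area, c_side

# Old node vs. a 0.7x shrink: length, width, thickness, and spacing all scale by 0.7,
# but the sidewall term only shrinks linearly because the thickness and spacing reductions cancel.
old = wire_cap(length=100.0, width=1.0, thickness=2.0, spacing=1.0, ild_height=1.0)
new = wire_cap(length=70.0, width=0.7, thickness=1.4, spacing=0.7, ild_height=1.0)

for label, (ca, cs) in (("old node", old), ("new node", new)):
    print(f"{label}: area cap {ca:.0f}, sidewall cap {cs:.0f}, total {ca + cs:.0f}")
# The area term drops by 0.49x, but the dominant sidewall term only by 0.7x.
```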

Design for thermal
The industry has had to deal with design for test, design for manufacturing, design for power and now can add a new one to the list: Design for Thermal. “This may result in a new class of design rule,” says Chang. “Possibly something that says you cannot pack transistors with such high density in certain regions, or your frequency cannot be so high, and the transistors cannot be put into such close proximity with high power running.”
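
A hypothetical rule of that kind might look like a simple power-density check per floorplan region. The limit and the region data below are invented for illustration and are not an actual foundry rule:

```python
# Sketch of a "design for thermal" rule: cap the power density of any floorplan region.
MAX_W_PER_MM2 = 0.8  # assumed regional power-density limit

regions = {
    "cpu_cluster": {"power_W": 3.2, "area_mm2": 3.5},
    "gpu_slice":   {"power_W": 2.6, "area_mm2": 4.0},
    "modem":       {"power_W": 0.6, "area_mm2": 1.8},
}

for name, r in regions.items():
    density = r["power_W"] / r["area_mm2"]
    verdict = "OK" if density <= MAX_W_PER_MM2 else "VIOLATION: spread logic or lower frequency"
    print(f"{name}: {density:.2f} W/mm^2 -> {verdict}")
```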

But thermal cannot be measured directly. "First you run functional simulation, and from that you can calculate power. From power you can get to thermal and make the judgment about whether the temperature increase is tolerable," explains Chang.
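
In sketch form, the chain Chang describes runs from switching activity to dynamic power to a temperature estimate. All parameter values below are assumptions:

```python
def dynamic_power_W(activity, cap_F, vdd_V, freq_Hz):
    """Classic switching power: P = alpha * C * Vdd^2 * f."""
    return activity * cap_F * vdd_V ** 2 * freq_Hz

def junction_temp_C(power_W, t_ambient_C=45.0, r_theta_C_per_W=1.5):
    """Steady-state temperature from a single lumped thermal resistance."""
    return t_ambient_C + r_theta_C_per_W * power_W

# The activity factor and effective capacitance would come from functional simulation.
p = dynamic_power_W(activity=0.15, cap_F=20e-9, vdd_V=0.8, freq_Hz=2e9)
t = junction_temp_C(p)
print(f"dynamic power ~ {p:.2f} W, junction temperature ~ {t:.1f} C")
```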

The industry is still getting its arms around power analysis. "Companies are struggling with how to take the massive amounts of data being generated at RTL and pump that into a power estimation tool so that it can do something meaningful with it," says Krishna Balachandran, product management director for low power solutions at Cadence. "Having physical effects considered, such as the clocks, is important, even at an early stage. You need that to get a good estimate of power."

Balachandran says there needs to be a convergence in power estimation and thermal. “You can pack more cores into a chip but you will not be able to use all of them because you have a thermal issue. That has to be understood up front and planned for, so that the optimal number of cores can be put on a chip rather than just adding cores in the hopes of getting the necessary throughput or performance, which may not be attainable if you then have to throttle them down. The power and thermal flow will become more connected and more mainstream and more important with the deeper nodes.”
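
A back-of-the-envelope version of that planning problem, with assumed budgets rather than real product numbers:

```python
# How many cores can actually run under a fixed TDP budget? Illustrative numbers only.
TDP_W            = 15.0   # package thermal design power (assumed)
UNCORE_W         = 3.0    # memory controller, interconnect, I/O (assumed)
POWER_PER_CORE_W = 2.5    # one core at nominal voltage and frequency (assumed)
CORES_ON_DIE     = 8

active = min(CORES_ON_DIE, int((TDP_W - UNCORE_W) // POWER_PER_CORE_W))
print(f"{active} of {CORES_ON_DIE} cores can run at full speed; the rest must idle or throttle.")
```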

And while detailed analysis for thermal is required, others are looking to make architectural changes, as well. “Every time it gets a little bit harder, and it drives you to do the analysis earlier in the design because the degree to which you tolerate a surprise or the size of the surprise grows,” says Wingard. “It becomes important to be able to model the implications of things and it drives you towards needing dynamic control of the power and power management is an integral part of the architecture phase of development.”

Wingard contends that having software control the power may not be the best approach. “Modern operating systems tend to do a good job at understanding the throughput requirements of the central processor subsystem but are not as good for a radio decoder or an I/O sub-system. The OS leaves it to the device drivers to figure out what to do from a power management perspective. Those drivers often do not have the necessary contextual information to make good choices. We think the intelligence has to be moved down into the hardware where we have an early indication of when something has gone idle, and if we make the power transitions in hardware, we can do that so fast that we don’t even have to tell the software that something has been powered down—as long as we can recover in time and hit the responsiveness goals that are needed by the application.”

Long-term impact
But where there is heat, there may be fire. In this case it is called stress. "Thermally induced stress is a new problem that previously was analyzed only for devices using a TCAD tool," says Chang. "Now the problem is coupled with design, because the temperature is power-dependent and differs depending on which functional mode you are operating in."

Chang explains that when wires get thinner and narrower there is an increase in current density and that means a lot more stress compared to the previous generation of process. “Power, thermal and EM are all coupled and they all affect stress and reliability,” says Chang.
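
The current-density trend can be illustrated with J = I / (width × thickness): when the cross-section shrinks faster than the current, J climbs. The dimensions and currents below are assumptions, not process data:

```python
def current_density_A_per_cm2(i_mA, width_nm, thickness_nm):
    """Current density J = I / (width * thickness)."""
    area_cm2 = (width_nm * 1e-7) * (thickness_nm * 1e-7)   # nm -> cm
    return (i_mA * 1e-3) / area_cm2

j_old = current_density_A_per_cm2(i_mA=1.0, width_nm=64, thickness_nm=120)
j_new = current_density_A_per_cm2(i_mA=0.8, width_nm=45, thickness_nm=85)  # wire shrinks faster than current
print(f"old node: {j_old:.2e} A/cm^2, new node: {j_new:.2e} A/cm^2 ({j_new / j_old:.1f}x)")
```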

Market implications
Thermal adds additional difficulty and cost with each new node, and there is likely to be a reduction in the throughput and performance increases that can be attained. “People building high-performance computing elements will clearly go to the new nodes,” says Wingard. “They will get increased performance and density because they can get stuff closer together with lower latency, which has a second-order effect on performance.”

But what about other industries? “The cost argument only works if you get an area savings,” continues Wingard. “What if the function that you are performing is pad-limited? If the pad ring defines your area, a shrink may not save you anything.”
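
A quick sketch of the pad-limited case Wingard describes. The pad pitch, pad count, and core areas are invented for illustration:

```python
# If the pad ring sets the die size, shrinking the core logic saves little or nothing.
PAD_PITCH_MM = 0.08
N_PADS       = 400
ring_side_mm  = (N_PADS / 4) * PAD_PITCH_MM      # pads spread along the four die edges
pad_floor_mm2 = ring_side_mm ** 2                # smallest die that still fits the pad ring

core_area_old = 80.0                             # mm^2 of logic at the old node (assumed)
core_area_new = core_area_old * 0.5              # logic area after the shrink (assumed)

die_old = max(core_area_old, pad_floor_mm2)
die_new = max(core_area_new, pad_floor_mm2)
print(f"die area {die_old:.0f} mm^2 -> {die_new:.0f} mm^2 (pad-ring floor {pad_floor_mm2:.0f} mm^2)")
```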

In fact it is likely that the decision may be market-driven. “There are some companies that are daring enough to design at close to the threshold or sub-threshold voltages,” says Balachandran. “I expect to see more of this happening in the lower process nodes, in spite of the challenges with variation. Sub-threshold can get you 50X or 100X savings in power, but you give up a lot of speed. This won’t work for all types of applications, especially high-performance applications, but there is a class of applications where it does make sense, especially in IoT. Here, there is a lot of analog interaction and the processing speed does not have to be very high and you don’t want to change the battery for 10 or 20 years. Under these requirements, sub-threshold technology will play a crucial role.”
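
A rough comparison shows where the savings come from and what is given up. The voltages, frequencies, and capacitance are assumed values, chosen only so the result lands in the range Balachandran describes:

```python
# P = C * Vdd^2 * f: near-threshold operation cuts both the energy per cycle and the clock rate.
c_eff_F = 1e-9  # effective switched capacitance per cycle (assumed)

cases = {
    "nominal (0.9 V, 500 MHz)":        (0.9, 500e6),
    "near-threshold (0.35 V, 30 MHz)": (0.35, 30e6),
}
for label, (vdd, freq) in cases.items():
    power_mW = c_eff_F * vdd ** 2 * freq * 1e3
    print(f"{label}: ~{power_mW:.1f} mW")
# Roughly two orders of magnitude less power, at a small fraction of the clock speed.
```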

Another important market is automotive. “For automotive, if they want to run on finFET, the process vendors will have to tune their process to make it more reliable so that it can survive 10 to 20 years,” points out Chang. “They are working on it and are eager to be qualified for the automotive sector. This is one of the fastest growing areas for chip demand. This will almost certainly affect density.”

Wingard points to another design change that may be necessary. “If I am in a cost-constrained world, maybe I should be looking at putting voltage regulators on-chip so that I can generate enough different supplies to keep control over it rather than having an external power management chip that gives me two or three or four supply voltages.”

Conclusions
It is too early to tell what will happen to power density with finFETs, but the initial data suggests it will rise and begin to limit the total amount of activity that can happen on a chip. The only way this will change is if the thermal time constant for the chip itself can be changed, such as by adding materials that get the heat out faster. Thermal is becoming the new limiting factor, and thermal analysis is still in its infancy. That means early chips may face unexpected challenges, and some may not be able to operate at their specified rates.

In short, we are no longer on the bleeding edge of design. We have moved to the burning edge.

Related Stories
Thermal Damage To Chips Widens
Heat issues resurface at advanced nodes, raising questions about how well semiconductors will perform over time for a variety of applications.
Power-Centric Chip Architectures
New approaches can lower power, but many are harder to design.
Keeping The Whole Package Cool
Thermal issues become more complex in advanced packaging.
Power Management Heats Up
Thermal effects are now a critical part of design, but how to deal with them isn’t always obvious or straightforward.



2 comments

Daniel Payne says:

The comment by Wingard seems odd, “the capacitance gets smaller because the distances between things got closer”. I thought that closer distances between conductors increased the value of capacitances.

Brian Bailey says:

He means that because transistors are closer together, the wires between them are shorter and thus less capacitance. You are also right that the capacitance and coupling between the wires will get worse and that may reduce the benefit slightly, plus add additional thermal coupling issues.
