Experts At The Table: FinFET Questions And Issues

Last of three parts: Parasitics; power density; dynamic power variability; throttling back performance; improving software efficiency and other architectural approaches.

By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss the current state and future promise of finFETs, and the myriad challenges, with Ruggero Castagnetti, an LSI fellow; Barry Pangrle, senior power methodology engineer at Nvidia; Steve Carlson, group director of marketing at Cadence; and Mary Ann White, director of product marketing at Synopsys. What follows are excerpts of that conversation.

LPHP: As more fins are added with each new process node, does it become harder to design and manufacture? Are there more physical effects to contend with?
Carlson: That’s where all those layout-dependent effects and 3D extraction come in. You need to pick the optimal density so the capacitances and parasitics don’t kill the design. Manufacturability isn’t as big an issue as the parasitics. Look at how the gates stack up, with the vias stacking up like a staircase—there are fractured vias that get progressively wider—and then you’ve got metal corners sticking up into the fin corners, with sharp edges. The capacitances grow pretty quickly there. Dynamic power is directly proportional to capacitance, and now you have higher performance so you want to clock it faster. You can mitigate some of that with the supply voltage, but when the gate capacitance goes up 60% relative to the planar approach, it’s not clear how things will balance out in terms of overall power density.
Castagnetti: We’ve been dealing with the variability of leakage. Now you’ve got dynamic power variability.
Pangrle: If you drop Vdd 20% but your capacitance increases 60%, you’ve wiped out all your improvements.
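To put rough numbers on that exchange: dynamic power scales as alpha * C * V^2 * f, so a 20% supply drop combined with a 60% capacitance increase nets out to a slight increase rather than a saving. A minimal back-of-the-envelope sketch, using only the scaling factors quoted above (not measured process data):

```python
# Back-of-the-envelope dynamic power comparison: P_dyn ~ alpha * C * V^2 * f.
# The 0.8x Vdd and 1.6x gate-capacitance factors are the illustrative numbers
# quoted in the discussion, not characterized process data.

def relative_dynamic_power(c_scale, v_scale, f_scale=1.0, activity_scale=1.0):
    """Dynamic power of the new node relative to the old one."""
    return activity_scale * c_scale * v_scale**2 * f_scale

ratio = relative_dynamic_power(c_scale=1.6, v_scale=0.8)
print(f"relative dynamic power: {ratio:.3f}x")  # ~1.024x: the Vdd gain is wiped out
```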

LPHP: Taking cost and time to market out of the equation for a moment, is there a point where we no longer see gains in performance or a reduction in power at new process nodes?
Carlson: We’ve seen performance gains tailing off in processors for the past eight years or so. That’s led to a renaissance in architectural design, with things like big.LITTLE, multicore and more specialized hardware accelerators. The industry has resigned itself to the fact that no one can clock as fast as they’d like because of the power issue. You have to throttle back so you don’t melt everything.
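The “throttle back” here typically shows up as a control loop: when the die temperature approaches its limit, the operating point steps down to a slower, lower-power setting, and it steps back up once there is thermal headroom. A minimal sketch with a toy first-order thermal model; the DVFS table and thermal constants are invented for illustration, not taken from any product:

```python
# Toy thermal-throttling loop: a first-order thermal model plus a governor that
# steps the operating point down near the limit and back up when there is
# headroom. All numbers are illustrative, not product data.

OPERATING_POINTS = [            # (frequency GHz, relative power), hypothetical DVFS table
    (1.0, 0.4), (1.5, 0.7), (2.0, 1.0), (2.5, 1.5),
]
T_AMBIENT, T_LIMIT = 35.0, 95.0   # degrees C
R_THERMAL, TAU = 40.0, 0.2        # degrees C per unit power; smoothing per step

def simulate(steps=200):
    temp, op = T_AMBIENT, len(OPERATING_POINTS) - 1   # start at the fastest point
    for step in range(steps):
        freq, power = OPERATING_POINTS[op]
        t_steady = T_AMBIENT + R_THERMAL * power      # where this point would settle
        temp += TAU * (t_steady - temp)               # first-order approach to steady state
        if temp > T_LIMIT - 5 and op > 0:
            op -= 1                                   # throttle before hitting the limit
        elif temp < T_LIMIT - 20 and op < len(OPERATING_POINTS) - 1:
            op += 1                                   # headroom available, speed back up
        if step % 50 == 0:
            print(f"step {step:3d}: {temp:5.1f} C at {freq:.1f} GHz")

simulate()
```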

LPHP: What happens with quantum effects, which are supposed to begin at 10nm?
Carlson: IBM was talking about carbon nanotubes. That’s what they’re banking on.
White: They’ve been talking about that for five years.
Pangrle: And they’ve been talking about finFETs for 14 years.
Castagnetti: With regard to quantum mechanics, it’s no worse or better with finFETs. We’re still dealing with a microamp per micrometer. Part of the dilemma with quantum mechanics has to do with how many electrons you want to control. We’ve heard about that with single-electron memories. It never went anywhere.

LPHP: How far are we from the point where moving to the next node isn’t necessarily the best approach? Maybe it’s software.
Carlson: People are doing that already. When you look at system-level design within EDA companies, the Internet of Things is a good proxy. You need a sensor, a microcontroller, some memory and a radio. How small, how low power can you make that thing? Cisco talks about the Internet of Everything, from light bulbs to refrigerators to cameras, where everything has a way to communicate. You see an incredible need for power efficiency across a wide spectrum of new product types under design. There’s a lot of growing interest in analysis. But the problem is that in order to get reasonable power calculations, you have to presume an implementation path. You don’t get the pathfinding independence you’d like to see at the outset, where you determine whether it’s a hardware-versus-software or digital-versus-analog tradeoff. If you rewrite your embedded software and recompile onto this model, what does the power look like? Those are the kinds of things people are asking for now because they’ve gone past a MATLAB level, where the accuracy isn’t good enough, so you need to go to prototypes. There is opportunity for EDA vendors there. The whole question of how to write energy-efficient software, particularly at the device level, is getting a lot of attention, and it will only increase. It’s an architectural issue, where you put more of the architecture into the software stack.
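The point about presuming an implementation path can be made concrete with a toy model: the same workload is costed against two candidate implementations, and an energy number only exists once those implementation parameters are assumed. Everything below, from the task profile to the per-operation energies and the accelerator option, is a hypothetical placeholder meant only to show the shape of the calculation:

```python
# Toy pathfinding-style energy estimate: one workload costed against two assumed
# implementations. All per-operation energies and workload counts are
# hypothetical placeholders; the answer only exists once they are assumed.

WORKLOAD = {"ops": 5e9, "mem_accesses": 2e8, "radio_seconds": 1.5}

IMPLEMENTATIONS = {
    "sw_on_cpu": {"nj_per_op": 0.50, "nj_per_access": 10.0, "radio_mw": 300.0},
    "hw_accel":  {"nj_per_op": 0.05, "nj_per_access": 10.0, "radio_mw": 300.0},
}

def energy_mj(workload, impl):
    """Total energy in millijoules for a workload on an assumed implementation."""
    compute = workload["ops"] * impl["nj_per_op"] * 1e-6             # nJ -> mJ
    memory  = workload["mem_accesses"] * impl["nj_per_access"] * 1e-6
    radio   = workload["radio_seconds"] * impl["radio_mw"]           # mW * s = mJ
    return compute + memory + radio

for name, impl in IMPLEMENTATIONS.items():
    print(f"{name:10s}: {energy_mj(WORKLOAD, impl):8.1f} mJ")
```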
White: You can do some things with software at the system level. But if you look back at what you can do at the device level there are a lot of advanced power techniques which, surprisingly, people still don’t use. We’re talking to consumer electronics manufacturers today that are just now adopting power gating. We’re seeing a lot more adoption of advanced techniques. So while there are things you can do at the system level and the software level, there are things that can be improved everywhere. It’s just a matter of how much people are willing to change their design methodologies and styles to get there. It’s no longer just one button for the best optimization of power and performance.
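Power gating, one of the advanced techniques White mentions, comes with its own bookkeeping: shutting a domain off saves leakage only if it stays off long enough to pay back the energy spent on state save/restore and the wake-up itself. A minimal break-even sketch with illustrative numbers, not drawn from any real library characterization:

```python
# Power-gating break-even: gating a domain only saves energy if the idle
# interval is long enough to amortize the entry/exit overhead.
# All numbers are illustrative, not characterized library data.

LEAKAGE_MW = 2.0     # leakage burned if the domain is left powered while idle
OVERHEAD_UJ = 50.0   # energy to save state, power down, power up and restore

def break_even_ms():
    """Minimum idle time (ms) for gating to win: overhead energy / leakage power."""
    return (OVERHEAD_UJ * 1e-6) / (LEAKAGE_MW * 1e-3) * 1e3    # J / W -> s -> ms

def net_saving_uj(idle_ms):
    """Net energy saved (uJ) by gating over an idle interval; negative means a loss."""
    leakage_uj = (LEAKAGE_MW * 1e-3) * (idle_ms * 1e-3) * 1e6  # W * s -> uJ
    return leakage_uj - OVERHEAD_UJ

print(f"break-even idle time: {break_even_ms():.0f} ms")
for idle in (10, 25, 100):
    print(f"idle {idle:3d} ms -> net saving {net_saving_uj(idle):7.1f} uJ")
```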
Carlson: But even if you have all the power features in your device, there is still the question of whether your system software is going to take advantage of them in an intelligent way. When you’re watching a video and talking on a phone, will it consume less energy because you’ve figured out what to turn off, when, and for how long?
Pangrle: A big part of that is the market the hardware is intended for. We take it as a given that what’s in the hardware has already been decided, and then someone is going to write software on top of it. We’re seeing a lot more knowledge and awareness among people writing things for one device. But there are millions of apps being developed by people who have no way of even knowing how their code impacts power. The system developers need to provide a set of tools that will give the software engineers the visibility to even measure what they’re doing. I assume we’ll see more of that in the future.
Castagnetti: I agree that’s what’s missing. If the software engineer can see that writing the code this way gets this result, then they may consider it.
Pangrle: All it takes is one app on your phone that wants to use the radio all the time and defeats any attempt to turn it off, and your battery is dead in 30 minutes.
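That radio example is easy to quantify with a duty-cycle model: battery life is roughly capacity divided by average power, and average power is dominated by how often the radio is allowed to drop into its low-power state. The capacity and power numbers below are illustrative placeholders rather than measurements of any device; the point is how strongly runtime depends on the duty cycle an app imposes:

```python
# Toy battery-life estimate for an app that determines how often the radio idles.
# Capacity and power figures are illustrative placeholders, not measurements.

BATTERY_WH = 10.0         # rough phone-class battery capacity
BASE_MW = 150.0           # everything except the radio
RADIO_ACTIVE_MW = 800.0   # radio transmitting/receiving
RADIO_IDLE_MW = 10.0      # radio parked in its low-power state

def battery_hours(radio_duty_cycle):
    """Estimated runtime (hours) for a given fraction of time the radio is active."""
    radio_mw = (radio_duty_cycle * RADIO_ACTIVE_MW
                + (1.0 - radio_duty_cycle) * RADIO_IDLE_MW)
    avg_w = (BASE_MW + radio_mw) / 1000.0
    return BATTERY_WH / avg_w

for duty in (0.02, 0.25, 1.0):
    print(f"radio active {duty:4.0%} of the time -> ~{battery_hours(duty):5.1f} h")
```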

LPHP: We see people moving to the most advanced nodes, but is the momentum of everyone moving to the next node—whatever that node may be for them—proceeding at the same rate?
Carlson: The high-volume guys are still charging ahead at the same rate.
White: I agree with that.
Pangrle: But no one is expecting to see products out at those nodes for three more years. There has to be some real compression from first tapeout to product to continue at this rate. You can argue that 20nm is the same BEOL, but it’s a different device now.
White: These are test chips, too.

LPHP: We have double patterning at 20nm, and the possibility of triple or quadruple patterning at 14nm or 10nm. Does that slow things down even further?
White: There are computational factors, decomposition and alignment issues—every mask layer brings its own set of complexities.
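That decomposition step is where much of the computation goes: features closer together than the single-exposure pitch have to land on different masks, which for double patterning amounts to two-coloring a conflict graph, and an odd cycle of conflicts means there is no legal split without a layout change or a third mask. A minimal sketch of that check; the conflict graph here is a made-up example, not real layout data:

```python
# Double-patterning decomposition as graph two-coloring: features that are too
# close for one exposure become edges in a conflict graph, and a legal two-mask
# assignment exists only if that graph is bipartite. Toy conflict graph below.

from collections import deque

def assign_masks(features, conflicts):
    """Return a {feature: 0 or 1} mask assignment, or None if an odd conflict
    cycle makes a two-mask decomposition impossible."""
    color = {}
    for start in features:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in conflicts.get(node, ()):
                if neighbor not in color:
                    color[neighbor] = 1 - color[node]
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return None   # odd cycle: needs a layout fix or a third mask
    return color

features = ["A", "B", "C", "D"]
conflicts = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(assign_masks(features, conflicts))   # e.g. {'A': 0, 'B': 1, 'C': 0, 'D': 1}
```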
Carlson: And then there’s cost. The value proposition of moving from one node to the next is not as attractive, so a few more people will fall off the trail. For some applications, it may not make sense. The question is whether they will ever need to move there. Even as wafer prices drop over time, there is still the design cost—it’s more difficult, there are more design rules, and there is tooling and infrastructure knowledge to build up. The guy at 65nm isn’t going to jump to 10nm.
Pangrle: It’s taking longer to get the design kits and libraries out. That’s not to say companies can’t do their own libraries.
Castagnetti: As far as who’s going to jump and who isn’t, we’ve seen products at 28nm that had to make decisions. As with every previous node, being able to integrate more is the value proposition. If you have a product where there’s nothing more to integrate, then you’re done. There will definitely be things to consider. For some product lines, 16nm may not make sense. It may make sense to stay at 28nm or do something with 2.5D. But there is still plenty of stuff that will use more transistors.
Pangrle: There is so much engineering cost to get to the newer nodes that a design has to be high volume or high margin, and suddenly we’ve had an explosion of high-volume parts. The question is when that gets so big that, even if everyone in the world has two of them, it no longer makes sense.
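That high-volume-or-high-margin argument is essentially an amortization calculation: the non-recurring engineering cost of the newer node is spread across every unit shipped, so it only disappears into the margin at very large volumes. A rough sketch with placeholder figures; actual NRE and die costs vary enormously by node and design:

```python
# Toy NRE amortization: per-unit cost = per-die manufacturing cost + NRE / volume.
# The NRE and die-cost figures are placeholders, not actual foundry pricing.

def cost_per_unit(nre_dollars, die_cost, volume):
    """Effective per-unit cost once design and mask NRE are spread over the volume."""
    return die_cost + nre_dollars / volume

SCENARIOS = {                       # (NRE $, per-die $), hypothetical
    "mature 28nm": (20e6, 8.0),
    "new finFET":  (150e6, 6.0),    # cheaper per die if area scaling holds, far higher NRE
}

for volume in (1e6, 10e6, 100e6):
    row = ", ".join(f"{name}: ${cost_per_unit(nre, die, volume):7.2f}"
                    for name, (nre, die) in SCENARIOS.items())
    print(f"{volume:>12,.0f} units -> {row}")
```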


