Experts At The Table: FinFET Questions And Issues

Second of three parts: Development costs; double patterning; ROI; finFETs vs. stacked die; who’s calling the shots; process variability and gate variants; new design rules.

By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss the current state and future promise of finFETs, and the myriad challenges, with Ruggero Castagnetti, an LSI fellow; Barry Pangrle, senior power methodology engineer at Nvidia; Steve Carlson, group director of marketing at Cadence; and Mary Ann White, director of product marketing at Synopsys. What follows are excerpts of that conversation.

LPHP: How do the economics of developing finFETs compare with other approaches, such as stacked die?
Carlson: The economic question is still the big one for me because of the double patterning. There are more masks, the volume needed to get the ROI is higher, and there are fewer designs. It’s going to be the consumer mobile applications and a few of the enterprise applications driving it. By and large, four or five years from now most of the designs will be at 65nm and differentiating with mixed-signal designs. On the 3D stacked die side, the FPGA guys are still grumbling about whether it’s paying off, even though they’re committed. The big issue is on the ecosystem side and where the risks are. If you take an Intel processor and stack a bunch of memory on it, you’re going to have yield loss. How does that get absorbed? Who in the ecosystem will take that hit? ST wrote a paper about power densities in stacked die, and the thermal response time was about 10X different from what they predicted. The thermal issues are not fully addressed. In the ecosystem, the OSATs and big foundries will do everything to drop-ship this to you, and there are a lot of companies vying for position, but no one is jumping in to take responsibility, because the years of history aren’t there the way they are with package-on-package or other types of integration.
Castagnetti: It will depend a lot on the end product. There are already a lot of products that have gone through stacking, maybe with a 2.5D interposer like the FPGA guys are doing today. We haven’t reached a point where we say a 16nm finFET is a no-go and we have to go to a multi-chip stack or side-by-side solution. That will happen further down the road when we hit fundamental limits. Until then, the semiconductor industry works on the premise that the next node will give them the densities and whatever other advantages they need.
Pangrle: The economic side is largely being settled for us. If the foundries decide this is the direction they’re going, that’s probably the path the fabless companies are going to take. At the next node, if finFETs are there, that’s what everyone is going to be designing. That will be the most cost-effective choice.
White: The foundries are the ones that have done all the investment, which is billions of dollars. At least they’re re-using their 20nm lithography equipment. From the user perspective, the additional masks are a problem. But that’s not new. Every time they change nodes they have to pay more for masks. The challenge then becomes optimization. Can they reduce the number of metal layers they’re using? There are other ways of getting around this.
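Carlson’s mask-count point is, at bottom, an amortization argument: the extra double-patterning masks are a fixed cost, so only high-volume parts recover them. A minimal Python sketch makes the break-even math concrete; the mask-set prices below are assumptions for illustration, not figures quoted by the panel.

```python
# Back-of-the-envelope sketch of the mask-amortization argument above.
# All dollar figures are hypothetical; the panel does not quote prices.

def nre_per_die(mask_set_cost, unit_volume):
    """Amortized mask-set (NRE) cost per die at a given lifetime volume."""
    return mask_set_cost / unit_volume

single_patterned = 5e6   # assumed planar-node mask set, USD
double_patterned = 9e6   # assumed finFET-node set with extra DP masks, USD

for volume in (100_000, 1_000_000, 10_000_000):
    print(f"{volume:>10,} units: "
          f"${nre_per_die(single_patterned, volume):7.2f}/die single-patterned vs "
          f"${nre_per_die(double_patterned, volume):7.2f}/die double-patterned")
```

With these assumed prices, the $4 million of extra masks add about $0.40 per die at ten million units but $40 per die at a hundred thousand, which is the volume-and-ROI squeeze Carlson describes.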

LPHP: Is there more variability in manufacturing a 3D transistor than a 2D transistor?
Carlson: One of the issues around that is whether you’re trying to dope the channel. That affects the variability, and it’s really problematic just because of the geometries and trying to get a number of implants into them. There isn’t enough experience to answer that question, though. We’re still too early, in the test-chip phase.
Castagnetti: It’s very difficult to predict where we’ll end up, because we’re trading some things for others. For instance, we get rid of the doping in the channel and the random dopant fluctuations go away. But now we have smaller geometries, and because the litho is the same, that’s going to add more variability.
White: In a different direction.
Castagnetti: Yes. And now we have fin heights, and we don’t know how well those are controlled. What is that going to do to device variability if we have some devices at one height and others at another height across a wafer?
White: At least from what we’ve seen, the number of threshold dopant options is going down. That’s a good thing. At 28nm there are five different threshold dopants, and early indications are that it’s going at least down to three. Theoretically there shouldn’t be any. So at least there’s good news on that front. You can only go back to high, nominal and low Vt. But what we are also seeing is that the number of gate-length variants will increase, so the number of cell options will increase in a different direction. At 28nm there were five different gate variants. There will probably be a larger number at advanced nodes.
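The trade-off White describes, fewer threshold options against more gate-length variants, is ultimately combinatorics for the library team: every logical cell gets characterized once per Vt/length combination. A hypothetical sketch follows; the variant names and base cell count are invented to show the multiplication, not taken from any foundry library.

```python
# Sketch of how Vt options and gate-length variants multiply a cell library.
# The five-to-three Vt counts follow the figures above; the length names and
# the base cell count are invented for illustration.

from itertools import product

vt_28nm   = ["ulvt", "lvt", "svt", "hvt", "uhvt"]  # five threshold options
vt_finfet = ["lvt", "svt", "hvt"]                  # down to three
lengths   = ["c16", "c20", "c24", "c28", "c32"]    # hypothetical gate lengths

base_cells = 400  # hypothetical number of logical cells in the library

for label, vts in (("28nm", vt_28nm), ("finFET", vt_finfet)):
    flavors = len(list(product(vts, lengths)))
    print(f"{label}: {len(vts)} Vt x {len(lengths)} lengths = {flavors} "
          f"flavors, ~{base_cells * flavors:,} library cells")
```

Fewer Vt options shrink one axis of the matrix while extra gate lengths grow the other, which is why the cell options increase “in a different direction.”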

LPHP: There are new materials, RC delay with the interconnects and thinner wires, and other density-related issues. How do we deal with that going forward?
White: The foundries will have to come up with a new set of design rules saying what the acceptable density will be. It will be a different kind of design rule. Experience will show what the proper density is, but we don’t have that experience yet. Users should be given the rules, though; they shouldn’t have to come up with those rules themselves.
Carlson: People buy wafers, so the number of chips you can get on a wafer has a direct bearing on cost. This is one of the areas where you see how conservative or aggressive each foundry is in its design rules and design-for-manufacturability.
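Carlson’s wafer economics can be made concrete with the standard die-per-wafer approximation, which subtracts the partial dies lost around the wafer edge. The die sizes below are assumptions chosen for illustration.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic die-per-wafer approximation: gross dies from the wafer area,
    minus an edge-loss term for partial dies along the circumference."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Assumed die sizes, e.g. before and after a ~40% area shrink.
for area_mm2 in (100.0, 60.0):
    print(f"{area_mm2:5.1f} mm^2 die: {dies_per_wafer(300, area_mm2)} dies "
          f"on a 300mm wafer, {dies_per_wafer(450, area_mm2)} on 450mm")
```

The same function previews Pangrle’s 450mm remark later in the discussion: a 450mm wafer has (450/300)^2 = 2.25X the area of a 300mm wafer, so roughly 2.25X the dies.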

LPHP: Then are we really gaining by moving to the next node? If you’re losing density, what’s the advantage of moving forward?
Carlson: And analog components are actually getting larger; there’s a progression over time because of noise issues. If you need to be in a mixed-signal world, the value proposition becomes even less clear. That’s where 2.5D and 3D look attractive, because you can have an RF process and optical layers.
Castagnetti: Density won’t double, but even a 60% improvement helps with what’s needed today. The complexity and the amount of functionality that gets put on chips keeps increasing. You hit a lithography limit every now and then.
Pangrle: There are certain applications targeted toward high-performance computing. Those are often reticle-limited. But the FPGA companies will take as many gates as they can get.

LPHP: There has been talk about new architectural approaches to computing, such as less than 100% accuracy. Where are we today with that?
Carlson: The core processor guys won’t change, and there isn’t anything more than academic interest in those approaches.
Castagnetti: The tool infrastructure isn’t there, either. It’s the whole ecosystem that moves together.
Pangrle: We’ve seen people speculating for a long time that moving from one node to the next would slow down. It seems to be accelerating a little bit. There are fewer customers with higher volume, and that’s what is driving it. In some sense there has been a slowdown, but the volume at each new node is larger than ever before.
White: Our big consumer providers are moving to smaller nodes very fast. They’re moving to sub-20nm. The whole promise of the planar progression stopped when 2X density came with twice the leakage. FinFETs seem to be the promise, and that’s where most of the industry is focused.

LPHP: But the next node is a different ball game, because the investment by foundries will be $7 billion to $10 billion in new equipment. And the next one after that is much, much higher.
White: That’s correct.
Pangrle: But you’ll also have 450mm wafers coming into play then, too.

LPHP: But that also affects the design for yield equation if you have to buy the whole wafer.
White: You do have to wonder when small is small enough.
Pangrle: It’s never small enough if the economics are there.
Carlson: It goes back to what application you’re talking about. There are a lot of product areas where the design starts are centered around 65nm. For the Internet of Things, there is a whole bunch of innovation around 65nm with 130nm sensors. They’re creating a lot of innovative products that people will touch more than some of this advanced stuff.
Pangrle: There are technologies at the leading edge that can be applied to some of those older nodes and will help improve power/performance, as well.
White: This industry has a long history of embracing change. Moving from bipolar to BiCMOS was a big change, but most people don’t even remember that today.


