Last of three parts: FinFET unknowns; electromigration; parasitics; libraries; SPICE models; corners; thinner wires; characterization; less data because of rising costs; abstraction vs. flat verification.
By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss signoff issues with Rob Aitken, an ARM fellow; Sumbal Rafiq, director of engineering at Applied Micro; Ruben Molina, product marketing director for timing signoff at Cadence; Carey Robertson, director of product marketing for Calibre extraction at Mentor Graphics; and Robert Hoogenstryd, director of marketing for design analysis and signoff tools at Synopsys. What follows are excerpts of that conversation.
LPHP: FinFETs are so new that no one really understands all of the issues yet, such as all the parasitics. All we have so far are test chips. How does that impact signoff?
Aitken: It’s not just the parasitics. It’s also the device itself. It’s a much better conductor than previous devices, so it has a lot more current flowing through it. That means you have to be concerned about local IR drop, local electromigration—you have to really think about all of this up front. Otherwise, you’ll be surprised by your test chips and even more surprised by your production chips.
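As a rough sketch of the concern Aitken describes, consider a back-of-the-envelope check of local IR drop and current density on a single thin wire driven by a strong device. Every number in it (resistivity, wire geometry, drive current, EM limit) is an illustrative assumption, not foundry data.

```python
# Back-of-the-envelope local IR-drop and electromigration check.
# All values are illustrative placeholders, not foundry data.

def wire_resistance(rho, length, width, thickness):
    """R = rho * L / (W * t) for a rectangular wire cross-section."""
    return rho * length / (width * thickness)

def current_density(current, width, thickness):
    """J = I / A, compared against an electromigration limit."""
    return current / (width * thickness)

# Hypothetical narrow lower-level metal segment at an advanced node.
RHO = 4e-8                       # ohm-m, copper with size effects (assumed)
L, W, T = 10e-6, 32e-9, 60e-9    # 10 um run of a 32nm-wide wire (assumed)
I = 50e-6                        # 50 uA average drive current (assumed)
J_MAX = 2e10                     # A/m^2, placeholder EM limit

R = wire_resistance(RHO, L, W, T)
print(f"R = {R:.0f} ohm, IR drop = {I * R * 1e3:.1f} mV")

J = current_density(I, W, T)
print(f"J = {J:.2e} A/m^2 -> {'OK' if J < J_MAX else 'EM violation'}")
```

With these placeholder numbers the wire is already past the assumed EM limit, which is the kind of surprise Aitken warns will show up in test chips if it isn't budgeted up front.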
Hoogenstryd: It starts with good libraries. We’ve already seen some early cases at 20nm where customers are struggling with correlation to SPICE because some of the library elements being used are weak, and they’re being used in very bizarre situations. In SPICE that shows up as extreme waveform distortion. We’ve seen the slow corner with high Vt, where the waveform distortion is at 95%. That causes a timing impact on the cells it’s driving. Why is that cell so weak, and why is that cell in that situation? On top of that, they didn’t do a very good job of characterizing the library. Good library design followed by good characterization will be key to making sure the signoff tools model things accurately and in a way you feel comfortable with.
Aitken: You have to pick your corners very carefully.
Hoogenstryd: And you have to tune your characterization to the corners.
Aitken: You have to understand what the corner is, and that’s often a dialog.
Rafiq: Transitions also make a difference. They matter more than at past nodes because of thinning metal wires and electromigration. Those effects have to be picked up by the libraries.
Hoogenstryd: A customer told me that at 20nm the resistance of the wires has gone up exponentially compared with 28nm. So you had weak transistors driving high resistance, and with finFETs you have strong transistors driving high resistance. The big thing people are talking about is signal electromigration, and there’s also talk about frequency dependency based on that.
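The frequency-dependency point is conventionally framed with Black’s equation, which ties electromigration lifetime to current density; for signal wires a DC-equivalent effective density is used, and the recovery model behind that equivalence is foundry-specific. A standard statement of the relationship:

```latex
% Black's equation for the median time to failure of a wire:
%   A   - constant capturing process, geometry, and materials
%   J   - (effective) current density; the exponent n is typically 1-2
%   E_a - activation energy; k - Boltzmann constant; T - temperature
\mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{k T}\right)
```

As wires thin, the same current produces a higher J, and because J enters with a negative exponent and temperature sits in the exponential, lifetime degrades quickly.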
Aitken: The statistics of resistance are really important. And they are sufficiently nasty that no one wants to know about them, so they have to be abstracted away with some sensible methodology. Doing that creates situations where worst-case resistance is astronomical but average-case resistance is fine. And then how do you design to that?
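A toy Monte Carlo run makes the point concrete. The lognormal distribution and its parameters below are pure assumptions, not a statement about any real via or foundry model; they simply show how a heavy-tailed resistance can have a benign average and an extreme signoff tail.

```python
# Toy illustration of "average fine, worst case astronomical."
# Distribution and parameters are assumptions for illustration only.
import math
import random
import statistics

random.seed(0)
# Lognormal resistance with a ~10-ohm median and a heavy tail (assumed).
samples = [random.lognormvariate(math.log(10.0), 1.0) for _ in range(100_000)]

mean_r = statistics.mean(samples)
samples.sort()
worst_r = samples[int(0.999 * len(samples))]   # 99.9th-percentile "worst case"

print(f"average R    : {mean_r:6.1f} ohm")
print(f"99.9% tail R : {worst_r:6.1f} ohm ({worst_r / mean_r:.0f}x the mean)")
```

Designing to the tail number wastes enormous margin; designing to the mean misses real failures, which is exactly the methodology question Aitken raises.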
Rafiq: At 40nm you were spending X amount of dollars to tape out. At 28nm you were spending 1.5X to 1.8X. With the 16nm finFET it’s going much, much higher. So the number of people entering 40nm vs. 28nm vs. 16nm will be very different. My guess is the number of companies that can afford 16nm will be very small. It will be restricted to the very big companies that have the need for speed and power and the ability to fund it. But with fewer companies moving to 16nm there also will be less data.
Molina: I agree, there will be less data. But I don’t see that as the prevailing issue with signoff, even at 16nm. What customers are more concerned with is that they have twice the number of instances in their designs. At those nodes, you’ll see some customers taping out designs in the 200 million to 300 million instance range.
LPHP: It’s more about pain levels, right?
Molina: Yes. There are technical challenges, for sure. Characterization is the way to achieve signoff more quickly.
Aitken: We looked at the electromigration problem. Nobody cares about it and nobody thinks it’s important, but it really is important. So you have to make sure that the library is designed in such a way that it will handle it by default, and that the characterization is set up in a way so that using a cell won’t cause a problem. Characterization is very important.
Robertson: There isn’t a lot of test-chip data and characterization data. And there aren’t a lot of customers moving to that node, so we’re not going to learn from that. Unless you’re a memory designer, you’re not going to learn from data anytime soon.
Rafiq: Different blocks of the chip are working at different voltage levels, too. So the number of corners will increase significantly. It’s definitely getting more complex.
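The combinatorics behind that are easy to sketch. The domain, voltage, and corner counts below are made-up illustrations, but they show how independent voltage domains multiply an already large corner set:

```python
# How signoff corner counts multiply with independent voltage domains.
# All counts and values are illustrative assumptions.
from itertools import product

process = ["ss", "tt", "ff"]                           # process corners
temps = [-40, 25, 125]                                 # degrees C
extraction = ["cworst", "cbest", "rcworst", "rcbest"]  # RC corners

# One operating voltage per domain; three independent domains (assumed).
domains = {
    "cpu": [0.72, 0.80, 0.88],
    "gpu": [0.70, 0.90],
    "ddr": [1.10, 1.20],
}

voltage_combos = list(product(*domains.values()))
total = len(process) * len(temps) * len(extraction) * len(voltage_combos)
print(f"{len(voltage_combos)} voltage combinations -> {total} signoff corners")
# 12 voltage combinations -> 432 signoff corners
```

In practice teams prune this set aggressively, which is why picking corners carefully, as Aitken notes above, is itself a signoff decision.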
Aitken: The IP vendors have to know about it, margin for it, and hide it so at the higher levels it goes away as a problem.
Hoogenstryd: And it still has to be competitive on performance.
Aitken: It doesn’t become easier, but certain problems get hidden from you. The IP vendor figures out all of this stuff for you and puts it into a nice box that you can put into your design. Realistically that doesn’t happen completely, but enough of it can be hidden. We went from selling hardened CPU cores in the ’90s, to RTL-based cores in the early 2000s, to RTL-based cores plus an optimized set of libraries plus a methodology that says here’s how you implement it. Does that eventually lead back to hardened cores? Probably not, but it does lead to more of a toolbox than just RTL.
Molina: Abstraction is another compromise we make to get the chip out. We can’t handle the capacity of the design. It’s too large. The compromise is that you build an IP core and time it in isolation, then drop some constraints around it. To ensure that the timing of that block is still valid, you have to make sure there aren’t any in-context effects. You can’t route over it, either, or you have to re-time the block in context. If you don’t want to do it that way, you add margin. And if you want the closest, most representative timing view, you have to do everything flat.
LPHP: That’s versus hierarchical?
Molina: Yes.
Robertson: Not too many people are going to move to the next node, so there will be less data. Now the question is, do you move to the next node, or do you stay at the same node and continue to innovate and add complexity? There are companies doing that at 40nm and 65nm. The amount of design and guard-banding they have to do is significant. No one can afford to do it flat, so you do it with abstraction. You stay at the same node and make it more complex. The third and final option is to go to 3D-IC. That’s a whole other can of worms, with lots of abstractions and unknowns.
Hoogenstryd: There is definitely a split. There are two handfuls of companies charging ahead and doing trials at 16nm because they see the cost/performance benefit because of their volume. And then you have this big segment of the market that is trying to squeeze more and more out. There are a lot of folks who are sticking at 28nm or 40nm. That’s really where the margin issue is focused. The guys running full speed ahead want to make sure their chips work, and they’re willing to put in more margin to make sure their chips yield. The other group of customers wants to get higher frequency and reduce over-design so they can do an area reduction. How can they do that?
Aitken: We’ve been seeing that, as well, and there’s a design-style sensitivity to this. When you provide IP, you have to understand the design style that people have so what you’re offering fits into that style. In some cases, we’ve been thinking about tailoring it: ‘If you have this design style, you might want to use this flow.’
Rafiq: Why not harden the IP, and then, with a through-silicon via and a lower-level technology like 40nm or 65nm, build some of your functionality there? Rather than a hard IP core, why not build hard-IP silicon with a through-silicon via? As long as the tools are able to handle it, why not?
LPHP: What you’re proposing is a platform approach, right?
Rafiq: Yes, absolutely.
LPHP: And then you have 3D transistors on top of 3D-IC, which becomes even more challenging.
Rafiq: When you’re selling a product, you’re trying to provide value to the customer for the lowest cost.
LPHP: So will we be more comfortable with the result after signoff in the future or less comfortable?
Hoogenstryd: Every time we get to a new node there is panic. By the time it really goes into production, things seem to work out. People find solutions and ways to get things done. We’re at the beginning of the panic mode. It will be the same.
Aitken: That’s almost true by definition. Somebody, somewhere has to sign off. And they have a certain level of comfort with all the compromises they know about and those they don’t know about. But whatever level of comfort that is, it’s going to remain about the same.
Molina: With timing, it’s going to be about the same if you have an infinite amount of time, which isn’t the case. Time to market isn’t going to change and complexity is going to increase, which will cause people to not sleep as well at night—unless we can come up with solutions to restore that level of confidence.
Robertson: In the short term, there will be less confidence. If you’re going 3D-IC you’ll be less comfortable. If you’re going to 16nm you’ll be less comfortable. With timing it may be the same, but once you add in everything else, including electromigration, in the short term there will be less confidence. Can we get to the same confidence level? In time, maybe, but it certainly won’t be higher confidence.