Experts At The Table: The Trouble With Corners

Last of three parts: The value of moving to 22nm; data overload; the challenge of accuracy.


By Ed Sperling
Low-Power Engineering sat down to discuss corners with PV Srinivas, senior director of engineering at Mentor Graphics; Dipesh Patel, vice president of engineering for physical IP at ARM; Lisa Minwell, director of technical marketing at Virage Logic; and Jim McCanny, CEO of Altos Design Automation. What follows are excerpts of that conversation.

LPE: As we get to 22nm we’re starting to get to things like double patterning. Does that affect corners?
Srinivas: Double patterning is more of a manufacturing variation. It has some effect on the length, which affects the device. There are tools for addressing these kinds of effects. You can do an exact simulation or you can guard-band it.
McCanny: The transistor length is larger because of double patterning. The sensitivity to variation will be reduced. So it actually may help—inadvertently.
Patel: Double patterning raises more difficult questions about the economic viability of going to 22/20nm. The throughput will go down and the cost of wafers will go up. At what point do you go from 28nm to 22nm? You’re expecting a density benefit because it shrinks the size of your function, but it’s going to take you ‘x’ times longer to do this. Is it viable?
Minwell: At least part of this concerns design techniques. If you also have algorithms built in for test, customers will be comfortable picking it up on-chip while it’s running. They also have the ability, when they test their parts, to tweak things. There’s a whole other aspect to this that can reduce the stress on our customers from an up-front perspective.

LPE: That’s basically model up front and then tweak as you go, right?
Srinivas: This is post-silicon tuning.

LPE: But that’s only part of the picture. You need to have the ability to tweak all along the way, don’t you?
Patel: Yes. There are a number of tests. When the silicon comes back you run the DFT (design for test) and at that time you use these techniques to tune certain parameters. Then when you’re in the field you can do some tuning.

LPE: Like bringing your car in for a software update?
Patel: Yes, it’s a similar approach. You might find the load is getting very heavy, so you turn the frequency down a little bit and still keep the same performance.

LPE: What happens as we get down to 22nm and we can no longer do the same kind of guard-banding we could do at 90nm or even 45nm?
Srinivas: Over-design doesn’t necessarily help, because you add more area to the chip just to make the critical path faster. You need to model some of these effects realistically, not over-pessimistically, so you don’t assume every path is 10% faster and 10% slower at the same time. Random variation tends to cancel out along a path, and that mitigates the over-design. You can’t model everything, so there is still some work to be done.
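To illustrate the cancellation Srinivas describes, here is a minimal sketch with purely illustrative numbers (not from the panel): a flat 10% derate assumes every stage on a path is slow at the same time, while independent random variation adds in quadrature, so the statistical margin grows only with the square root of the path depth.

```python
# A minimal sketch (illustrative numbers, not from the panel): compare a flat
# 10% derate, which assumes every stage on a path is slow at the same time,
# with a statistical margin where independent random variation adds in
# quadrature and grows only with the square root of the path depth.
import math

nominal_stage_delay = 50.0   # ps per stage (hypothetical)
random_sigma = 2.5           # ps of random variation per stage, 1 sigma (hypothetical)
flat_derate = 0.10           # the "every path is 10% slower" guard-band
k = 3.0                      # sigma multiplier for the statistical margin

for stages in (5, 20, 50):
    nominal = stages * nominal_stage_delay
    flat_margin = flat_derate * nominal                   # grows linearly with depth
    rss_margin = k * random_sigma * math.sqrt(stages)     # grows with sqrt of depth
    print(f"{stages:2d} stages: nominal {nominal:6.0f} ps, "
          f"flat margin {flat_margin:5.0f} ps, 3-sigma RSS margin {rss_margin:5.1f} ps")
```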
McCanny: AOCV (advanced on-chip variation) is being done outside the corners at the moment. That’s the current state of the technology, but it’s a significant amount of characterization work. We think it’s worse than a statistical approach, and you end up with a global number per cell. We see a lot of variability depending on how the circuit is operating. Another issue with corners is managing the data. It’s putting a stress on everything. If you look at the corners you create, there is a ton of duplicated data. If you have 100 corners, there’s an enormous amount of data that’s duplicated.
Patel: We have challenges not only in creating the data but also in validating it and even distributing the data. We have to distribute all this data to our customers through a Web-based channel. Imagine trying to transfer gigabytes of data down a pipe that isn’t particularly thick. There are lots of changes we need to think about in terms of the number of corners that we are giving customers.
Srinivas: EDA tools usually suck up all the libraries and work with them. But with machine sizes where 128 gigabytes of memory is the norm, that’s becoming a problem. Everyone wants to reduce the memory, but it’s not uncommon to have 10, 12 or 15 gigabytes taken up by the library. Imagine if you add more corners. EDA companies are putting libraries on disk so they can be used as needed.
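As a rough sketch of the “libraries on disk, loaded as needed” idea Srinivas mentions, the cache below indexes corner files up front and parses a view only the first time a run touches it. The class, file layout and corner names are hypothetical, not any EDA vendor’s actual mechanism.

```python
# A hypothetical sketch of the "libraries on disk, loaded as needed" idea:
# index the corner views up front, parse a .lib only when a run first touches
# it, and keep just a handful resident. Not any EDA vendor's actual mechanism.
import functools
from pathlib import Path

class CornerLibraryCache:
    def __init__(self, lib_dir):
        # Record where each corner view lives without reading its contents.
        self.paths = {p.stem: p for p in Path(lib_dir).glob("*.lib")}

    @functools.lru_cache(maxsize=8)   # cap how many corners stay in memory
    def load(self, corner_name):
        # Placeholder for parsing; a real flow would build timing tables here.
        return self.paths[corner_name].read_text()

# Usage: only corners the analysis actually touches are pulled off disk.
# cache = CornerLibraryCache("libs/")
# slow_corner = cache.load("ss_0p81v_125c")
```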
Patel: A while back we couldn’t even compile the DCS corner we had created because the machine we were using was not big enough. That was just trying to compile one corner.
Minwell: On the subject of guard-banding, I am noticing that customers are starting to move away from it. One thing we haven’t discussed here is accuracy. Every time you make a transition, from the fab creating a model, to the model being used in a simulator, to the IP vendor adding their characterized view, a certain amount of approximation is introduced. By the time you get to the chip level, the designers are dealing with layers and layers of approximation, and it’s hard to know where you stand on accuracy at that point. What I see is more reliance on post-silicon compensation and more customers pulling back on guard-banding.

LPE: Does the problem get worse at older nodes because those nodes are being optimized for low-power?
Srinivas: Yes, even at the older nodes SI (signal integrity) is harder. Things like L-gate are becoming global problems. People use a 10% derate to cover their problems, but it’s no longer that easy.
Patel: At 180nm, the strategies are still the same. What has become more complex at 180nm is the low power. We have introduced voltage domains. From a corner point of view it’s not exploding. But if someone is doing a complex microcontroller, they will need additional corners for each voltage domain.
Srinivas: When people are looking at advanced nodes they are looking at more Vt levels. At 45nm we saw three Vt levels—standard Vt, low Vt and high Vt. Now people want more flexibility to manage power, leakage and timing. Now we’re seeing six or seven levels. It gives you more flexibility but it makes it harder.
Minwell: It also increases your library size. The tools are going to break.
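A back-of-the-envelope calculation, with purely illustrative view sizes and corner counts, shows why the extra Vt flavors multiply the library volume Minwell is describing: every flavor has to be characterized at every signoff corner.

```python
# A back-of-the-envelope sketch (all numbers illustrative): every extra Vt
# flavor has to be characterized at every signoff corner, so library volume
# multiplies quickly.
vt_flavors = {"three Vt flavors": 3, "seven Vt flavors": 7}
pvt_corners = 40        # hypothetical process/voltage/temperature corner count
mb_per_view = 150       # hypothetical size of one characterized .lib view, in MB

for label, flavors in vt_flavors.items():
    views = flavors * pvt_corners
    print(f"{label}: {views} library views, roughly {views * mb_per_view / 1024:.1f} GB")
```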
McCanny: It does break the tool in terms of capacity. But also, in trying to optimize a problem the number of variables has a major impact on performance. We may be completely overloading the optimization problem. We’re seeing people using a subset of corners for optimization and another subset for signoff. If you make the optimization problem manageable, you pass the rest of the problem off to an ECO. But guard-banding is inherent in the whole process. There are guard-bands in the SPICE model. There are guard-bands in the library. If people are trying to be more fine-grained about their voltage level and their Vt, they need to better understand what the guard-bands are. In the past, people took a library from their supplier and trusted it. Now they’re starting to question it. There are little decisions that get made behind the curtain that can have a big impact on a design.
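The optimization/signoff split McCanny describes might look something like the sketch below: drive optimization with the few corners that dominate worst slack, then check the full corner set at signoff and hand any leftover violations to an ECO. Corner names and slack values are hypothetical.

```python
# A hypothetical sketch of the optimization/signoff split: drive optimization
# with the few corners that dominate worst slack, then check the full corner
# set at signoff and hand leftover violations to an ECO. Corner names and
# slack values are made up for illustration.
def dominant_corners(slack_by_corner, keep=4):
    """Return the corners with the worst (most negative) slack."""
    return sorted(slack_by_corner, key=slack_by_corner.get)[:keep]

# Worst slack (ns) per corner from an initial timing run (hypothetical).
slack_by_corner = {
    "ss_0p81v_125c": -0.32, "ss_0p81v_m40c": -0.28, "ff_0p99v_125c": 0.10,
    "tt_0p90v_25c": 0.05,  "ss_0p75v_125c": -0.41, "ff_0p99v_m40c": 0.12,
}

opt_set = dominant_corners(slack_by_corner)
signoff_set = list(slack_by_corner)   # the full set is still checked at signoff

print("optimize against:", opt_set)
print("sign off against all", len(signoff_set), "corners; remaining violations go to ECO")
```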
Minwell: The fabrication cycle is also getting longer. It may take six months now. That’s difficult for customers.


