Expert Shootout: Parasitic Extraction

Second of three parts: The effects of multicore designs, lower voltages and workarounds; why smarter techniques are needed.


Low-Power Engineering sat down to discuss parasitic extraction with Robert Hoogenstryd, director of marketing for design analysis and signoff at Synopsys, and Carey Robertson, product marketing director for Calibre Design Solutions at Mentor Graphics. What follows are excerpts of that conversation.

LPE: Does parasitic extraction get more complex as we move into multicore chips? And if so, why?
Hoogenstryd: Yes, and the challenge is that customers are trying to deploy hierarchical methodologies on chips. The goal is to re-use as much as you can in circuits that you duplicate, but you need to be able to accurately model the interactions between blocks. Coupling capacitance, for example, is a big issue in that area. You need to be able to model the effects of one block connecting with another, but you've done extraction in isolation because you did it as part of the IP development. The big challenge today is people coming up with effective hierarchical methodologies that don't double count or miss data when you put these things together and analyze everything in context.
Robertson: Multicore means more hierarchy, and hierarchy makes sense from an intuitive perspective because you can build things up. But what you're going to do is create a model for your cores in isolation, and that's not how they're going to perform. You can't do things flat, though, so you have to make some tradeoffs. How the circuit performs in isolation isn't the same as having four or eight of these put together. The noise and the coupling are much different. Getting beyond timing signoff or some other analysis to say you're not going to have coupling or noise issues is going to require some combination of heuristics, guard-banding and extraction to estimate how these cores are going to operate on a chip. What we really want to do is characterize them by themselves. They may all behave the same, but you need to analyze that, as well.
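To make the missed-coupling hazard concrete: a net at a block boundary, extracted with the block in isolation, sees only its own environment. Once a neighboring core is placed alongside, it picks up cross-block coupling the isolated model never saw. The sketch below uses a crude parallel-plate estimate with entirely hypothetical geometry and material values (neither vendor described tool internals); a production extractor would use field-solver-calibrated patterns, but the arithmetic shows the scale of what an isolated model can miss.

```python
# Sketch: why in-isolation extraction misrepresents boundary nets.
# All geometry and material values are hypothetical illustrations.
EPS0 = 8.854e-18  # F/um, vacuum permittivity in micron units
EPS_R = 3.9       # relative permittivity of an SiO2-like dielectric

def sidewall_cap(length_um, thickness_um, spacing_um):
    """Parallel-plate estimate of coupling cap between two parallel wires."""
    return EPS0 * EPS_R * length_um * thickness_um / spacing_um

# A boundary net extracted with the block in isolation (hypothetical value):
c_ground = 2.0e-15  # F, cap to ground found during IP-level extraction

# After placement, a neighboring core's wire sits 0.1 um away:
c_couple = sidewall_cap(length_um=50.0, thickness_um=0.2, spacing_um=0.1)

print(f"isolated model: {c_ground * 1e15:.2f} fF")
print(f"in context:     {(c_ground + c_couple) * 1e15:.2f} fF "
      f"({c_couple * 1e15:.2f} fF of cross-block coupling missed)")
```

A hierarchical methodology has to add that coupling term exactly once when the blocks come together, which is the double-counting problem Hoogenstryd describes.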

LPE: How much does dropping the voltage affect all of this?
Robertson: Customers aren't just dropping the voltage. They're also really pushing on voltage analysis, IR drop and power rail analysis to identify, at this low-voltage node, what Vdd really is for every one of the devices, and at what time. They may need a very accurate RC network of the power line, which is humongous. Then you have reduction techniques or simulation techniques on top of that to identify what the Vdd is, at what time, for every one of these MOSFETs. It's very difficult.
Hoogenstryd: There is no consistency among customers. We see some design teams pushing the limits, trying to get voltages as low as possible. They need to do all this analysis to make sure IR drop doesn't force the voltage down to where a device doesn't work anymore. Then there are others who want to employ a hierarchical design methodology but don't want to think about hierarchy. They just want to put things together and have it work. And there are others who take much more practical approaches. I've been with customers doing low-power designs, and when we talk about IR drop analysis they say they don't need it. They just design really good power rails. They guard-band. They over-design because yield is a concern. There are companies today doing multicore design where they guard-band their IP blocks. They shield them. They try to solve the problem through design techniques rather than relying on after-the-fact analysis to make sure everything will work in an unpredictable environment. They use design techniques to try to make things predictable. There are various levels of that. I've even seen companies that are very concerned about area using some of these design-for-practicality techniques. They know they might lose a little silicon area, but they get better yield. There is really a lot of variability in how customers are trying to solve this problem. Some use more analysis, more detail, more information. Others are just trying to design the problem out.
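For a static check, the power-rail analysis Robertson describes boils down to solving a nodal system G·v = i over the grid's extracted resistances; the "humongous" part is that real grids have millions of nodes plus time-varying currents and capacitance. Here is a minimal DC sketch on a one-dimensional rail, with hypothetical segment resistance and tap currents standing in for extracted data:

```python
# Sketch: static IR drop on a tiny power rail, modeled as a resistor
# chain from the Vdd pad through N taps, each drawing a fixed current.
# All values are hypothetical; real rails are huge 2D RC meshes.
import numpy as np

VDD = 0.9      # V at the pad (hypothetical)
R_SEG = 0.05   # ohms per rail segment (hypothetical)
I_TAP = 0.01   # A drawn at each tap (hypothetical)
N = 8          # taps along the rail

G = np.zeros((N, N))        # nodal conductance matrix
rhs = np.full(N, -I_TAP)    # tap load currents leave each node

g = 1.0 / R_SEG
# Segment from the fixed-voltage pad to tap 0 (pad folded into the RHS):
G[0, 0] += g
rhs[0] += g * VDD
# Segments between consecutive taps:
for k in range(N - 1):
    G[k, k] += g
    G[k + 1, k + 1] += g
    G[k, k + 1] -= g
    G[k + 1, k] -= g

v = np.linalg.solve(G, rhs)  # solve G @ v = rhs for tap voltages
for k, vk in enumerate(v):
    print(f"tap {k}: Vdd = {vk:.4f} V (IR drop = {1000 * (VDD - vk):.1f} mV)")
```

Scaling this to a full-chip RC mesh with per-cycle current waveforms is what drives the reduction and simulation techniques he mentions.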

LPE: How much time is parasitic extraction taking? Is it increasing?
Hoogenstryd: It's still minuscule compared to the verification. I hear from customers that they want parasitic extraction to run faster. Every year they're asking for 2x, 3x or 5x increases. For some reason they think we can magically change our code and run it 5x faster even though we're extracting more data. But they are spending a lot more time on the analysis side. Where they're really feeling the pressure on the digital side, particularly with extraction driving place-and-route flows, is in the ECO (engineering change order). They want to do an ECO overnight. That means they want to take the place-and-route data, go through timing analysis and ECO optimization, and get back into place and route within a day. They're pushing on every one of the tools in that chain to be faster. It's the same on the analog side. They want to be able to turn around their simulation or other analysis very quickly. The natural thing is to push the tool to go faster. What we've been trying to do is change the customer's mindset to focus on the end goal and how to make the whole flow faster. You need to look at how you're running your extraction, what you're using that data for, and how to make that data accurate but more efficient. It's a multi-pronged approach. Extraction is part of the analysis flow.
Robertson: People are budgeting a lot more time for LVS or DRC turns because they understand they need to iterate through those until the design is clean, and then go into downstream extraction and simulation. The amount of time budgeted is definitely greater on the verification side. But people aren't only pushing this for performance. They're being asked to do more. It's not just timing closure, and it's not just timing closure at one corner. Customers want to do 5 or even 25 corners, but they don't have the time. Getting raw performance isn't necessarily the answer. It's intelligence around your overall methodology. You won't get more time. If you get four weeks for verification and two days for extraction, you're not going to get two weeks for extraction. So if customers are going to fit in these various tasks around corner simulation, rail analysis and signal integrity, it's about how we feed those analysis tools appropriately. That's a real struggle. We need to step back and say, 'How do we come up with an intelligent corner methodology and power analysis approach, and can we perhaps do one extraction to feed all these other goals?'

LPE: There are two forces at work here. One is that everything is closer together. The other is there’s more real estate, which means you can cram more on a chip. So don’t you have to analyze more simultaneously?
Robertson: Yes, you do have to crunch all of these polygons. The electrical fields are more complex, there are more of them, and there are more R's and C's. And there are more analysis tools downstream, so it is more analysis in less time. But we do find customers, whether they realize it or not, customizing flows themselves. Everyone is following foundry rule decks and foundry device models. There are plenty of things the foundry will do. In that extra time, customers are coming up with design flows that are custom to them. If you look at five fabless companies, they have different ways of doing designs to optimize what they think they do well. It may be low power or wireless methodologies. We're doing more on the verification side to accommodate their design styles with customized verification flows, even though underneath that there's been a standardization of rule decks, extraction methodologies and device models. They need new design techniques and new verification techniques, as well as new extraction flows.

LPE: Are companies getting to the point where they’re looking at good enough instead of checking every corner case?
Robertson: No one has ever said that to me. Inside of companies, I’m sure that’s happening. But what we’re hearing is always more accuracy, more reduction, more speed—either now or for the next node.
Hoogenstryd: Customers are using multiple approaches. One is to push on their suppliers to make the tools faster so they can do a chip that's four times bigger at the next node in half the time they did it at the previous node. In Japan the mantra is 5x. They're struggling with a chip that's 2x bigger. With their budget constraints they have to stick with the computers they have. They can't buy any more hardware. But this new chip is going to have more functional modes. Therefore they have to do twice as much simulation on a chip that's twice as large with the same hardware resources. So they ask us to make the tool four to five times faster so they can do it in the same time with the same resources. That's one prong. The other approach is to come up with practical solutions to get their arms around the problem. Say you have four voltage islands in a design, and each one can turn on or off, or go from a high voltage to a low voltage. If you want to capture all the end cases with static timing and find the critical path across the blocks, you have to simulate or analyze 16 different voltage combinations. Customers come up with methodologies to model around that, with different scenario combinations. They may spend more time up front doing analysis to figure out which critical corners they have to analyze at the chip level or the block level to guarantee they capture the boundary conditions. Each customer has a different threshold of risk, too. Some companies have multiple spins built into their strategy. They know the third spin is the one that goes into high volume.
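The 16 cases Hoogenstryd cites are simply the cross product of island states: four islands that can each be in one of two states gives 2^4 = 16 combinations. A minimal enumeration sketch, with hypothetical island names and rail voltages, is below; the up-front analysis he describes amounts to pruning this list down to the handful of corners that can actually set the critical path.

```python
# Sketch: enumerating voltage-island scenario combinations.
# Island names and rail voltages are hypothetical illustrations.
from itertools import product

islands = ["cpu0", "cpu1", "dsp", "io"]
states = {"hi": 1.0, "lo": 0.7}   # V per state (hypothetical)

corners = list(product(states, repeat=len(islands)))
print(f"{len(corners)} combinations to analyze")   # 2**4 = 16

for corner in corners:
    rails = dict(zip(islands, (states[s] for s in corner)))
    # ...each 'rails' scenario would be fed to STA or simulation...
    print(rails)
```

Every additional island doubles the count, which is why scenario pruning matters more at each node.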

LPE: Are companies worried about problems they can’t solve?
Hoogenstryd: Yield is the big thing customers are worried about. In my opinion, this push to get everything closer and closer together while worrying about 90% yield seems to be a crazy tug of war. The closer you put things together, the more likely you are to have a yield problem. For customers, density is not the issue. The average chip size hasn't changed much in the past 20 years. But at 22nm, how many companies can come up with enough IP to fill that chip, other than by putting more and more memory on board? The DRC rules have exploded to compensate for lithography and CMP effects. There's a debate about whether we continue pushing things closer together or go to restrictive design rules, which may not be as efficient in terms of silicon area but do simplify the flow and make it more productive.
Robertson: When a customer does a DOE (design of experiments) at the leading edge and the silicon varies 20% or 40% vs. simulation, they usually ask us to help find the error and figure out whether they're compartmentalizing the problem appropriately. We have all of these techniques to identify how the silicon performs. There is geometric variation that needs to be captured. Is it simulation-based or table-based? Is that the source of the error? Is it silicon stress? There are all of these effects. Some are in the device model. Some are in the printing of the wiring and the devices around them. And then there's the actual calculation of the parasitics in the simulation. In the past, at 65nm or 90nm, if you had a DOE with poor performance you could find the tall nail. Now when customers ask, it's a combination of these effects. It's probably five or six things that need to be fixed. And that's at 32nm. At 22nm, with FinFETs, it's going to be really interesting.


