Expert Shootout: Parasitic Extraction

Last of three parts: What happens after 22nm, the impact of 3D IC designs, FinFETs, stress, and thermal issues.


Low-Power Engineering sat down to discuss parasitic extraction with Robert Hoogenstryd, director of marketing for design analysis and signoff at Synopsys, and Carey Robertson, product marketing director for Calibre Design Solutions at Mentor Graphics. What follows are excerpts of that conversation.

LPE: What changes at 22nm and beyond with structures like FinFETs?
Robertson: It’s not only parasitics. It’s also BSIM (Berkeley Short-channel IGFET Model) or PSP (Penn State-Philips) models. They’re going to have to change, as well. We will start seeing DOEs this year, including version 1.0 of SPICE manuals from foundries. Is there going to be a paradigm shift? I don’t know. But FinFETs’ aspect ratio is different. We have to respond.

LPE: It’s a lot more data, too, right?
Robertson: Yes, it’s a lot more data, as well as new dielectrics. We don’t have the full picture of what the dielectric constants are, whether they’re going to be conformal, or what they’re going to look like. And part of it is what rules are going to be mandated. The more we can constrain the problem to say, ‘This is how you’re going to implement contacts around the FinFET region and you have to do it maximally contacted,’ that’s good news for us. It allows us to constrain the problem. If you want to let designers be as creative as they want, so you have one contact here for the source and another contact here for the drain, that’s great for them but it’s difficult for us to deliver the accuracy.
Hoogenstryd: We’re already working with foundries in that area. It’s going to impact the model and what’s important from an extraction point of view. We seem to have these problems at every process generation, and bright people always figure out how to get through the noise—even though it takes time—and model what’s important. One of the challenges for new structures like FinFETs is that as EDA vendors we have to make investments ahead of the curve. This is an area that is potentially costly. Customers want us to experiment with them at our expense, and hopefully by the time things get settled they want us to be there with commercial solutions. But we still don’t have it as bad as the litho vendors. The lead time on getting lithography ready for production is long, and there’s a lot more experimentation at that end. They may take 20 different approaches knowing that one or two will work out, and they want all their suppliers to investigate with them.

LPE: How does 3D stacking affect parasitics?
Hoogenstryd: From a practical point of view you can look at each chip, but instead of between packages you now have things like TSVs (through-silicon vias). You have to model that. There’s a lot of investigation about what’s going on between the TSV layers and the bottom side or top side of the chip that’s being connected. People are not convinced yet that the economics are there for this to be a mainstream packaging solution. There are a lot of companies doing investigation and test chips.
Robertson: There is a modeling issue beyond the traditional metal stack. It can be accounted for. But it’s not just the interaction of the various layers. It’s also what the model of this TSV is. It could be modeled like a MOSFET. It could be modeled as a parasitic. But its geometries are much different from the interconnect around it. There are inductive effects that most customers are not taking into account when they develop standard chips. Unless you’re in analog or mixed signal you don’t really think about the magnetic field. TSVs have that. There’s also a big netlisting and integration issue. If you’re compartmentalizing and building things up, and here’s a memory above a core to get the proximity benefit, you can model them differently with some interaction. But when you try to do critical-path analysis and that path spans multiple TSVs, you need an extraction that will span different die and potentially different technologies if you mix 65nm with 40nm. Where this may go is, if customers are designing L1 and L2 cache among four dies and getting the extraction/simulation done correctly, that is going to have a profound impact on the entire infrastructure. That includes extraction, netlisting and simulation. That is probably very cool for the designers.
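To get a feel for why TSV geometry matters, here is a rough, first-order sketch in Python of a lumped R/L/C model for a single copper TSV with an oxide liner. The formulas are standard textbook approximations and the dimensions are illustrative assumptions only; this is not the extraction model of either vendor.

```python
import math

# Illustrative lumped model of one cylindrical copper TSV with an SiO2 liner.
# Textbook approximations and assumed dimensions only -- not foundry data.
RHO_CU = 1.7e-8            # copper resistivity (ohm*m)
EPS_OX = 3.9 * 8.854e-12   # SiO2 liner permittivity (F/m)
MU_0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def tsv_parasitics(height, radius, t_liner):
    """Return (R, L, C) for one TSV treated as a lumped parasitic element."""
    r_dc = RHO_CU * height / (math.pi * radius ** 2)        # series resistance
    # coaxial-capacitor approximation for the liner capacitance to the substrate
    c_liner = 2 * math.pi * EPS_OX * height / math.log((radius + t_liner) / radius)
    # partial self-inductance of a round conductor (Rosa's approximation)
    l_self = (MU_0 * height / (2 * math.pi)) * (math.log(2 * height / radius) - 0.75)
    return r_dc, l_self, c_liner

# Example: a 50 um tall, 5 um diameter TSV with a 100 nm oxide liner
r, l, c = tsv_parasitics(50e-6, 2.5e-6, 100e-9)
print(f"R = {r * 1e3:.1f} mOhm, L = {l * 1e12:.1f} pH, C = {c * 1e15:.0f} fF")
```

Even this crude model lands in the tens of picohenries and hundreds of femtofarads per via, which is why the inductive behavior Robertson mentions is hard to ignore once critical paths start crossing die boundaries.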

LPE: Is heat a big problem with that?
Robertson: Thermal is a big deal. The other problem on the device extraction side is stress. How big of a contribution is silicon stress? And as we get to TSVs, stress is an even bigger problem. Stress impacts both leakage and power consumption. Sometimes it’s beneficial and sometimes it’s not. So both stress and thermal are going to require more of a focus.

LPE: So looking ahead, what are the big problems we’re likely to encounter at 22nm and beyond?
Hoogenstryd: We’re already in the midst of 32/28nm modeling. We’ve been working on that for quite a while. I don’t think the work is quite done there because this node is not in full production. There is constant work going on at the silicon foundries. They find new things they need to model to get a more accurate silicon representation. We’re already working with some foundries at 22nm, where the FinFET is going to be first introduced. It’s that constant challenge of keeping up with the modeling. The 3D IC has some implications on the modeling. Another challenge is just helping customers be more efficient. Right now the pressure on design teams to do a bigger chip with the same resources is a problem. The answer is not throwing more CPUs at the problem. Capacity is in many ways a bigger issue than CPU runtime. These chips are bigger and you have to run your software within the same memory footprint constraints. And it’s about working more with customers to focus beyond this one tool.
Robertson: What customers have been saying is they want more accuracy. Even though design is getting more complicated, they can’t deal with inaccurate capacitance and coupling capacitance. They want more accuracy even with all those challenges. Both Synopsys and Mentor have initiatives to change the way we do extraction and bring in field-solver technology to have the best accuracy possible. That’s a first step, but it becomes harder. If you can deliver accurate R’s and C’s, you have to put that into a system. But that system becomes more and more unruly. It’s not just one effect. It’s capturing the variation. It’s capturing what’s happening in the transistor. And then it’s providing that information. We can provide gigabytes of data, but the issue is whether you can provide the right amount of information for the analog designer to do impedance matching and rotate the device and get highly accurate SPICE-like simulation. And can we also provide the right level of information and abstraction for the digital engineer or the rail-analysis person with high accuracy?
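As a loose illustration of the field-solver point, and not a representation of either company’s technology, the sketch below solves Laplace’s equation on a small 2D grid to estimate the capacitance of a pair of metal strips and compares it with the simple parallel-plate formula, which ignores fringing fields. The grid size, strip dimensions and iteration count are arbitrary assumptions.

```python
import numpy as np

EPS_0 = 8.854e-12  # F/m

def strip_capacitance(n=120, strip_cells=30, gap_cells=8, iters=8000):
    """Toy 2D finite-difference field solve: capacitance per metre of depth
    of the driven strip to everything held at ground (the lower strip plus
    the grounded bounding box)."""
    v = np.zeros((n, n))
    fixed = np.zeros((n, n), dtype=bool)          # conductor / boundary cells
    mid = n // 2
    c0, c1 = mid - strip_cells // 2, mid + strip_cells // 2
    top, bot = mid - gap_cells // 2, mid + gap_cells // 2
    v[top, c0:c1] = 1.0                           # driven strip at 1 V
    fixed[top, c0:c1] = fixed[bot, c0:c1] = True  # both strips held fixed
    fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True  # grounded box
    for _ in range(iters):                        # Jacobi relaxation, enough for a rough answer
        avg = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                      np.roll(v, 1, 1) + np.roll(v, -1, 1))
        v = np.where(fixed, v, avg)
    gy, gx = np.gradient(v)                       # field on the unit grid
    energy = 0.5 * EPS_0 * np.sum(gx ** 2 + gy ** 2)  # stored energy per metre of depth
    return 2.0 * energy                           # C = 2W / V^2 with V = 1 V

c_field = strip_capacitance()
c_plate = EPS_0 * 30 / 8                          # parallel-plate estimate: eps0 * w / d
print(f"field solve: {c_field * 1e12:.1f} pF/m   parallel-plate: {c_plate * 1e12:.1f} pF/m")
```

The gap between the two numbers comes from fringing and coupling to the surroundings, roughly the kind of accuracy difference between a pattern-based extractor’s approximations and a full field solve.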


