Which Data Works Best For Voltage Droop Simulation

Growing popularity raises new challenges around when and how to apply it, and what changes with 3D-ICs.

Experts at the Table: Semiconductor Engineering sat down to talk about the need for the right type of data, why this has to be done early in the design flow, and how 3D-IC will affect all of this, with Bill Mullen, distinguished engineer at Ansys; Rajat Chaudhry, product management group director at Cadence; Heidi Barnes, senior applications engineer at Keysight; Venkatesh Santhanagopalan, product manager at Movellus; Joseph Davis, senior director for Calibre interfaces and mPower EM/IR product management at Siemens EDA; and Karthik Srinivasan, director for R&D, EDA Group – Circuit Design & TCAD Solutions, at Synopsys. What follows are excerpts of that conversation. Part one of this discussion can be found here. Part two is here.


L-R: Ansys’ Mullen; Cadence’s Chaudhry; Keysight’s Barnes; Movellus’ Santhanagopalan; Siemens’ Davis; Synopsys’ Srinivasan.

SE: The right kind of data is needed to predict voltage droop/IR drop in pre-silicon simulation, but do we have that data today?

Mullen: We have the data, but if users only analyze one corner and one mode, they're really limiting it. Being open to exploring more points in the design space earlier is critical to gaining confidence that the chip is going to work.

Chaudhry: I would second that. We have all the data. How much effort you're willing to put in is related to how accurate you want the modeling to be. It's impossible to cover everything, or to cover all corners, so it's really about the approximation. Also, as we expand coverage, we will see more violations. As you get better at understanding the worst-case possibilities, you will see more problems. You then need to be able to come up with ways to fix those problems automatically.

Davis: The data is somewhere, but do you have the ability to really simulate it? What we run into in digital is that aspect of coverage. People still see silicon failures today. They go in, they debug for a month, and they find which vector, which excitation caused it. Once they know that, they can simulate it perfectly. But finding that needle in the haystack is the challenge. So coverage is the critical aspect for both analog and digital. In analog it's better understood, because analog designers are more intimately familiar with the operation of their architectures and what's going on in their circuits. But as you get to systems of analog circuits, you have the same problem of, “Oh, this thing over there is drawing more current than I thought.” Then you've got to be able to analyze things at the system level on the analog side. Is the data available? Kind of. But is it practical today to really get there?

Santhanagopalan: There’s a lot of data available. But if we focus on the differentiated data, the type of data you need to make those key insights and to get to the specific vector or the specific workload that causes the issue, searching through it is pretty hard even now. There has been a big push recently toward silicon lifecycle data management approaches that gather data, correlate it across millions of different parts and different conditions, and provide insights based on that. So from that point of view, there is still a lot of opportunity to focus on gathering these differentiated data points, which potentially can be re-used in the design cycle to either mitigate issues or design better solutions.

Barnes: In terms of data, it’s one of those small nuances most people using a simulator don’t think about as much as they should. Whether it’s your bulk capacitor on your PCB, your package capacitor, or your die capacitance, when you do an EM simulation you’re coming up with a model for that capacitor. If you try to put the full capacitor, with all the plates in its topology, into an EM simulator, it becomes a very intense simulation. Usually, you have a model or a component with ports on it. The problem is determining how much of the plus/minus port separation is going to add to the inductance. Every capacitor has resistance and inductance, and what we’re really fighting in power integrity is that inductance, because eventually a capacitor that can’t deliver charge fast enough turns into an inductor. We really need to know what that inductance is so we know how much capacitance we have to put after that capacitor, or closer to the device, or how effective that capacitor is. The problem in the simulators is that everybody hand waves about how they define that port. How much of the port separation is going into the model of the component, that capacitor, and how much is being included in the EM simulation? That’s a hand-waving thing. Everybody will agree that if you spend time with a simulator, you can figure out how that simulator defines things, and you can work around them. But if you just grab models off of a vendor data sheet and stuff them into your simulator, I would say a good percentage of the time you’re going to be overcounting the inductance or doing something wrong in terms of how you’re using that model. So be very conscious of what types of models go with your simulator, and know how to use them correctly. This is very important in terms of missing data. A lot of times it’s not really clear how those model ports are being defined.
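
To make the port-definition issue concrete, here is a minimal Python sketch of the effect Barnes describes. The capacitance, ESR, ESL, and port inductance values are illustrative assumptions, not figures from the panel. It models a decoupling capacitor as a series R-L-C branch and shows how counting the port inductance a second time lowers the self-resonant frequency and raises the high-frequency impedance.

```python
import math

# Minimal sketch of a decoupling capacitor as a series R-L-C branch.
# All values below are illustrative assumptions, not data from the panel.
C   = 100e-9   # 100 nF capacitance
ESR = 10e-3    # 10 milliohm equivalent series resistance
ESL = 0.5e-9   # 0.5 nH equivalent series inductance in the vendor model

# Hypothetical extra inductance from how the EM-simulation port (the
# plus/minus separation) is defined. If the vendor model already includes
# it, adding it again over-counts the total inductance.
L_PORT = 0.3e-9

def z_mag(l_total, f):
    """|Z| of the series R-L-C branch at frequency f (Hz)."""
    w = 2 * math.pi * f
    return abs(ESR + 1j * w * l_total + 1 / (1j * w * C))

def srf(l_total):
    """Self-resonant frequency: above this, the part behaves as an inductor."""
    return 1 / (2 * math.pi * math.sqrt(l_total * C))

print(f"SRF, vendor ESL only         : {srf(ESL) / 1e6:5.1f} MHz")
print(f"SRF, ESL + port L over-count : {srf(ESL + L_PORT) / 1e6:5.1f} MHz")
print(f"|Z| at 500 MHz, vendor model : {z_mag(ESL, 500e6):.2f} ohm")
print(f"|Z| at 500 MHz, over-counted : {z_mag(ESL + L_PORT, 500e6):.2f} ohm")
```

The specific numbers don't matter; the point is that where the inductance is booked, in the component model or in the EM simulation, changes how effective the capacitor appears to be.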

Srinivasan: If you talk about data in a monolithic flow with a single EDA vendor, it’s more or less there. The issue is mostly about interoperability. A user might want the best-in-class solution for everything. They might have a transistor-level EM/IR tool, an SoC-level EM/IR tool, or they might want to use something altogether different for a 3D-IC. There is no open standard at this point for these tools to talk to each other. Each of them is more or less closed, and they operate well within their own tool sets. This is an area that needs to evolve, not unlike timing, where Liberty is the standard and everybody uses Liberty. The models are evolving, but we’re still not there yet. We are more closed, especially with thermal and package models. All of those need to open up a bit and be standardized, so that the best-in-class tools customers would like to mix and match can talk to each other and give them the results, too.

SE: How does all of this affect the chip architecture?

Mullen: We’ve talked a lot about 3D-IC. That’s a big trend in die architecture these days, multiple chiplets working together, and we’ve got to make sure those flows are really robust across heterogeneous, analog, digital, and multi-vendor designs. Another issue that is a little more subtle is that at the block level, it might not be best to just add metal and connectivity between blocks, because of local switching effects. You want some isolation. For example, if you connect two blocks together at very low metal layers, you’re going to get interactions between them and it’s going to be harder to close those blocks, whereas if you connect at a higher metal layer, you get more isolation and a more predictable design flow. The design flows will evolve to adapt to these effects.

Davis: 3D is clearly an area of interest. We talked about backside power delivery earlier. 3D then puts it through the die, so people are facing multiple constraints — thermal, IR drop, performance, density, and cost. All of these things are coming together, and often they are counter-correlated. You might put things close together. That means they’re more tightly coupled, they work faster. But now I’ve got noise, I’ve got higher IR drop, thermal, and so I’m putting power delivery up a stack of eight die. There’s IR drop just up that stack, even if it is a very large TSV. The industry is responding, and 3D is still in its infancy. It’s evolved incredibly fast, and our analytic capabilities are responding because the chip companies are moving faster than we can.
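
As a rough illustration of Davis's point about IR drop up a die stack, here is a toy Python calculation. The per-tier resistance and per-die current are assumptions chosen only to show the accumulation, not numbers from the discussion: each TSV segment lower in the stack carries the supply current of every die above it, so the drop compounds toward the top die.

```python
# Toy sketch: cumulative IR drop delivering power up a stack of eight dies.
# All numbers below are assumptions for illustration only.
N_DIES = 8
R_TIER = 0.5e-3   # effective TSV/micro-bump resistance per tier, ohms (assumed)
I_DIE  = 2.0      # supply current drawn by each die, amps (assumed)

drop = 0.0
for tier in range(1, N_DIES + 1):
    # The segment feeding this tier carries the current of this die
    # plus every die stacked above it.
    i_segment = I_DIE * (N_DIES - tier + 1)
    drop += i_segment * R_TIER
    print(f"die {tier}: cumulative supply drop ~ {drop * 1e3:5.1f} mV")
```

Even with generous TSV resistances, the drop stacks up tier by tier, which is why the higher a die sits in the stack, the less static margin it has to work with.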

Barnes: Heterogeneous packaging and some of those three-dimensional structures are hot topics, and we’re seeing more and more papers on them. I’m on the technical committee [of a conference], and some of the papers I’m seeing are very exciting. The challenge of three-dimensional EM simulation at the complexity they’re talking about is just phenomenal. We’re seeing it on a lot of different fronts, from getting up to 140 GHz for SerDes communications to the level of integration needed for AI. So a lot of really exciting things are coming in the world of packaging.

Srinivasan: For 3D-IC, the architectural decisions need to take IR drop into consideration, but they also need to look at thermal and various other constraints. Analytics, and doing tradeoff analysis early in the design, is the key here. It’s not just about putting a clock in; you need something that actually looks at the bigger picture. So looking at various aspects of the design in order to make architectural tradeoffs, like doing IR-aware placement or dynamic power shaping and so on, all of those are incremental things that impact the overall chip architecture, whether in the back end of the line or in the placement of the cells or the entire block macros.

Chaudhry: I would echo what everybody said about 3D-IC. Early prototyping for 3D-IC, and especially early prototyping of the power grid, is where the chip architecture can make the biggest difference. Again, there’s a macro problem and a micro problem, where the macro problem is the overall system or higher-level question of how much current you can pass through the package. Chip architecture decisions influence things like how much current variation you want to allow from a full-chip perspective. They make an impact there, but when it comes to the localized effects, it gets more into the actual implementation and place-and-route space.

Santhanagopalan: From the point of view of the chip architecture, we all agree that it’s a system-level problem, and power is one aspect of it. As was mentioned, all the different thermal, performance, and other characteristics also come into play. From that point of view, giving chip designers the utmost flexibility at each and every step would be really useful so they can optimize their designs. They know their specific application and the specific scenarios they care about better than anyone. As an IP provider, offering solutions that can tackle these closed-loop, system-level problems, but that also have enough programmability and flexibility built in, would be really useful.



