Smarter Co-design With Models

As co-design methodologies gain traction at semiconductor companies, some are suggesting that the die abstract file does not carry enough information to support advanced optimization requirements.


By Ann Steffora Mutschler
IC, package and PCB co-design methodologies are starting to be adopted by semiconductor companies. However, the existing die abstract file used in these flows to exchange data between the IC designer and the downstream package design team may not contain enough detail to drive advanced planning and optimization with the package and PCB interfaces.

Engineering teams must optimize for routability while considering signal integrity, power delivery and thermal requirements. But does the basic physical pin data in the die abstract files contain enough information about the IC to support these advanced optimization requirements?

According to John Park, methodology architect for IC packaging and pathfinding technologies in the systems design division at Mentor Graphics, the answer is no. In the way so-called co-design flows work today between chip, package and board—and even in a non-co-design flow—a very simple file format called the die text file passes from the IC design tool to the downstream package tools. It contains very basic information about the chip, including things like the XY coordinates of the pin locations, the signal name associated with the pin at each XY coordinate, and the extent of the die. But there’s really not a lot else contained in that file, he explained, so it’s a very high-level abstraction of the I/O plan on the chip. Designers take that die text file and try to figure out what’s going on with the chip, then design a package and eventually a board, all with this very simplistic abstraction of the chip.
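The exact syntax of such a die text file is vendor-specific and is not given in the article, but the description above (pin name, XY coordinates, die extent) can be sketched as a small parser over a hypothetical whitespace-delimited format:

```python
from dataclasses import dataclass

@dataclass
class DiePin:
    name: str   # signal name associated with the pin
    x: float    # X coordinate on the die
    y: float    # Y coordinate on the die

def parse_die_text(lines):
    """Parse a minimal, hypothetical die text file: one 'DIE x0 y0 x1 y1'
    extent line plus one 'name x y' line per pin. Real formats differ."""
    extent, pins = None, []
    for line in lines:
        fields = line.split()
        if not fields or fields[0].startswith("#"):
            continue                                    # skip blanks/comments
        if fields[0] == "DIE":                          # die extent record
            extent = tuple(float(v) for v in fields[1:5])
        else:                                           # pin record
            pins.append(DiePin(fields[0], float(fields[1]), float(fields[2])))
    return extent, pins
```

Note how little is recoverable from such a file: coordinates and names only, with no buffer, voltage-domain or electrical data, which is exactly the gap Park describes.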

He asserted that the downstream package design team needs a much more robust model, one that gives engineers more detail about the chip than just the basics without giving away the IP. This needs to happen in a co-design flow because the eventual I/O assignment on the die is often driven from the board.

“If we look at a high-speed memory device on the board, and let’s say I am someone designing a new chip, then 99% of the time I’m going to need access to that high-speed memory. But that pin assignment is locked down. What people do is they actually lock the pin assignment on the board and drive that all the way back up into the chip, basically creating a board-aware ASIC that gets designed because it knows about the pin assignments all the way up to the board. That includes the package itself. If you have this compatibility information when you do these board-up-driven co-design flows, then you can make an intelligent decision about how the changes you make will impact the chip floorplan,” Park said.

To address these issues, Mentor proposes the concept of a virtual die model that would have that information directly available inside the file itself. This way the engineering team would not have to try to find a model in order to build up the association between the buffer and the pad name, for instance. It would also allow package designers to visualize the chip better.

“It’s really important when you do these co-design flows that you have as much information about the chip as possible so that you can make intelligent decisions downstream when you’re optimizing the package and eventually the PCB,” Park said.

That sits well with Aveek Sarkar, vice president of product engineering and customer support at Apache Design, who said the virtual die model is closely aligned with Apache’s vision. “We have had something like this for the last six or seven years, so it’s great to see their interest.”

Apache’s approach to co-design includes a number of technologies, one of which is power modeling. A power model needs to capture the chip layout information and the switching current, and it needs to include all the parasitics. Sarkar noted that Apache introduced its Chip Power Model (CPM) in 2006, which reads in design information at any level of abstraction. Once that design database is read into the company’s dynamic power noise platform, information from the entire design is available: the power grid network, all the parasitics, how the various parts of the design are switching or not switching, and the capacitance information from all the different sources. All of this is considered to create a simple, compact SPICE netlist representation that mimics the behavior of the chip, he said.

“One thing that is very critical for the use of this abstract model is what we call ‘self-consistency.’ Self-consistency means that the model needs to accurately reflect the behavior of the chip or the design database. So the CPM’s key technology is to capture the behavior of the chip from DC all the way to the multi-gigahertz range, 4 GHz or so.”

Self-consistency also means that when a package model is connected to a chip simulation, there will be power/ground noise in the form of Ldi/dt noise. “If you replace the chip data with this CPM model, the results that we get—the voltage and the current—should be very similar. Otherwise, that leads to the model and the chip not being consistent with each other. Our goal is to meet that within a certain percentage value,” Sarkar noted.
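Apache does not publish the metric it uses, but the self-consistency criterion Sarkar describes, that the full-chip simulation and the CPM-based simulation agree "within a certain percentage value", can be sketched as a worst-case relative-error check over two sampled waveforms (the 5% tolerance below is an illustrative assumption, not Apache's number):

```python
def self_consistent(full_chip, cpm, tolerance=0.05):
    """Compare a sampled waveform (e.g. supply voltage or current vs time)
    simulated against the full chip database with the same simulation run
    against the compact model; pass if the worst-case point-by-point
    relative error is within the tolerance."""
    worst = max(abs(a - b) / max(abs(a), 1e-12)  # guard against divide-by-zero
                for a, b in zip(full_chip, cpm))
    return worst <= tolerance
```

In practice the comparison would be run for both voltage and current, and across the operating scenarios the model is meant to cover.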

Apache’s view is that all models must be application-specific, because if the design team is doing signal integrity, there are certain aspects of the design that must be considered. “For this we need a model that captures accurately the behavior of the I/O circuit,” he said. “It captures all the parasitic information of the I/O layout: the power grid routing resistance (R), inductance (L) and capacitance (C), and also the effect of the transistors. To model the effect of the I/O buffers accurately, the model has to capture the behavior of the circuit in the presence of supply voltage noise. If you have, say, voltage noise of 200mV, the signal gets slower and it gets degraded, and the model should accurately capture this non-linear behavior. At the same time the model should be compact, enabling full I/O bank simulation for DDR timing analysis. This I/O model can be plugged in with the package and the board to perform simultaneous signal integrity and power integrity analysis to predict the timing and jitter on the chip-to-chip communication interface. As DDR interfaces become faster and operate at lower supply voltages, it is important to simulate an entire I/O bank using this model in the presence of the package and board parasitics.”
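The non-linear effect Sarkar mentions, an I/O buffer slowing down as its supply droops, is often captured with a characterization table rather than a full transistor model. The sketch below interpolates delay from a purely illustrative voltage/delay table (the numbers are invented for the example):

```python
def buffer_delay(vdd, table=((0.9, 130e-12), (1.0, 100e-12), (1.1, 85e-12))):
    """Look up I/O buffer delay (seconds) from a characterization table keyed
    by instantaneous supply voltage (volts): as the supply droops, the edge
    slows. Linear interpolation between characterized points; values outside
    the table are clamped to its ends."""
    pts = sorted(table)
    if vdd <= pts[0][0]:
        return pts[0][1]
    if vdd >= pts[-1][0]:
        return pts[-1][1]
    for (v0, d0), (v1, d1) in zip(pts, pts[1:]):
        if v0 <= vdd <= v1:
            return d0 + (d1 - d0) * (vdd - v0) / (v1 - v0)
```

With this table, a 200mV droop from a nominal 1.0V supply pushes the delay from 100ps out to the slow end of the table, which is the kind of supply-noise-induced timing shift a combined SI/PI simulation is meant to expose.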

A third type of analysis that must be considered is thermal analysis, and for this the chip model needs a different representation. Here the parasitic effects are not an issue, but other factors are, including how the power of the chip changes with temperature and how the density of metal in the chip impacts the flow of heat inside it. This model needs to capture the activity of the chip, its power signature under various conditions, and the dependence of that power on temperature, more specifically how the leakage current changes with temperature. This model interacts with package- and system-level thermal analysis tools to do a comprehensive chip-package-system thermal analysis, Sarkar said.
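The temperature dependence of leakage that such a thermal model must capture is often approximated with a rule of thumb that leakage power roughly doubles for every ~10°C rise. A minimal sketch, with all parameter values invented for illustration:

```python
def leakage_power(temp_c, p_leak_ref=0.5, temp_ref_c=25.0, doubling_c=10.0):
    """Rule-of-thumb leakage model: leakage power (watts) roughly doubles for
    every `doubling_c` degrees C above the characterization temperature.
    All default values are illustrative, not from any real chip."""
    return p_leak_ref * 2.0 ** ((temp_c - temp_ref_c) / doubling_c)

def total_power(temp_c, p_dynamic=1.5):
    """Total chip power fed to a package/system thermal solver: an
    activity-dependent dynamic term plus temperature-dependent leakage."""
    return p_dynamic + leakage_power(temp_c)
```

Because the die temperature itself depends on this power, a chip-package-system thermal tool would iterate between the power model and the thermal solve until the two converge.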

Another chip-package-system analysis that has to be considered is EMI/EMC simulation. Given the large amount of electronics in a car (some statistics show 30% to 40% of the price of a car comes from its onboard electronics), the EMI radiation generated by one component can impact another. For this, a model of the chip that captures the current signature and the parasitics of both the core domain and the I/O domain is needed to simulate near- and far-field EMI radiation, he added.

Whether it is a die abstract, a die text file or a virtual die model, what is more interesting to consider is its use in the context of a distributed co-design flow that still relies on traditional tools—traditional IC layout tools, packaging tools and board-level tools—said Kevin Rinebold, product marketing manager for IC packaging, SiP and co-design tools at Cadence. (Rinebold joined Cadence with its acquisition of Sigrity.)

“The technology that Sigrity has brought into Cadence, namely in the area of co-design, really enables us to take more of a unified approach to co-design where we can literally bring together the details of the chip layout, the details of the package layout and even the board layout in a unified environment,” said Rinebold. “It provides new options. Maybe you don’t need to have as much detail in a die abstract file or virtual die model if you’re using the native chip information, where you have the details about the power domain it’s connected to or the voltages. When you look at things like the I/O pad ring and the voltage domains that are part of it, all of a sudden you’re consuming the data in a native format. You don’t necessarily need all of that extra information in a third-party format like a virtual die model or the die abstract file.”

And in the context of physical planning, routability and feasibility, a lot of the existing formats are sufficient today, he stressed.

The customer is always right
So who’s right? It depends upon the context. In this case, being right is relative to a lot of very complex factors and interactions.

“It comes back to, when you’re talking about co-design, are you talking about co-design in the context of distributing across traditional toolsets, where you have a clearly defined packaging tool, a chip-level tool and a board-level tool, or are you talking about co-design in the context of more of a unified flow, where you’re bringing in more detailed representations right from the beginning? Maybe I don’t need to use some new industry-standard format to get the die data because I already have the die data. It really comes down to what the customer’s co-design flow is and what the requirements are,” Rinebold concluded.