From DFM To IFM

Integration for manufacturability has placed a bull’s-eye on how IP is used, and misused, in SoCs.

For the past decade, the bridge between design and manufacturing was called, appropriately enough, design for manufacturing. DFM tools, which by nature cross boundaries of what previously were discrete segments in the semiconductor flow, are now critical for complex designs. They allow design teams to check early in the design process whether chips will yield adequately, and to incorporate rules from the manufacturing side so there is enough confidence at physical signoff that a functioning chip can be mass-produced.

This collaboration between foundry and EDA has proved lucrative on both sides. From the tools side, it allows EDA companies to sell plenty of must-have software, and from the foundry side it means that chips will yield sufficiently to ensure a steady demand for manufacturing capacity. But starting at 28nm, and increasing with each successive node from there, the onslaught of IP—both third-party and internally developed—has created an integration challenge that requires significant changes and new collaboration from both sides of the flow.

The problem with IP has nothing to do with the quality of the IP itself. It’s almost a given that when IP is purchased it will work. Trusting the IP provider is implicit. The challenge is knowing whether it will work in a complex SoC, where it’s impossible for an IP vendor to know all the possible interactions. Integrated IP may behave entirely differently than IP in a lab, and IP that worked fine at one process node, or even in other chips at the same node, may not work the same way at the next due to a variety of factors ranging from increased power density, as in the case of finFETs, to thermal effects in stacked die.

“This is not just about electrostatic discharge and electromigration,” said Dan Kochpatcharin, deputy director of IP alliances at TSMC. “It’s also the I/O design and high-speed interface. You have to check whether the IP will pass a functional test. We don’t know how the IP is going to be connected, and we need an extra check on how the customer has integrated it.”

To some extent, this is like a flexible fabric of rules based upon characterization of the IP. The idea of new rules doesn’t sit well with design teams, because any restrictive design rules limit their creativity. But a well-defined set of checks can ensure that the interactions in complex chips are better understood up front.

“This is an initial set of checks,” said Michael Buehler-Garcia, senior director of marketing for Calibre Design Solutions at Mentor Graphics. “We can expand that to power and dynamic analysis. The goal here is to make sure you can build it. We’re finally getting to the stage with third-party IP where there is a set of standards that are applicable. That’s good, because a whole bunch of people are buying third-party IP.”

But the checks also apply to internally developed IP. Being able to use rules and tools to check that internal IP for reuse is a major step forward—even if it’s just to establish waivers for the foundry.

New standards, new tools, and the future
One of the problems with standards is that there are never enough when you need them, and always too many when you don’t. As a result, chipmakers have been pushing standards organizations to create new standards quickly, and collapse them when they’re no longer needed. But when it comes to standards for integrating IP in complex SoCs and in 2.5D and 3D-IC architectures, it’s clear that standards are sorely lacking.

The push by foundries to begin setting up rules for integrating IP is a first step, but making sense of those rules in a multi-foundry, multi-tool environment doesn’t work unless those rules are standardized.

“We need more standards,” said Martin Lund, senior vice president of research and development for Cadence’s SoC Realization Group. “Then we’re not talking about a couple hundred or thousands of tapeouts a year. We’re talking about millions of tapeouts.”

The push for standards is pervasive. Standards are the quickest way to reach mass production, and where they’re lacking, progress is slow and painful.

“A lot of the 3D work is hampered by a lack of standards,” said Mike Gianfagna, vice president of marketing at eSilicon. “Today if you conform to the interface and test the IP, there is inconclusive proof about how different IP works together. So there are a whole bunch of standards needed to deal with this. It will get better over the next several years, but in the interim it will be the Wild West. You have to test in configuration and prove that it works. And if you have different ramp-ups, who takes the yield risk and who does the assembly? Even for IP that’s tried and true and proven, it doesn’t always work.”

But part of the problem also is the limitations of tools and of the methodologies used by design teams. Anirudh Devgan, corporate vice president and chief technology advisor for silicon signoff and verification for Cadence’s Digital and Signoff group, said that the sheer size and complexity of advanced chips requires an entirely new approach.

“For power signoff, you have to do the whole chip together,” Devgan said. “You need to read all the electrical information. Tools are not meant for big things and usability is not good, so we tend to overdesign for power. Power used to be an ad hoc analysis, but you can’t have margin in timing. You have to do it all earlier.”

And then, of course, there is the problem of interactions stemming from the integration of more pieces, which leads back to why foundries are teaming up with EDA companies to solve these problems earlier. What’s different here, though, is just how closely the beginning point in design and the end point in manufacturing are now working together. The unanswered question is whether that really will have a big impact on what goes on in the middle.


