The few designs to reach silicon today are completely customized, with inconsistent tool support. That has to change for this packaging approach to succeed.
Experts at the Table: Semiconductor Engineering sat down to discuss the changes in design tools and methodologies needed for 3D-ICs, with Sooyong Kim, director and product specialist for 3D-IC at Ansys; Kenneth Larsen, product marketing director at Synopsys; Tony Mastroianni, advanced packaging solutions director at Siemens EDA; and Vinay Patwardhan, product management group director at Cadence. What follows are excerpts of that conversation. To view part one of this discussion, click here. Part 3 is here.
SE: A lot of these 3D-IC designs are very customized, in part because there is no standard way of packaging chips together today, and in part because there is a growing demand for chips that are optimized for specific applications. As a result, each one of these designs has its own unique set of challenges, but the tools are expected to provide consistent solutions for everyone. How does that get resolved?
Patwardhan: There’s always room for standardization, but that’s not a very popular option at this stage of the technology. At some point, there will be some industry-defined standard. There is one for test today, which is IEEE 1838. But with 3D-IC, right now we are at the innovation phase. Putting in standards too early can disrupt that innovation because you have to spend too much effort complying with the standards. The multi-physics checks that happen at the end are all interconnected. You can talk about thermal or mechanical stress/warpage separately, and you can talk about timing sign-off separately. But as heat dissipates and the number of thermal parameters and process corners increases, timing has to consider all the changes caused by thermal effects. And while calculating that, you also have to see the current effects. They’re all interconnected, and there is no standardization. As everybody is doing their 2D designs today, they also will have to apply some kind of extension to them, or extrapolate from them, to manage a 3D stack. But at some point we will have to have a standard flow that’s agreed upon, either by a committee or by a group that has seen a few chips go through. It can be a mix of customers, foundries, and EDA vendors. Once a few of the test chips and a few of these stacked designs have been developed to the point where real silicon data is available, then we can start to add some level of standardization into sign-off. Exploration is something we can do on the fly, but sign-off really needs to be clearly defined.
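To make that coupling concrete, here is a minimal sketch of how an interconnected thermal/timing/power sign-off might iterate to a consistent answer rather than run each check once in isolation. Every model in it is a made-up placeholder for illustration, not any vendor's actual engine.

```python
# Toy illustration of the coupling described above: timing depends on
# temperature, power depends on timing, and temperature depends on power,
# so the analyses must be iterated to a fixed point. All numbers are made up.

def run_timing(temp_c: float) -> float:
    """Path delay (ns) rises with temperature (hypothetical derating)."""
    return 1.0 + 0.002 * (temp_c - 25.0)

def run_power(delay_ns: float) -> float:
    """Power (W): a made-up model where faster paths burn more power."""
    return 5.0 + 2.0 / delay_ns

def run_thermal(power_w: float) -> float:
    """Die temperature (C) from power through a fixed thermal resistance."""
    return 25.0 + 8.0 * power_w

temp_c = 25.0
for i in range(20):                          # iterate to a fixed point
    delay = run_timing(temp_c)
    power = run_power(delay)
    new_temp = run_thermal(power)
    converged = abs(new_temp - temp_c) < 0.01
    temp_c = new_temp
    if converged:                            # results are now self-consistent
        break

print(f"settled after {i + 1} iterations: "
      f"{delay:.3f} ns, {power:.2f} W, {temp_c:.1f} C")
```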
Fig. 1: Tools required for advanced packaging. Source: Cadence
Mastroianni: With 2.5D, there’s a working group called CDX (Chiplet Design Exchange), and one of the things we’ve looked at is trying to standardize the deliverable models that chiplet vendors provide to support chiplet ecosystems. We do see 2.5D evolving from HBM to other general-purpose chips that can be integrated into an SoC or 3D stack. In order for that to happen, though, the chiplet vendors are going to have to provide a lot of models for their chiplets, including thermal models, simulation models, and test models. The CDX comprises EDA vendors, their customers, and chiplet providers, and we’ve come up with a set of standard deliverables that we’d like chiplet vendors to provide with their templates. We will need those.
SE: Is this more than just very detailed characterization?
Mastroianni: Yes, it’s actual models. If you think about IP, you get functional models, physical models, LEF/DEF (library exchange format/design exchange format). With chiplets, you need more than that. You need a kind of superset of IP. So these are the models that you will need for chiplets in a system-in-package design. We need some kind of standardization for what those models are for the chiplet components. That’s the first step. After that, EDA tools need to be able to support all those models. They need to address assembly rules, physical rules, timing, and power, among other things. We recently published a set of those models, just to throw it out there and see what works. Another thing that would help with standardization is a common set of assembly rules. If every fab has a different set of bump pitches, for example, how is that going to work? There is some activity where different silicon vendors are looking to standardize on some of those assembly rules, which will help to make this all plug-and-play. If you get multiple different vendors providing their own IP with different rules, it’s going to be a mess. We need collaboration between EDA vendors and the user community to build the models and standardize the rules. And then eventually, that will help drive the EDA vendors to support those models as part of the overall solution.
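As a rough illustration of what such a "superset of IP" deliverable might look like, here is a hypothetical chiplet model manifest plus a trivial assembly-rule check. The field names and file types are illustrative assumptions, not the actual CDX deliverables.

```python
# Sketch of a chiplet deliverable bundle: functional, physical, timing,
# power, thermal, and test models, plus an assembly rule (bump pitch).
# All names and file types here are hypothetical, not the CDX schema.
from dataclasses import dataclass

@dataclass
class ChipletModelSet:
    name: str
    functional_model: str        # e.g., behavioral simulation model
    physical_abstract: str       # e.g., LEF-style physical abstract
    timing_model: str            # e.g., boundary timing model
    power_model: str             # e.g., power/IR model
    thermal_model: str           # e.g., compact thermal model
    test_model: str              # e.g., IEEE 1838-style test access description
    bump_pitch_um: float         # assembly rule the integrator must match

def check_assembly(chiplets: list[ChipletModelSet], interposer_pitch_um: float):
    """Flag chiplets whose bump pitch doesn't match the interposer's rule."""
    return [c.name for c in chiplets if c.bump_pitch_um != interposer_pitch_um]

hbm = ChipletModelSet("hbm_stack", "hbm.sv", "hbm.lef", "hbm.lib",
                      "hbm.pwr", "hbm.ctm", "hbm_1838.xml", bump_pitch_um=40.0)
cpu = ChipletModelSet("cpu_die", "cpu.sv", "cpu.lef", "cpu.lib",
                      "cpu.pwr", "cpu.ctm", "cpu_1838.xml", bump_pitch_um=36.0)
print(check_assembly([hbm, cpu], interposer_pitch_um=40.0))  # -> ['cpu_die']
```

A mismatch like the one flagged here is exactly the "mess" that standardized assembly rules are meant to prevent when chiplets come from multiple vendors.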
Larsen: Standards are definitely good for our customers. But we also need to make sure that we don’t limit innovation and end up with the lowest common denominator instead of something that will enable our customers for the next big jump in productivity and capability. We absolutely support standards, but you have to be careful not to create them too early.
Mastroianni: Yes, I agree it’s too early for 3D. Maybe we can look at templates for 2.5D, which is mature enough to at least agree on some standardization. There could be a set of recommended deliverables rather than formal IEEE standards. But we do need to put some structure in place to proceed.
SE: What else is changing with 3D?
Larsen: When you look at scalability, where we go from a classical package bump pitch with less than 100 I/Os per square millimeter, to 3D packages with 10,000 or maybe even 1 million I/Os per square millimeter, there’s an enormous change in scalability between the flows. Downstream tools need to be able to comprehend the entire package at this scale. PCB tools, for example, are not built for 100 million lines.
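A quick back-of-envelope calculation shows where those density figures come from: on a square bump grid, I/O density scales with the inverse square of the pitch. The pitches below are illustrative round numbers.

```python
# I/O density vs. bump pitch on a square grid: density = (1000 / pitch_um)^2
# per square millimeter. A ~130 um package bump pitch gives under 100 I/Os
# per mm^2; hybrid-bonding-class pitches give 10,000 to 1,000,000.
for pitch_um in (130.0, 10.0, 1.0):
    per_mm2 = (1000.0 / pitch_um) ** 2
    print(f"{pitch_um:>6.1f} um pitch -> {per_mm2:>10,.0f} I/Os per mm^2")
```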
Kim: 3D-IC is an evolving trend. Maybe we can standardize everything someday in the future. Right now it’s very early, even for functional design, and we will have to come up with a system for all the different technology areas eventually. We need to be able to sign off with confidence across all these different technologies, and we will need different modeling techniques to do that. We also will need to encrypt and decrypt data for different IP customers. A starting point for all of this is the foundry, but we will have to jump in early so we can convey the correct message from the customers to the foundries.
SE: Is there room for point tools in the future, or is everything now moving toward integration, particularly with 3D, because of all the possible interactions?
Patwardhan: With multiple dies and a package, there is so much data being passed around that you need everything in one database. If you’re passing files around, that process is prone to errors. Having everything integrated seamlessly at the API level is the ideal solution for a three-dimensional problem.
Larsen: We totally agree, and that’s the approach we’re taking. If you have 10 tiles in different technology nodes, we need to be able to comprehend the entire system. You can dive into the dies for the optimizations that you need, and you can do the co-design. But we want to minimize data movement by unifying these databases. That way you can shift all the way up to make architectural choices, and you can bring in legacy designs and use them effectively. So bringing the existing designs into one database and managing the data in that database is the path we’re taking. It’s serving our customers well, too, because when the design moves, it’s still intact. This ensures integrity because you’re using the same data structures throughout the entire flow, from early prototyping to co-design.
Patwardhan: The next step, when you’re bringing so much data together to analyze, is that you need an infrastructure to massively parallelize the work across multiple CPUs, or even in the cloud. So everything needs to be integrated into a single data structure to begin with. Then you apply massively parallel algorithms to make sure that it can still run in the time frame the customer expects. The integration solves one problem, but it also creates a bigger data set, and you need techniques to address that.
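A minimal sketch of that fan-out, assuming a design already partitioned into regions: each region is analyzed on its own CPU, and the partial results are merged. The per-region "analysis" is a stand-in, not a real sign-off engine.

```python
# Partition a shared dataset into regions, analyze in parallel, merge results.
# analyze_region is a placeholder for a real per-partition analysis step.
from concurrent.futures import ProcessPoolExecutor

def analyze_region(region: list[float]) -> float:
    """Stand-in per-partition analysis (e.g., worst slack in the region)."""
    return min(region)

def parallel_signoff(slacks: list[float], n_workers: int = 4) -> float:
    chunk = max(1, len(slacks) // n_workers)
    regions = [slacks[i:i + chunk] for i in range(0, len(slacks), chunk)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(analyze_region, regions))
    return min(partials)        # merge: overall worst slack across the stack

if __name__ == "__main__":      # guard required for multiprocessing
    print(parallel_signoff([0.12, -0.03, 0.45, 0.07, -0.01, 0.33]))
```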
Larsen: Automatic abstraction is one way to manage that data explosion. If you have 10 dies in 2.5D, analyzing them all in full detail would consume a lot of compute resources. You can do that, but typically you want to abstract each die down to the level where you have just enough information to do your construction. When you need more detail, you can, of course, bring it back in. But having an effective abstraction capability is really important.
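As a sketch of that idea, each die might be represented by a lightweight boundary abstract by default, with the full-detail model loaded only when needed. The classes and values below are hypothetical, not any tool's actual API.

```python
# Abstract-by-default loading: a boundary-only model for package-level work,
# full detail swapped back in only where required. Values are stand-ins.
from dataclasses import dataclass

@dataclass
class DieAbstract:
    name: str
    pin_delays_ns: dict[str, float]   # boundary timing only: cheap to carry
    footprint_mm2: float

@dataclass
class DieFull:
    name: str
    cell_count: int                   # full detail: heavy to load and analyze

    def abstract(self) -> DieAbstract:
        # Hypothetical reduction: derive boundary-only data from full detail.
        return DieAbstract(self.name, {"clk": 0.15}, footprint_mm2=25.0)

def load_die(name: str, need_detail: bool):
    full = DieFull(name, cell_count=50_000_000)
    return full if need_detail else full.abstract()  # abstract by default

print(load_die("cpu_die", need_detail=False))  # lightweight boundary model
```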
Related Story
Challenges With Stacking Memory On Logic (Part 1 of above roundtable)
Gaps in tools, more people involved, and increased customization complicate the 3D-IC design process.