Development Flows For Chiplets

A chiplet economy requires standards, organization, and tools — and that’s a problem.

Chiplets offer a huge leap in semiconductor functionality and productivity, just like soft IP did 40 years ago, but a lot has to come together before that becomes reality. It takes an ecosystem, which is currently very rudimentary.

Today, many companies have hit the reticle limit and are forced to move to multi-die solutions, but that does not create a plug-and-play chiplet market. These early systems do not need to adhere to standards to make them work, and they do not seek the same benefits. From a design perspective, they are still constructing one large system.

“The idea behind chiplets is that you divide and conquer,” says Vidya Neerkundar, product manager for DFT flows, Tessent silicon test solutions at Siemens EDA. “You are able to do it at a faster rate plus get all the benefits of better yield. But then, when you divide and conquer, you have these other things that you have to think about. You solve one issue, then you have to solve something else. You keep moving the problem and playing catch-up.”

A full understanding of those new problems is still developing. “We know how to make one standard chiplet,” says Mark Kuemerle, vice president of technology and CTO of custom solutions at Marvell. “It’s called HBM, and it’s the only one there is. It was defined in JEDEC. The standard says, ‘Here are the x,y dimensions. Here is how you hook it up. Anybody can build a thing that talks to it.’ To make an open chiplet marketplace work, you must have the same level of rigor. It doesn’t seem like an earth-shattering concept, but it really is. If we could do this, it could make sharing possible, and when we apply that same concept to 3D, wow! If we could standardize the footprint for a SerDes IP that might go on a stack, or maybe a data converter for wireless or aerospace — and if we had enough people interested in aligning on it — we could lock that footprint down. So as a designer, when I build the base die that connects everything together, I could just lock that down and build all the rest of my stuff around it. That could really help democratize 3D integrated design.”
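
As a rough illustration of Kuemerle’s point, a footprint standard is ultimately a machine-checkable contract. The sketch below is a minimal Python rendering of that idea; every class name, field, and value in it is invented for illustration, not drawn from any actual standard. Two vendors’ parts are drop-in replacements only if outline, pitch, and bump map all match.

```python
# A minimal sketch of what a machine-readable chiplet footprint standard
# might contain, loosely modeled on how JEDEC pins down HBM. All field
# names and values here are illustrative, not from any spec.
from dataclasses import dataclass

@dataclass(frozen=True)
class BumpSite:
    name: str      # e.g. "TX0", "VDD"
    x_um: float    # bump x offset from die origin, micrometers
    y_um: float

@dataclass(frozen=True)
class ChipletFootprint:
    part_class: str          # e.g. "serdes-112g" (hypothetical class name)
    width_um: float          # fixed x dimension of the die outline
    height_um: float         # fixed y dimension
    bump_pitch_um: float
    bumps: tuple[BumpSite, ...]

def interchangeable(a: ChipletFootprint, b: ChipletFootprint) -> bool:
    """Two vendors' parts are drop-in replacements only if the outline,
    pitch, and every bump location and name match exactly."""
    return (a.part_class == b.part_class
            and (a.width_um, a.height_um) == (b.width_um, b.height_um)
            and a.bump_pitch_um == b.bump_pitch_um
            and a.bumps == b.bumps)
```

The point of the HBM analogy is that the contract is total: any deviation in the bump map, however small, breaks interchangeability.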

The key point is that enough people have to be aligned. “The big question is, ‘What are the exact needs of the industry?’” says Benjamin Prautsch, group manager for advanced mixed-signal automation in Fraunhofer IIS’ Engineering of Adaptive Systems Division. “A lot of people are waiting for each other. Some companies need to step up and mediate between the different interests and try to identify this common ground. A big part of the answer is to work out, or to identify, just the right way to go within the ecosystem.”

That may take longer than some hope for. “The standards are still evolving,” says Mayank Bhatnagar, product marketing director of SSG at Cadence. “Standards such as UCIe are seeing industry-wide adoption, and I believe it will take off, but we are still a few years away from that. I don’t expect it to happen over the next three to five years. It will probably be the 2030s when we will start seeing the availability of industry-standard chiplets.”

Standards required
Standards are required for packaging, test, design, functional communication, implementation-level interconnect, and more. At the moment, everyone has their own standard. “It’s a bit of a wild west right now,” says Marc Swinnen, product marketing director at Ansys. “This is good. Let a thousand flowers bloom. But what packaging technology should I use? There are so many different options. Every OSAT has their own flavors, and then variants of those flavors, and they are not all going to have mainstream success. There is going to be a shakeout in that market at some point. Nobody wants to bet on the wrong horse and be stuck with some oddball technology nobody else uses. There needs to be industry consolidation.”

Packaging is catching up to the formality of the semiconductor industry. “For interposers, the rules and technology parameters are defined differently by top-tier fabs and by the OSATs,” says Abhijeet Chakraborty, vice president of engineering at Synopsys. “These are necessary for assembly of these dies using interposers, but today they all have different parameters and standards. For physical verification run sets, they have different methods and paradigms to develop them. All of that hopefully will become more normalized, and that’s all going to help. We are in this climate of tremendous and exciting development changes. There are lots of very interesting and important problems that are being solved across the ecosystem, from the fabs to the architects in the vertically integrated companies, to EDA and standardization, etc. It’s changing very quickly, and though it might seem like a lot, that is necessary before it settles down, and before we arrive at solutions that really scale for 3D-IC development for the masses.”

While each standard may help, there needs to be a critical mass. “There was a ton of excitement when Intel started the UCIe group,” says Marvell’s Kuemerle. “With a die-to-die interface, everyone thought chiplets were really going to take off. But it hasn’t really changed anything. The reason is there’s a lot of other stuff required. There is a lot of complexity that comes from tying these things together, such as test. You’ve got to figure out how to make these chiplets communicate with each other, so that we can get good test coverage on all of them.”

These standards are being worked on. “Back in the ’90s there was IEEE 1149.1, which talked about how each chip can be connected to the board,” says Siemens’ Neerkundar. “There was a language — BSDL. Now there is IEEE 1838, which describes the PTAP/STAP-type mechanism and how it can be used in a 3D-IC stack. You can also use that in 2.5D. Other standards are coming. P3405, which is an IEEE standard, talks about interconnect test and repair. If you’re designing your own, what can you do with it? There is also P1838A, which addresses the boundary scan interface for 3D-ICs.”
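
The common thread in these standards is serial test access that threads through every die in a stack. The following toy model illustrates only the chain-concatenation idea behind that kind of access; it is not the actual IEEE 1838 die wrapper architecture, and the chain contents are invented.

```python
# A toy model of serial test access threaded through a die stack, in the
# spirit of IEEE 1838's serial port built on 1149.1. This illustrates
# only the concatenation idea, not the standard's actual die wrapper.

def shift_through_stack(chains: list[list[int]], data_in: list[int]) -> list[int]:
    """Concatenate per-die scan chains bottom-up and shift a bit vector
    through the whole stack, returning what falls out of TDO."""
    full_chain = [bit for chain in chains for bit in chain]
    out = []
    for bit in data_in:
        out.append(full_chain[-1])            # last flop drives TDO
        full_chain = [bit] + full_chain[:-1]  # shift one position along
    return out

# Three stacked dies, each contributing a short boundary register:
stack = [[0, 0, 0], [1, 1], [0, 1, 0, 1]]
# Shifting in 9 bits pushes out the prior chain contents, last flop first.
print(shift_through_stack(stack, [1] * 9))
```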

The list of necessary standards goes on. “For ESD, we follow the IEC 61000 standard,” says Takeo Tomine, product manager at Ansys. “That defines a machine model, a human body model, and a charged-device model. These are standards that every electrical person needs to go through from chip to module to system. On the chip side, they do follow the guidance, and the foundries have come up with a design rule manual to align with those and provide certain limits.”

Standards often avoid certain issues where it is not clear what direction the industry needs to go in. “Standards avoid defining things that can be highly varied,” says Cadence’s Bhatnagar. “For example, UCIe makes no definition about how the channel should be implemented. Intel was the founding member and had its EMIB technology, but the standard avoids requiring the use of any particular technology. It does define things about the channel, such as the voltage transfer function (VTF) and the crosstalk spec. We have seen very esoteric channels being created that meet the requirements, but which look very different than what the standard initially thought.”
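
In practice, compliance with a channel definition like this reduces to checking measured behavior against mask numbers, however the channel is built. The sketch below shows the shape of such a check; the mask points and measured values are placeholders, not UCIe’s actual limits.

```python
# Sketch of checking a channel against a frequency-domain mask, the way
# UCIe constrains the channel via a voltage transfer function rather than
# mandating EMIB or any other construction. The mask points below are
# made-up placeholders, not UCIe's actual limits.

def passes_vtf_mask(channel_db: dict[float, float],
                    mask_db: dict[float, float]) -> bool:
    """channel_db: measured |H(f)| in dB at each frequency (GHz).
    mask_db: minimum allowed |H(f)| in dB at the same frequencies.
    The channel passes if it sits at or above the mask everywhere."""
    return all(channel_db[f] >= limit for f, limit in mask_db.items())

# An "esoteric" channel can pass as long as it meets the numbers:
mask = {1.0: -1.5, 4.0: -3.0, 8.0: -5.5}        # hypothetical limits
measured = {1.0: -0.9, 4.0: -2.2, 8.0: -4.8}    # hypothetical sweep
print(passes_vtf_mask(measured, mask))           # True
```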

Some issues remain. “There is an inability to define the socket,” says Robert Patti, president of NHanced. “We can define powers, grounds, and pitches of the physical interface. We can’t attempt to define voltages. We can define a ring of power in each mini tile, and then we have signals within that tile, the signals between the layers. Getting people into a room to agree on physical requirements for things like power may be possible. It’s the logical protocol that everybody has a different flavor of. If you want me to superimpose some logical protocol between these two sets of circuits, I don’t want to take the time delay. I don’t need to synchronize it. I don’t want to spend the circuitry, and I certainly don’t want to spend the delay or power.”

And this defines the elephant in the room. “The challenge is that the industry wants to have a standard,” says Fraunhofer’s Prautsch. “They want to have it as standardized as possible. But they don’t want to have the overhead.”

And just as with soft IP, a chiplet needs a set of deliverables that enables it to be integrated successfully. “What models do we need?” asks Pratyush Kamal, director of central engineering solutions at Siemens. “There is a big gap that the industry is trying to address. TSMC has its 3Dblox language, and it is trying to make it public within IEEE. Similar efforts are underway within OCP, but even there, they haven’t fully defined everything that is needed. Take the example of a 3D-IC where you have a mixed-signal circuit spanning two dies. When you deliver this chiplet in physical form, you still need to deliver the SPICE netlist associated with the full stack for full simulation. Most of the time, when you do chiplet integration, you don’t necessarily want to look all the way into the chiplet. We abstract things out. We only care about the interface boundaries, but there would be analyses that require a full view of the chiplet to be exposed to the assembler, to the package designer.”
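
One way to picture the deliverables gap is as an unfinished checklist. The sketch below assumes a hypothetical set of views (the names are invented, precisely because the industry has not settled this list) and reports whatever a supplier failed to ship.

```python
# A sketch of a deliverables checklist for a chiplet handoff, assuming a
# hypothetical set of views. The industry has not settled this list; the
# point is that integration fails when any required view is missing.

REQUIRED_VIEWS = {
    "physical_abstract",   # footprint/outline for assembly
    "boundary_timing",     # interface-level timing model
    "power_model",         # per-bump current for IR/EMIR analysis
    "thermal_model",       # compact thermal model of the die
    "spice_interface",     # SPICE netlist for circuits spanning dies
    "test_access",         # 1838-style test wrapper description
}

def missing_views(delivered: set[str]) -> set[str]:
    """Return whatever the supplier forgot to ship."""
    return REQUIRED_VIEWS - delivered

print(missing_views({"physical_abstract", "power_model", "thermal_model"}))
```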

Organizational challenges
In preparation for a chiplet-based ecosystem, companies have to look at their own organizations and be prepared for it. “Most large companies have projects and programs in place to start coming up to speed on 3D-IC,” says Ansys’ Swinnen. “They need to reorganize. Packaging is in one group, thermal in another, reliability in another, and they have chip design in a separate group. 3D-IC requires all these to work closely together, even at the prototyping stage. Companies are not organizationally set up for that. They need to do some internal rejiggering of team and managerial responsibilities so they can bring together the necessary expertise.”

The flow has to change, as well. “At the floor-planning stage, you have to think about splitting functionality across multiple dies,” says Bhatnagar. “Hierarchical partitioning is changing, because if you don’t do it, you run into issues. Perhaps you can’t take advantage of a certain portion of the design that could go into an older process node, or you end up having a requirement of a very large bandwidth between two dies. These issues could have been avoided by having better floor planning or careful partitioning. The thought process has to be right when you make the hierarchical split. It impacts how much data you have to transfer between dies, and it impacts how hot they get, how close they have to be, and what latencies you are able to tolerate. Only by careful architectural planning can you minimize the impact.”
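
The partitioning cost Bhatnagar describes can be made concrete by scoring each candidate split on the traffic it forces across the die boundary. The sketch below uses an invented block-level traffic table to show how moving one block between dies changes the required die-to-die bandwidth.

```python
# A minimal sketch of evaluating a die partition by the bandwidth it
# forces across the die-to-die boundary. Blocks and traffic figures are
# invented for illustration.

# Pairwise traffic between functional blocks, in GB/s (hypothetical):
traffic = {
    ("cpu", "l3"): 400, ("cpu", "io"): 30,
    ("l3", "mem_ctrl"): 350, ("io", "mem_ctrl"): 20,
}

def cut_bandwidth(die_a: set[str], traffic: dict) -> float:
    """Sum the traffic on every edge whose endpoints land on different dies."""
    return sum(bw for (u, v), bw in traffic.items()
               if (u in die_a) != (v in die_a))

# Splitting cpu+l3 from mem_ctrl+io crosses only the l3<->mem_ctrl and
# cpu<->io edges; putting l3 on the other die is far worse.
print(cut_bandwidth({"cpu", "l3"}, traffic))   # 380
print(cut_bandwidth({"cpu"}, traffic))         # 430
```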

Test is highly impacted. “You cannot test after you have assembled, because you need to have known good parts before you assemble them,” says Neerkundar. “You need to test them at wafer level. That means you need some type of contact mechanism on these dies, even though the pins of these dies, which are stacked within the assembly, do not come out as package pins. At wafer sort, you need to be able to communicate with them. The industry calls these sacrificial pads, where you have the regular C4 bumps, or the standard bump pitches, to connect and contact for wafer sorting. But those bump sizes and pitches are larger than those of the micro-bumps that are used once the dies are assembled in a stack. You need both pathways: testing through the sacrificial pads at wafer sort, and then retesting through the micro-bumps once the dies are assembled.”
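
The pitch mismatch is the crux: a probe card can land on sacrificial pads but not on fine-pitch micro-bumps. The toy check below makes that constraint explicit; the 40µm probe limit and both pitch values are illustrative, not production numbers.

```python
# A toy model of the two test access paths Neerkundar describes: at wafer
# sort the tester lands on larger sacrificial pads; after assembly the
# same die can only be reached through its micro-bumps via the stack.

def can_probe(pad_pitch_um: float, probe_min_pitch_um: float = 40.0) -> bool:
    """A wafer prober can only contact pads at or above its minimum pitch
    (40 um is a placeholder for a probe-card limit)."""
    return pad_pitch_um >= probe_min_pitch_um

SACRIFICIAL_PAD_PITCH = 130.0   # C4-class bump pitch, illustrative
MICRO_BUMP_PITCH = 25.0         # too fine to probe directly, illustrative

print(can_probe(SACRIFICIAL_PAD_PITCH))  # True: testable at wafer sort
print(can_probe(MICRO_BUMP_PITCH))       # False: post-assembly retest must
                                         # go through the stack instead
```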

The industry itself also has to organize. “For this to take off for a given application, there have to be enough companies that are interested in making it successful,” says Kuemerle. “If eight different companies get together — four users of a particular 3D chiplet and four developers — and spend three years arguing in a standards organization about what the footprint is going to look like, what the power delivery looks like, the signal pinout, data rates, all this stuff, then it might happen. They have to inspect it to a really strong level of detail. It happened with memory. It can happen for other applications.”

Tools and flows
Today, heterogeneous integration is being done only by vertically integrated companies, and for good reason. “There is a lot of complexity in this type of design,” says Kuemerle. “When we make a chiplet-based project or a 3D project, we will create a whole verification environment supporting that project. If you own all the inputs to that, then you can ensure you’re going to achieve your goal and that you’re going to have the required functionality between them. There are tools coming down the pike to help with that, but there’s nothing that seamlessly makes that happen today. You have to build customized environments that allow you to do these development projects in parallel. The same applies to physical implementation. We are still checking to make sure we’ve got a good match-up between the dies, because you have to feed everything that’s needed to that top chip through the base chip and intermediate dies. And we’ve got to make sure that we deliver the right connectivity. You can use tools to help with that, but there’s another level of custom checking that you need to implement to ensure that that’s going to be successful.”
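
The custom checking Kuemerle mentions often amounts to verifying that every net the top die expects has a matching, aligned landing pad beneath it. The sketch below shows one such check, with invented net names and coordinates and an assumed 2µm alignment tolerance.

```python
# A sketch of a die-to-die connectivity match-up check: verify that every
# net the top die expects arrives at a matching, aligned landing pad on
# the die below. Names, coordinates, and the tolerance are invented.

def check_matchup(top_bumps: dict[str, tuple[float, float]],
                  base_pads: dict[str, tuple[float, float]],
                  tol_um: float = 2.0) -> list[str]:
    """Return a list of problems; an empty list means the dies line up."""
    problems = []
    for net, (x, y) in top_bumps.items():
        if net not in base_pads:
            problems.append(f"{net}: no landing pad on base die")
            continue
        bx, by = base_pads[net]
        if abs(x - bx) > tol_um or abs(y - by) > tol_um:
            problems.append(f"{net}: misaligned by ({x - bx:.1f}, {y - by:.1f}) um")
    return problems

top = {"D0": (100.0, 100.0), "CLK": (150.0, 100.0)}
base = {"D0": (100.0, 100.5), "CLK": (190.0, 100.0)}  # CLK landed elsewhere
print(check_matchup(top, base))
```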

When all pieces are designed together, flows can be built. “Multi-chiplet integration requires system-level co-design,” says Rozalia Beica, field CTO for Rapidus Design Solutions. “That requires thermal and power models, and interconnect models. These enable the simultaneous design and integration of chiplets, packages, and substrates, ensuring accurate thermal and power management and reliable communication between chiplets.”
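
Why thermal models must travel with a chiplet is visible even in a crude one-dimensional steady-state calculation: every watt dissipated high in a stack must pass through the dies below it on its way to the heat sink. All numbers in the sketch below are invented.

```python
# A back-of-the-envelope sketch of why stacked dies need shared thermal
# models. Steady-state, one-dimensional, with invented numbers.

def junction_temps(powers_w: list[float], r_theta_k_per_w: list[float],
                   t_ambient_c: float = 45.0) -> list[float]:
    """powers_w[i]: power of die i (0 = closest to the heat sink).
    r_theta_k_per_w[i]: thermal resistance from die i to the layer below
    it (or to the sink for die 0). All heat generated at or above a layer
    must cross that layer's resistance on the way out."""
    temps, t = [], t_ambient_c
    remaining = sum(powers_w)
    for p, r in zip(powers_w, r_theta_k_per_w):
        t += remaining * r   # heat from this die and everything above it
        temps.append(t)
        remaining -= p
    return temps

# Two-high stack: 60 W base die, 15 W die on top (illustrative values).
print(junction_temps([60.0, 15.0], [0.2, 0.8]))  # [60.0, 72.0] deg C
```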

Today’s 3D chips are not built with standardized flows. “We have a large customer base doing 3D, and it’s all home-cooked,” says NHanced’s Patti. “They use the standard tools, but they are doing these considerations by hand. They will script, they will come up with ad hoc repair and redundancy. They will decide how to screen parts so they have known good die. All of that is a manual exercise that uses the EDA tools, but they might as well be 2D tools. We have a lot of rules of thumb based on institutional knowledge. Where the EDA tools have their footing is in HPC complexes and accelerators, because those have all zeroed in on the UCIe interface. There is a sense of standardization, but the customer base is pretty small.”

To get to an open market chiplet economy, those links have to be separated. “When you have multiple chiplets coming from different sources, you have to do system-level analysis,” says Synopsys’ Chakraborty. “That means you need models for analysis associated with those chiplets. They could be chip thermal models, for example. Similarly, you need power consumption models for IR and EMIR analysis. Then you have this broad category of mechanical and thermo-mechanical stress, which has to be analyzed. You can’t really analyze that at a die level. So how do you do that at the system level, while you’re mixing and matching dies and solutions from different vendors? Security is important as well, especially when you’re reusing chiplets and solutions from other vendors. How do you assure yourself of security and integrity for your chips? All of those things are very important and have to come together in a reliable fashion.”

The industry must work out what a chiplet supplier must provide and what they can hide. “We have models where we are able to define IR drop to every single bump without giving away what is under the bump,” says Bhatnagar. “There will always be a concern, as with any IP, of giving away too much information in the model. There will also be a need for the models to be accurate enough. People initially will work within closed ecosystems, where they are trusting their ecosystem partner to do the right thing, using the model for what it is supposed to be used for. As those models mature, they will be detailed enough without giving away the secret sauce. Just like supply and demand, model generation and model consumption will happen in lock-step. That is why I do not think this will become a marketplace in three to five years. It’s not that people don’t have the knowledge to develop the dies. We have fully integrated 3D-IC tools, which can read in all the models and do an analysis. The tools and the model definitions are there, but trust only comes with time.”
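
Consuming such an abstracted model might look like the sketch below: the supplier exposes only per-bump current draw, the integrator supplies its own package resistances, and the IR drop falls out without any visibility into what sits under the bumps. All values are invented.

```python
# A sketch of consuming the kind of abstracted power model Bhatnagar
# describes: IR drop is computed per bump without exposing what is under
# the bump. Currents and resistances are invented.

# Supplier's abstracted view: bump name -> current draw in amps.
bump_currents = {"VDD_0": 0.8, "VDD_1": 1.1, "VDD_2": 0.6}

# Integrator's side: series resistance from the package plane to each
# bump, in ohms (from the integrator's own extraction).
path_resistance = {"VDD_0": 0.012, "VDD_1": 0.015, "VDD_2": 0.010}

ir_drop_mv = {b: bump_currents[b] * path_resistance[b] * 1e3
              for b in bump_currents}
worst = max(ir_drop_mv, key=ir_drop_mv.get)
print(ir_drop_mv)
print(f"worst bump: {worst} at {ir_drop_mv[worst]:.1f} mV")
```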

Nobody has the complete list of necessary files or models today. “We are currently collating the list of tools and interface file formats, even to be aware of potential challenges when handing over the designs from one partner to another,” says Prautsch. “The key is the interface challenge. We must look from both sides. The package design companies and the chip design companies have to look into each other’s design world.”

Slowly, everything will come together. “You can’t say that the tools need to develop, or the standards need to develop. You need to have them both develop together,” says Neerkundar. “You need the standards and the tools that support the standards. Then the industry can look at how you can design chiplets, buy chiplets independently from vendor A, vendor B, and vendor C, assemble them, and then make your own unique ones. We are not there yet.”

Related Reading
Chiplet Tradeoffs And Limitations
Multi-die assemblies offer more flexibility, but figuring out the right amount of customization can have a big impact on power, performance, and cost.


