While it’s possible to create interposer-based systems today, the tools and methodologies are incomplete, and they do not yet match the way design organizations are structured.
Gaps in EDA tool chains for 2.5D designs are limiting the adoption of this advanced packaging approach, which so far has been largely confined to high-performance computing. But as the rest of the chip industry begins migrating toward advanced packaging and chiplets, the EDA industry is starting to change direction.
There are learning periods with all new technologies, and 2.5D advanced packaging is certainly one of them. While the potential for this packaging approach is clear — more features than can fit on a reticle-sized SoC, lower power, and higher performance — the EDA industry has approached this market rather cautiously. Until recently, it was unclear which of a flurry of packaging options would gain sufficient traction to warrant the investment. That has since changed. Financial markets are beginning to factor in a much larger than expected adoption of high-bandwidth memory (HBM), which is almost entirely due to 2.5D, and this was only the first proof of concept.
There is much optimization and automation work to be done to enable widespread adoption of 2.5D, and some unanswered questions about which of several possible solution spaces will win. Nevertheless, as standards begin to roll out, and the industry marches forward on this packaging approach, tools will need to respond to a series of challenges more efficiently and elegantly than is possible today.
Interfaces
One of the biggest challenges, and opportunities, stems from the fact that 2.5D integration creates a connection type that has not existed before. While previous designs relied on connections routed within a single die, 2.5D uses an intermediary, the interposer, to make the connection. These connections are very much like PCB traces in that respect, but their densities more closely resemble those of the most advanced planar chips.
“When you start building separate chiplets and plug in a PHY for UCIe, you’re looking at the classic problem of signal integrity,” says John Park, product management group director in the Custom IC & PCB Group at Cadence. “Do I meet my compliance when I hook this UCIe interface on this chiplet to this other UCIe interface through an interposer, or through a bridge? How much jitter is there? Is my eye closing down because there’s too much noise on the line? There is convergence between what was done historically on the die side, and what we’ve historically done on the system side. Signal integrity has been done on the system side for over 30 years and we have advanced three-dimensional electromagnetic field solvers that allow you to model that. For a digital die designer, that concept may be a little new.”
Today, you find an IC designer using a board-like tool, but over time it will look more like a chip-level problem. “Communication today is still very PCB-like, in the sense that it’s coarse-grained,” says Marc Swinnen, director of product marketing at Ansys. “There is a move in the industry to go finer and finer grain, and we see chiplet connections going from C4 bumps to micro-bumps to hybrid bonds, where the density of interconnect goes higher. With finer grain and 3D architectures, you can think about a functional block talking to other blocks. In principle, it could go even further, but that is too difficult to design and floor plan with existing tools.”
There is a learning cycle, and that is directing both interposer materials and design, as well as the associated communication standards. “UCIe has two versions, advanced package and standard package,” says Ramin Farjadrad, CEO and co-founder of Eliyan. “For advanced package, the wire distance is 2 millimeters, but for standard package the distance is 20 to 25 millimeters. If you want to get the highest bandwidth, it is much more difficult using a standard package compared to an advanced package. You can achieve 32 Gigs in an advanced package using a basic SerDes. There are no worries about crosstalk or return loss of the channel. You have so much wire density that you can place a high-speed wire inside wire shields and you don’t need any vias. With standard packages you need vias, and these create crosstalk and reflections.”
That may sound like everything is in favor of advanced packaging, but it is not that simple. “The wire density may be about 5 or 6 times less than an advanced substrate, but that means your wire trace thickness can be five or six times more,” adds Farjadrad. “That results in 30 times less resistance for the same wire, meaning you can travel longer distances. It is a balanced tradeoff between higher speed and lower resistance.”
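The tradeoff Farjadrad describes can be sketched with a back-of-the-envelope resistance calculation. The trace geometries below are illustrative assumptions, not vendor figures; only the scaling factors (roughly 5 to 6 times coarser pitch, hence a trace cross-section roughly 30 times larger) come from the quote:

```python
# Back-of-the-envelope comparison of trace resistance for UCIe-style
# advanced vs. standard package routing. Geometries are assumed for
# illustration only.
RHO_CU = 0.0168  # copper resistivity in ohm*um (1.68e-8 ohm*m)

def trace_resistance(length_um, width_um, thickness_um):
    """DC resistance of a rectangular trace: R = rho * L / A."""
    return RHO_CU * length_um / (width_um * thickness_um)

# Assumed: advanced package uses fine RDL, ~1 um x 1 um cross-section,
# with ~2 mm reach; standard package pitch is ~5-6x coarser, so traces
# can be ~5.5x wider and thicker, with ~25 mm reach.
r_adv = trace_resistance(2_000, 1.0, 1.0)    # 2 mm advanced-package trace
r_std = trace_resistance(25_000, 5.5, 5.5)   # 25 mm standard-package trace

per_mm_adv = r_adv / 2
per_mm_std = r_std / 25
print(f"advanced:  {r_adv:.1f} ohm total, {per_mm_adv:.2f} ohm/mm")
print(f"standard:  {r_std:.1f} ohm total, {per_mm_std:.2f} ohm/mm")
print(f"per-mm resistance ratio: {per_mm_adv / per_mm_std:.0f}x")
```

With these assumed numbers, the per-millimeter resistance of the standard-package trace comes out roughly 30 times lower, which is why a 10x longer channel remains workable despite the coarser wire density.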
UCIe for advanced packages relies on it being very short reach. “Because of that, you don’t have to use a lot of the advanced equalization techniques that you would put in a long reach SerDes,” says Tony Mastroianni, advanced packaging solutions director at Siemens Digital Industries Software. “That results in them being much smaller and lower power. They are ideal transmitters and receivers, so you avoid issues with the routing channel in the package adding distortions. You do have to carefully route those traces, and deal with your spacing and shielding to make sure you’re not losing performance due to non-ideal routing between those die. Most of the PHYs out there are designed to leverage the fact that they are short traces. That creates an issue because you can only put a handful of HBM memories on a die. You really can’t put them too far from a chiplet, because those PHYs are not designed for that.”
Other tools need a major upgrade. “3D systems contain huge power delivery networks in different parts of the system,” says Andy Heinig, head of department for efficient electronics at Fraunhofer IIS/EAS. “There is a grid on the chip, copper pillars or hybrid bonding pads between the dies, and there are also elements outside the system — usually a package substrate. The entire power network is a very complex structure with millions of design elements, which also vary in size. The design elements on the chip are in the range of a few tens of micrometers, while the structures on the package are up to a few millimeters in size. Such multi-level problems are often difficult for 3D solvers to solve, but it is necessary to simulate the whole power grid for verification.”
The power problem looks a lot more like an IC tool than a PCB tool. “Power typically comes from your bottom die and is delivered up, so you have to manage that, although the tools can help,” says Siemens’ Mastroianni. “With 3D, you are going to populate the whole thing with hybrid bonds, and there’s going to be millions, tens of millions of them. You will need a uniform array and rather than planning your power and ground network up front, which is the way you do traditional chips, you figure out the mesh of power and you’re just going to have a full array across the entire chip. The place-and-route tools will assign which of those bumps will be used for power.”
Variation
On-chip variation (OCV) has become a growing problem, but it takes on another dimension as systems migrate to 2.5D and 3D. “Timing closure and OCV become huge challenges,” says Mastroianni. “You don’t have a single wafer, and so your process variations are going to be much more extreme. If chiplets are fabricated using a different process, there is no correlation. For a single die, you are relying on correlation within that die. You lose that once you go with different technologies and different vendors and different wafers.”
It’s not just process variation that needs to be taken up a notch. “Temperature variations can translate into major variations in behavior beyond the garden-variety min/max temperature corners for static timing analysis,” says Ansys’ Swinnen. “Mechanical stress has a significant impact on the electrical parameters of semiconductor devices. Indeed, many process technologies deliberately build in some mechanical strain when fabricating transistors to influence their characteristics. Solutions are still being developed that can translate mechanical results to electrical consequences. Some people also are looking at photonics being integrated into the package, but photonics circuits are notoriously sensitive to temperature. Even slight changes can lead to parametric failure.”
Corners can build on each other. “To close on timing, you have to consider multiple corners — process corners, power and thermal corners,” says Cadence’s Park. “Now you start stacking these things up and the number of corners grows. How do you solve it? We have some technology that does corner reduction. When we go to 3D stacks and hybrid bonding, people will want similar processes, similar nodes, similar timing profiles to make things manageable.”
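Park’s point about corner stacking can be illustrated with a quick count. The corner names and die count below are hypothetical, but the multiplicative growth is the real problem:

```python
# Sketch of why sign-off corner counts explode in multi-die timing
# closure. Corner labels are illustrative, not from any specific PDK.
from itertools import product

process = ["ss", "tt", "ff"]        # slow / typical / fast silicon
voltage = ["vmin", "vnom", "vmax"]
temp    = ["-40C", "25C", "125C"]

per_die = len(list(product(process, voltage, temp)))
print(f"corners per die: {per_die}")  # 27

# If the dies come from different wafers (or different foundries), their
# variations are uncorrelated, so the worst case is the cross-product of
# each die's corners across the stack.
num_dies = 3
print(f"worst-case corners for a {num_dies}-die stack: {per_die ** num_dies}")
```

This is exactly why corner-reduction technology, and Park’s suggestion of keeping stacked dies on similar nodes with similar timing profiles, matters: correlated dies collapse that cross-product back toward a single-die corner count.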
Margins have been used in the past to deal with some of the variation. “You would margin yourself to death if you tried to account for all those process variations and performance,” says Mastroianni. “That’s why you need die-to-die interfaces. That essentially allows you to do high-speed synchronization. It decouples that variation and allows those very synchronized high-speed interfaces between the two dies.”
Tools development
The EDA industry is hard at work solving these and other issues. “Within the industry, there are package-focused tools that are trying to solve all the issues,” says Kent Stahn, senior manager of hardware engineering in Synopsys’ Solutions Group. “At the same time, there are tools that come from the silicon side and that are evolving to address the future, which is RDL fan-out packages and things like that. The tools are coming along from a layout point of view. Then there’s the whole analysis part of it, where we are seeing better integration of analysis tools with the layout tools.”
There is still more work to be done, however. “The vast majority of tools today are extensions of the package design tools,” says Park. “The vast majority of silicon interposers, meaning 75-plus percent, are done with modified tools used for PCB/laminate packaging for the last few decades. There are modifications for power. You need a different power router, so we added that. However, if I am doing a laminate package, there’s no formal DRC or LVS. They run some CAM tool on it to make sure there are no clearance violations, no acute angles, but it is very informal. We don’t build wafers that way. We have a very formal process for DRC and LVS to make sure the output we create is clean and can be manufactured.”
The sign-off process is very well baked into the chip development methodology. “Why do people have trust in sign-off?” asks Swinnen. “When 3nm is released, nobody has a lot of experience with it. The same is true with 3D interposers. You’re going to apply a solver, which everybody admits hasn’t actually been used that much. You want a solver that has shown in the past it is capable of handling unexpected situations correctly and elegantly. It is a track record of the solver being well behaved, broad enough, and accurate enough to take new things in its stride. It’s one of the reasons people are so conservative, and so hesitant to change sign-off. They want the solver with the best chance of being able to handle this correctly and reliably.”
One big upgrade required is to analyze beyond R and C. “Chip designers forget about the L, which is very important,” says Synopsys’ Stahn. “This is where package designers and chip designers and PCB designers have to merge together. It’s multi-disciplinary. There are tools, whether they’re integrated into the layout tool, or a separate tool that you use to do signal integrity. It can be done. There’s a path for it. If you have a favorite signal integrity tool that is not bolted into your layout tool, you can go that way. But the chip designers need to start thinking about L, and it’s a mindset change for them. For the traditional silicon extraction tools, they have to start thinking about that. As interposers are getting bigger, the lengths are a lot longer, the speeds are getting higher. It’s becoming fundamentally closer to the wavelength, or the wavelength divided by 10, and we have to take that into consideration. Otherwise, we’re going to get signal integrity problems.”
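Stahn’s wavelength-divided-by-10 rule of thumb can be made concrete with a quick calculation. The frequencies and effective dielectric constant below are illustrative assumptions:

```python
# When does an interposer trace stop behaving as a lumped RC wire?
# A common rule of thumb: once the trace is longer than lambda/10,
# treat it as a transmission line (inductance and reflections matter).
import math

C = 3.0e8  # speed of light in vacuum, m/s

def critical_length_mm(freq_hz, eps_eff=4.0):
    """Length above which the lambda/10 rule says transmission-line
    effects must be modeled. eps_eff is an assumed effective dielectric
    constant for the interposer routing layers."""
    wavelength_m = C / (freq_hz * math.sqrt(eps_eff))
    return wavelength_m / 10 * 1e3  # in mm

for f_ghz in (4, 16, 32):
    print(f"{f_ghz} GHz: traces longer than "
          f"{critical_length_mm(f_ghz * 1e9):.2f} mm need SI analysis")
```

At the multi-GHz frequencies carried by die-to-die links, the critical length drops below a millimeter, so even short interposer routes cross the threshold, which is why extraction flows that ignore L break down as interposers grow.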
Architects need more help than in the past. “Everyone needs a system planner,” says Park. “It’s not designing one die. It’s three integrated dies. At a higher level, you need a system planner to aggregate those chiplets, optimize how they’re placed, to look at thermals, to look at power delivery, and from that create an optimized 3D floorplan. Then you can send the digital chiplet to one tool, the analog chiplet to another tool, and do the packaging. System-level planning has been the big evolution from a tool perspective, but we are just extending their databases and adding new functionality.”
Perhaps the biggest change of all is organizational. “Historically, package designers would never talk to an architect,” says Mastroianni. “Now that needs to happen. What implementation technology are you going to use? How are you going to deal with thermal? What packaging technology will you use? Silicon interposer or organic interposer? You need to do early analysis because you have an unlimited number of scenarios. How can you decompose that system or sub-system into a bunch of chiplets? You have to worry about stress analysis, at least as a first order at the architectural decomposition level. When you start getting into the physical design process, the package designer then needs to collaborate with the chip designers for I/O planning. You also need to think about how you’re going to test, so the test engineers need to start working with the package engineers to figure out what test strategies are going to be used in the chiplets, and how those get connected in the package.”
Conclusion
EDA companies have made changes to existing tools such that it is possible to implement and verify 2.5D systems. But those tools may not be sufficient to enable 2.5D integration to become mainstream, because they do not necessarily fit with the structural organization of the design teams. The optimal organization is not yet clear, but ultimately those teams will need to come together and collaborate. It is often in the cracks that faults lie in wait, and as the methodology stands today, there are lots of cracks and many unknowns, meaning there is plenty of opportunity for disaster.
Related Reading
2.5D Integration: Big Chip Or Small PCB?
The industry is divided about the right materials, methodologies, and tools for interconnecting chiplets, and that can cause problems.
Interconnect Essential To Heterogeneous Integration
Chiplet communication will be impossible without interconnect protocols.