Floor-Planning Evolves Into The Chiplet Era

Automatically mitigating thermal issues becomes a top priority in heterogeneous designs.


3D-ICs and heterogeneous chiplets will require significant changes in physical layout tools, where the placement of chiplets and routing of signals can have a big impact on overall system performance and reliability.

EDA vendors are well aware of the issues and are working on solutions. At the top of the list of challenges for 3D-ICs is thermal dissipation. Logic typically generates the most heat, and stacking logic chiplets on top of other logic chiplets requires a way to dissipate that heat. In a planar SoC, this is typically handled through a heat sink or the substrate. But in a 3D-IC, the substrate must be thinned to minimize the distance signals have to travel, which reduces its ability to transfer heat. In addition, heat can become trapped between chiplets, where a heat sink alone cannot remove it. The way around this is to carefully configure the different layers so that heat is either spread out across the chip or confined to an area where it can be removed effectively, and this capability needs to be built into the automation tools.
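To make the heat-spreading idea concrete, here is a minimal sketch, not any vendor's algorithm, of how a tool might estimate whether heat from stacked logic can spread laterally or stays trapped. The grid sizes, conductivity weights, and hotspot threshold are all illustrative assumptions.

```python
# Minimal sketch (invented constants): estimate lateral heat spreading on a
# coarse per-layer power grid to flag tiles where heat may be trapped.
import numpy as np

def spread_heat(power_map, lateral_weight=0.2, iters=50):
    """Crude diffusion proxy: repeatedly average each tile with its neighbors."""
    t = power_map.astype(float).copy()
    for _ in range(iters):
        neighbors = (np.roll(t, 1, 0) + np.roll(t, -1, 0) +
                     np.roll(t, 1, 1) + np.roll(t, -1, 1)) / 4.0
        t = (1 - lateral_weight) * t + lateral_weight * neighbors
    return t

# Two stacked logic layers, 8x8 tiles, power in arbitrary units.
top = np.zeros((8, 8)); top[2:4, 2:4] = 5.0        # hot block on the top die
bottom = np.zeros((8, 8)); bottom[2:4, 2:4] = 4.0  # aligned hot block below

# Assume a thinned substrate gives the bottom die less lateral spreading.
est = spread_heat(top, lateral_weight=0.3) + spread_heat(bottom, lateral_weight=0.1)
hotspots = np.argwhere(est > est.mean() + 2 * est.std())
print("tiles needing thermal attention:", hotspots.tolist())
```

In a real flow these maps would come from power analysis and package thermal models, but even a coarse check like this shows why aligned hot blocks on adjacent layers are a floor-planning decision, not just a sign-off finding.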

“The transition to a chiplet design paradigm will impact modern place-and-route design flows, requiring an optimization of logical partitions among chiplets,” said Tony Chan Carusone, chief technology officer at Alphawave Semi. “This means the place-and-route design flow for chiplet-based systems must consider multi-die integration, the potential for heterogeneous technologies, and manage the complexity of high-density die-to-die interconnects. This will require awareness of the possibilities and constraints offered by different fabrication and packaging technologies.”

After decades of discussions and PowerPoint presentations involving stacked dies, the chip industry has run out of other options. Chipmakers already are designing logic-on-logic and memory-on-logic, and as the cost of planar scaling continues to increase, system designs that rely on some type of advanced packaging and chiplets are the best option for improving performance, particularly for AI and other high-performance computing applications.

In fact, Yole predicts that most server chips will be built using chiplets, and that more than 50% of client PC volume will utilize chiplets, starting in 2025. Those figures add urgency to the need for tools and workflows to adapt.

Floor-planning, placement, clocking, and routing are the four main stages of the place-and-route flow. Floor-planning exploration happens early in the flow, when the designer places large functional modules on different areas of the chip, determining connectivity and what should sit next to what. At this stage, the modules have boundaries that divide the whole chip area into coarse partitions. Standard cells, the small library cells that honor the design rules in the foundry's rule manual, are then placed inside each of these module boundaries and routed to each other through interconnects according to the local connectivity. Big picture, the floor-planning step provides an abstract view of top-level connectivity.
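As a rough illustration of what that abstract view contains, the sketch below models a floorplan as module boundaries plus top-level connectivity, with standard cells filled in later. The class and field names are hypothetical, not taken from any EDA tool.

```python
# Illustrative data model only: a coarse floorplan partitions the die into
# module boundaries, records top-level connectivity, and later holds the
# standard cells placed inside each boundary.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    x: float; y: float; w: float; h: float     # boundary on the die, in microns
    cells: list = field(default_factory=list)  # standard cells placed later

@dataclass
class Floorplan:
    die_w: float
    die_h: float
    modules: dict = field(default_factory=dict)
    nets: list = field(default_factory=list)   # (module_a, module_b) connectivity

    def add_module(self, m: Module):
        assert m.x + m.w <= self.die_w and m.y + m.h <= self.die_h, "outside die"
        self.modules[m.name] = m

    def connect(self, a: str, b: str):
        self.nets.append((a, b))

fp = Floorplan(die_w=5000, die_h=5000)
fp.add_module(Module("cpu_cluster", 0, 0, 2500, 2500))
fp.add_module(Module("l2_cache", 2500, 0, 2500, 2500))
fp.connect("cpu_cluster", "l2_cache")   # abstract top-level connectivity
```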

“In real placement, you’re actually doing detailed placement of all the standard cells and macros,” said Vinay Patwardhan, product management group director at Cadence. “Routing is the next step of connecting them. At each stage, you have more and more information in the design.”

Basic decisions about materials, such as whether to use copper or optical interconnects, are signed off in the early exploration phase or system design phase, even before floor-planning.

While the moves are still performed in the traditional sequence, the game has shifted from classic to three-dimensional chess. “Life is a little bit more complicated now,” said Kenneth Larsen, senior director of product management for 3D-IC at Synopsys. “When we talk about 2.5/3D, and the transitions to multi-die designs where things are extremely close together, that created a number of new challenges. When we are building these systems that have multiple silicon dies, they get very closely connected. Maybe they stack on top of each other, and they can influence each other. One of the concerns is delivering power to the system. The other is thermal, because of the close proximity. Thermal is becoming a first-order effect, and where you put the pieces into your floorplan could have an impact on the escape of heat or temperature in the design.”

All of this now happens in three dimensions, each of which must now be taken into account in designs. “Instead of just looking at the planar checks, where you’re considering the effect of placement and adjacencies on the same plane, now you have to think about how whatever object you’re placing interacts with the layer on top and the layer on bottom,” said Patwardhan. “Many times in 3D-IC stacked die designs, the lower layer is on top of an advanced package, and it is talking to either HBMs or other storage elements that are next to it, and it is also talking to objects that are on top of it. You’re going in the z dimension, looking at coupling effects from the top die, looking at increased resistivity, and also looking at timing paths that go across dies where there are synchronous clocks. The close communication between the two dies has to be modeled early on in the placement flows, and in planning the inter-die connect flows, as well.”
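One way to picture that z-dimension awareness is a placement cost with explicit vertical terms. The following is a minimal sketch under invented weights, not a production cost model: it adds coupling to objects directly above or below and a penalty for synchronous paths that cross dies.

```python
# Hedged sketch: extend a planar placement cost with z-direction terms.
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def overlaps_xy(a, b):
    ax, ay, aw, ah = a["rect"]; bx, by, bw, bh = b["rect"]
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def placement_cost(obj, same_die_neighbors, above, below, cross_die_paths,
                   w_plane=1.0, w_vertical=2.0, w_timing=5.0):
    cost = 0.0
    for n in same_die_neighbors:                 # classic planar adjacency
        cost += w_plane * manhattan(obj["pos"], n["pos"])
    for v in (above or []) + (below or []):      # vertical coupling term
        if overlaps_xy(obj, v):
            cost += w_vertical * obj["power"] * v["power"]
    for path in cross_die_paths:                 # inter-die synchronous paths
        cost += w_timing * path["delay_estimate"]
    return cost

a = {"pos": (0, 0), "rect": (0, 0, 10, 10), "power": 2.0}
b = {"pos": (5, 5), "rect": (4, 4, 10, 10), "power": 3.0}
print(placement_cost(a, same_die_neighbors=[b], above=[b], below=None,
                     cross_die_paths=[{"delay_estimate": 0.2}]))
```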

There’s another important aspect to consider here. “Because these are all metal connections stacked up, there is a chimney effect due to high conductivity between metal layers, so there’s a chance of very high heat dissipation in areas where there is high power density,” Patwardhan said. “You may meet timing or power requirements, but you may not have considered thermal as a first-order effect, and now you must.”
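The chimney effect lends itself to an equally simple early check: sum the power density through vertically aligned tiles and flag columns that exceed a budget. The tile counts and the per-column limit below are made up for illustration.

```python
# Hedged illustration of the "chimney" check described above.
import numpy as np

die0 = np.random.default_rng(0).uniform(0, 1, (16, 16))  # W per tile, die 0
die1 = np.random.default_rng(1).uniform(0, 1, (16, 16))  # W per tile, die 1

column_power = die0 + die1        # power stacked through the z-axis
budget = 1.6                      # illustrative per-column limit in W
chimneys = np.argwhere(column_power > budget)
print(f"{len(chimneys)} tile columns exceed the vertical power budget")
```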

Thermal effects
The growing awareness of the importance of thermal effects, especially thermal crosstalk in 3D structures, has affected how design teams work through this process, breaking down the walls between specialties. “Thermal has always been an issue,” said Larsen. “In the old days, you threw it over to the specialist and he’d come back and say, ‘We have a thermal problem, you need to throttle the chip.’ But now, we have brought in the simulation of these multi-physics effects earlier in the design process, earlier than it was 10 years ago.”

Kai-Yuan (Kevin) Chao, director of research and development at Siemens EDA, agreed. “Thermal planning with physical design is critical, since most high-performance CPUs have turbo and power throttle to manage the hard-limit transistor junction temperature for chip reliability. In short, a fixed-state, worst-case power-watt thermal simulation with floor plans carries less meaning than simulations of targeted application workloads, running on different cores and memory, in various combinations under that product’s cooling usages in multiple market segments.”

It’s important to reduce the throttle margin, with thermal sensors placed to measure the hotspots caused by the most critical workloads. This determines how close to each other different processing elements can be placed, and/or how to partition and prioritize various operations.

“Since voltage/frequency throttling up-down duration has performance and computation throughput penalties, transient thermal-power-ramp modeling and inside-simulation-adjustment temperature-sensitive parameters such as leakage are also required,” Chao noted. “The integrated voltage regulator inductor and trace for package design and cooling design system customers also need early-stage power and thermal maps from chiplet design to coordinate assembly and product launch. Thus, the physical floor plan (including I/O) and consistent power-watt convergence are also important from the pre-RTL architectural stage to the final pre-tapeout layout stage.”
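A toy model helps show why transient behavior matters. The sketch below uses a single lumped thermal node with invented constants: dynamic power ramps with the workload, leakage rises with temperature, and the clock throttles as junction temperature nears its limit. It is an assumption-laden illustration of the behavior Chao describes, not his methodology.

```python
# Toy transient thermal model with throttling and temperature-sensitive leakage.
def simulate(workload_watts, dt=0.05, r_th=1.0, c_th=2.0, t_amb=45.0,
             tj_max=105.0, leak0=5.0, leak_per_c=0.1):
    tj, freq_scale, trace = t_amb, 1.0, []
    for p_dyn in workload_watts:
        leakage = leak0 + leak_per_c * (tj - t_amb)      # leakage grows with temp
        power = freq_scale * p_dyn + leakage
        tj += dt * (power - (tj - t_amb) / r_th) / c_th  # lumped RC update
        freq_scale = 0.7 if tj > tj_max - 5 else 1.0     # crude throttle policy
        trace.append((round(tj, 1), freq_scale))
    return trace

# A burst of heavy work followed by an idle phase.
profile = [60.0] * 300 + [10.0] * 300
print(simulate(profile)[::100])
```

Even this crude model oscillates around the throttle threshold during the burst, which is exactly the performance penalty that floor-planning and sensor placement try to avoid.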

Fig. 1: The interplay of floor-planning and thermal management. Source: Synopsys

Even before a designer gets into the complexities of multi-physics, floor-planning can suggest where there will be potential thermal problems. “Once we get our layout view on the screen and start doing our NoC design, we can see where there are congestion points,” said Andy Nightingale, vice president of product management and marketing at Arteris. “These high densities of connections could be considered hotspots in the design.”
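A small sketch of that congestion-as-hotspot heuristic: bin the NoC connection endpoints into a coarse grid and report the densest tiles. The coordinates and grid size are illustrative and do not reflect any Arteris flow.

```python
# Connection density per grid tile as a cheap proxy for potential hotspots.
from collections import Counter

# (x_mm, y_mm) endpoints of die-level NoC connections (made-up values)
endpoints = [(0.4, 0.5), (0.45, 0.52), (0.43, 0.48), (2.1, 3.0), (0.41, 0.55)]

tile = 0.25  # mm per grid tile
density = Counter((int(x / tile), int(y / tile)) for x, y in endpoints)
for tile_xy, count in density.most_common(3):
    print(f"tile {tile_xy}: {count} connections (potential hotspot)")
```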

And all of this underscores why EDA companies are encouraging their users to shift left. “If you’re doing signal integrity-aware routing, you have to model early in the flow,” said Patwardhan. “How good your model is will define how good your accuracy is going to be at the end of the design stage. We have to bring some of those extra sign-off checks, or analysis checks like thermal, as well as signal and power integrity analysis, a bit early in the flow. So if we are talking about multi-chiplet placement, at the cell level, whether they are in a 2.5D configuration, or in a stacked die configuration, a lot of those system-level sign-off checks have to be modeled very early in the implementation flow. We have to think of new ways of abstraction, some new ways for the placer environment to handle multiple objects, do optimization for more parameters at a time, and do a good enough job that you don’t have to reopen each design when there is an engineering change order (ECO). Bringing everything in early is not practical from a runtime standpoint or from a design methodology standpoint, but we can do enough early on to make sure the iterations after the first pass are lessened.”
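One way to read the shift-left idea in code form is a gate built from coarse stand-ins for sign-off checks, run after each placement pass so that fewer ECO-driven reopens happen later. The check functions, limits, and the 10% margin below are invented placeholders, not any vendor's checks.

```python
# Hedged sketch: coarse early "sign-off" gate on a placement pass.
def coarse_thermal(fp):   return fp["peak_power_density"] / fp["thermal_budget"]
def coarse_ir_drop(fp):   return fp["worst_ir_mv"] / fp["ir_limit_mv"]
def coarse_coupling(fp):  return fp["max_parallel_run_um"] / fp["coupling_limit_um"]

CHECKS = {"thermal": coarse_thermal, "ir_drop": coarse_ir_drop, "si": coarse_coupling}

def early_signoff(fp, margin=0.9):
    scores = {name: fn(fp) for name, fn in CHECKS.items()}
    return all(s <= margin for s in scores.values()), scores

fp = {"peak_power_density": 0.8, "thermal_budget": 1.0,
      "worst_ir_mv": 45, "ir_limit_mv": 60,
      "max_parallel_run_um": 190, "coupling_limit_um": 200}
print(early_signoff(fp))  # SI ratio 0.95 exceeds the margin, so iterate placement
```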

Looking forward with AI
The consensus is that EDA already is a flavor of AI, in the sense that it’s always been an algorithm-based aid to human designers. Still, the tools are evolving. EDA vendors are now considering extensions, such as generative AI co-pilots for tools, and more incorporation of multi-physics simulations, while developing design engines that specifically address working with multi-die and multi-dimensional structures.

The hope is that AI will bring predictive intelligence to traditional place-and-route. “We are already expert at integrating advanced algorithms for various optimizations in NoC designs,” said Nightingale. “The next evolution is to predict and optimize floor-planning and place-and-route outcomes based on historical data, perhaps even as real-time analysis. There’s some close collaboration with our ecosystem partners that needs to happen across domains as well, to do more to keep the design within the constraints given.”
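As a deliberately tiny illustration of learning from historical runs, the sketch below fits a plain least-squares model on synthetic floorplan features to predict a congestion-like score for a new candidate plan. A real flow would use far richer features and models; nothing here reflects Arteris' implementation.

```python
# Predict an outcome metric from floorplan features using past runs (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
# Features per historical run: [utilization, macro_count, avg_net_fanout]
X = rng.uniform([0.5, 10, 2], [0.95, 80, 6], size=(200, 3))
true_w = np.array([3.0, 0.02, 0.4])
y = X @ true_w + rng.normal(0, 0.1, 200)   # synthetic "congestion" label

Xb = np.hstack([X, np.ones((200, 1))])     # add a bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

new_plan = np.array([0.88, 55, 4.5, 1.0])  # candidate floorplan features + bias
print("predicted congestion score:", float(new_plan @ w))
```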

Academics are helping, too. MIT just announced a new AI-based method, called a virtual node graph neural network (VGNN), to speed predictions of a material’s thermal properties using virtual nodes to represent phonons. The paper’s authors claim that, running on just a personal computer, a VGNN can calculate phonon dispersion relations for a few thousand materials in a few seconds.

Conclusion
Today’s chiplet, system, and packaging designers face a far wider variety of technologies and system co-optimization requirements. “There are much bigger and more complicated substrates, including interposer and buried-in-substrate silicon bridges that require EDA routers to handle fast growing wire connections across different hierarchical materials, with specific design rules, and high-speed electrical and thermal mechanical constraints to help productivity,” Siemens’ Chao said. “Further, special routing requirements need EDA innovations such as substrate capacitors and optical components. Fine-pitch hybrid bonding enables single-clock-cycle interconnects to make cell-level timing and I/O placement in vertical cross-die 3D planning. Still, increasing transistors from chiplets packed in a package requires more efficient power delivery and thermal dissipation. For example, TSMC added IVR in its future HPC/AI 3D-IC configuration. And an integrated heat sink solution, including liquid cooling, was co-optimized in NVIDIA’s new products.”

Power and thermal are growing challenges. “In addition to the backside power delivery network that is coming in to address sub-2nm thermal design needs, thermal-aware placement and floor plan requirements, such as the multi-chip module micro-channel cooling co-design may re-appear if integrated in-packaging/system liquid cooling is included in the product design,” Chao continued. “Multi-physics aware early-stage physical design will be very beneficial during the co-development process owned by multiple stakeholders, since infeasible assumptions could be very costly to fix at the late chiplet assembly stage after verification.”

There is still a long way to go before there is an optimized 3D-IC design flow. “We are just starting on that journey now,” Cadence’s Patwardhan said. “We have some pretty good algorithms developed in which we can do concurrent 3D placement, 3D floor planning, thermal-aware 3D floor planning and placement. But everyone in the design community and EDA communities right now is very conservative, using extra margin for stacked die designs because we are at that stage of flow development and early test chips. In a very short amount of time we will be able to produce an optimized flow from our learnings, just like we have moved fast in the era of finFETs and GAA-type of transistors. Now, stacking dies is just one additional challenge with an added dimension. It’s just a matter of time before we can quickly bring up an optimized and fully automated 3D placement and routing flow for complex 3D-IC designs.”

Related Reading
The 3D-IC Multiphysics Challenge Dictates A Shift-Left Strategy
Gleaning useful information well before all the details of an assembly are known.
Intel Vs. Samsung Vs. TSMC
Foundry competition heats up in three dimensions and with novel technologies as planar scaling benefits diminish.


