Speeding Up 3D Design

Why the chip industry is plowing ahead with advanced packaging and what can be done to improve it.


2.5D and 3D designs have garnered a lot of attention recently, but when should these solutions be considered, and what are the dangers associated with them? Each new packaging option trades one set of constraints and problems for a different set, and in some cases the gains may not be worth it. For other applications, there is no choice.

The tooling in place today makes it possible to design and fabricate these complex, packaged devices, but it is far from optimal. New tools, new forms of analysis, and new design concepts all have to be developed before we can truly get the full benefits from 2.5D/3D designs.

“Moore’s Law is slowing, and the cost benefit of technology scaling is diminishing,” says Annapoorna Krishnaswamy, lead applications engineer for the Semiconductor Business Unit at ANSYS. “You are not seeing the same trend where you are able to pack in more transistors, with a doubling every two years and the cost going down.”

When the cost equation changes, solutions that used to be too expensive start becoming a lot more attractive, and that has led to advanced packaging options being considered.

“With SoCs, it makes sense to shrink for digital from 14nm to 12nm to 7nm, but analog does not shrink anymore,” said Andy Heinig, group manager for system integration in Fraunhofer’s Engineering of Adaptive Systems Division. “You need drive strength, and you cannot get that by shrinking. This is the starting point for the chiplet approach based on an organic substrate. It’s very complicated to build a chip at 7nm.”

New application areas, such as artificial intelligence (AI), also are playing a role. “As these devices become more complex, it is not possible to implement them in a single chip because their content exceeds the reticle size,” says Prasad Subramaniam, vice president for AI Platform Infrastructure at eSilicon. “Even at max reticle size, it becomes impractical to implement them in a single chip due to poor yield. As a result, they must be broken down into two or even four pieces.”
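To see why splitting helps, consider the classic Poisson die-yield model, Y = exp(-D·A). The short Python sketch below uses an illustrative defect density, not a foundry figure:

```python
import math

def poisson_yield(area_mm2, defect_density_per_cm2):
    """Classic Poisson die-yield model: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * area_mm2 / 100.0)

D = 0.2  # defects/cm^2 -- illustrative assumption, not a foundry number

monolithic = poisson_yield(800, D)   # one near-reticle-limit die
chiplet = poisson_yield(200, D)      # one of four equal pieces

print(f"800 mm^2 monolithic die yield: {monolithic:.1%}")  # ~20%
print(f"200 mm^2 chiplet yield:        {chiplet:.1%}")     # ~67%
print(f"Four untested chiplets:        {chiplet**4:.1%}")  # ~20% again
```

Note that four untested chiplets multiply out to the same yield as the monolithic die. The economic win comes from testing each chiplet and assembling only known-good dies, and from the fact that an above-reticle die cannot be manufactured monolithically at all.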

The industry is changing from being driven by one application, mobile phones, to a much more diverse set of platforms. “There is a confluence of connectivity with 5G, plus AI giving you intelligence within the device, and also applications like autonomous vehicles,” adds ANSYS’ Krishnaswamy. “If you have to keep going toward diversification of products, you have to start thinking of alternatives.”

An increasing number of product categories need more functionality and more performance while controlling things like yield, reliability, and cost.


Figure 1: Evolution of multi-chip advanced packaging. Source: Cadence

Why not 2.5D?
High bandwidth memory (HBM) applications have been the poster child for 2.5D. “These solutions gained quite a bit of power reduction and performance compared to board-level solutions,” notes John Ferguson, director of marketing for DRC applications at Mentor, a Siemens Business. “HBM has some advantages in that there is a lot of regularity in their design structures. It is not quite as easy to say that we will get exactly the same benefits when replacing an SoC.”

It could help with other forms of integration. “2.5D IC is an opportunity to heterogeneously integrate different dies together,” says Krishnaswamy. “The distance that you communicate from one end of the chip to the other is going to reduce and that will drive better power and better form factors, as well as higher performance.”

That modularity may be important. “The key is that they do the design work as individual chips,” says John Park, product management group director for IC packaging and cross-platform solutions at Cadence. “All of the place and route (PnR) is done on a single chip and at the end of the day they are glued together. Then they check things like the IR drop and see that thermal is OK. So, the process is to design them independently and then glue them together and check that they still work as intended.”

But it has some challenges. “Implementing them in 2.5D requires a large silicon interposer, which needs to be stitched together because it is larger than the reticle size,” says eSilicon’s Subramaniam. “They also require high-speed interfaces for the individual devices to communicate with one another, increasing the area and power consumption of the overall composite device. Whenever a high-speed signal exits one device and enters another through an interposer, there is an increase in power contributed by the interface.”
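A back-of-the-envelope calculation shows the scale of that interface power: multiply energy per bit by bandwidth. The picojoule-per-bit figures below are rough illustrative assumptions, not measurements of any product:

```python
# Interface power ~ (energy per bit) x (bandwidth).
# The pJ/bit figures are rough, illustrative assumptions.

def interface_watts(pj_per_bit, gbytes_per_sec):
    bits_per_sec = gbytes_per_sec * 8e9
    return pj_per_bit * 1e-12 * bits_per_sec

bandwidth = 500.0  # GB/s of die-to-die traffic, assumed

for name, e_bit in [("on-die wire", 0.1),
                    ("2.5D interposer PHY", 1.0),
                    ("3D direct bond", 0.05)]:
    print(f"{name:20s}: {interface_watts(e_bit, bandwidth):5.2f} W")
```

At these assumed figures, moving 500 GB/s of traffic over an interposer PHY costs several watts that a tighter interconnect would avoid, which is the tradeoff Subramaniam is describing.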

Why vertical makes sense
Some industries, such as mobile, continue to put pressure on some aspects of devices. “Form factor is one big consideration,” says Sooyong Kim, senior product manager for 3D-IC Chip Package Systems and Multiphysics at ANSYS. “They want to make things smaller. Mobile was one of the drivers that pushed for 3D-IC stacked dies. We are pushing the envelope even more, and the speed we are looking at is 112Gbps for the SerDes. Other speed requirements are also going up, and in order to meet that performance target we need to move things closer.”

In theory, we should be able to cut an SoC into pieces and stack them. “If you simply cut the chip in half and folded it on top of each other, theoretically you will be able to reduce the delay between them by making your connections much shorter — vertically instead of horizontally,” says Mentor’s Ferguson. “So, even from a pure resistance perspective, you gain something. But you do have a lot of other challenges.”
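Ferguson’s folding argument can be quantified with the standard distributed-RC approximation, in which unbuffered wire delay grows with the square of length. The per-millimeter resistance and capacitance below are generic placeholder values, not numbers for any particular process:

```python
# Unbuffered on-chip wire delay grows quadratically with length
# (distributed RC / Elmore): t = 0.5 * r * c * L^2.
# r and c are generic illustrative values, not any specific node.

r = 500.0    # ohms per mm, assumed
c = 0.2e-12  # farads per mm, assumed

def wire_delay_ns(length_mm):
    return 0.5 * r * c * length_mm**2 * 1e9

full = 20.0  # mm, corner-to-corner route on a large die
print(f"20 mm horizontal route: {wire_delay_ns(full):.1f} ns")      # 20.0 ns
print(f"10 mm after folding:    {wire_delay_ns(full / 2):.1f} ns")  # 5.0 ns
# A microbump or hybrid-bond hop adds back a small, roughly fixed delay.
```

Halving the longest route quarters its unbuffered delay, which is why folding a die onto itself is attractive even before counting the reduced buffering.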

3D offers some definite benefits. “It reduces long interconnects, reduces the need for buffering, and enables smaller gate sizes,” says Greg Yeric, research fellow within research and development at Arm. “For instance, a wafer-bonded 3D prototype shows orders of magnitude lower router-to-router delay via a 3D mesh topology. This translates to lower point-to-point latency on the system mesh and increased compute density. In our studies at a CPU core level, we have seen performance and power benefits equivalent to a modern process-node jump enabled via high-density 3D stacking (~20% higher performance or 40% lower power).”

There are other benefits. “By implementing a 3D stack, the signals from one device can go to the other device through standard logic interfaces that are much simpler and less power hungry,” adds Subramaniam. “Direct bond technology is now available that provides high-density connections between devices with pitches in the range of 1.5µm to 5µm, so it is possible to distribute thousands and even millions of connections between silicon stacks with extremely low power in a very small area.”
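The density arithmetic behind “thousands and even millions” of connections follows directly from the quoted pitch range, since bond density scales as the inverse square of pitch:

```python
# Direct (hybrid) bond density scales as 1/pitch^2.
# Pitches taken from the quote above: 1.5 um to 5 um.

for pitch_um in (1.5, 2.0, 5.0):
    per_mm2 = (1000.0 / pitch_um) ** 2
    print(f"{pitch_um:.1f} um pitch -> {per_mm2:,.0f} bonds per mm^2")

# 1.5 um pitch gives ~444,000 bonds/mm^2, so a few mm^2 of die
# overlap already supports millions of die-to-die connections.
```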

The problems with 3D
3D presents the industry with some new problems and concerns that have to be addressed. “When you stack them vertically, thermal is coupled with electrical, which is coupled with power,” says Cadence’s Park. “It all becomes intertwined. You typically want the things that consume more power to be on the edge of the device, where the heat has more chance to escape, but this is hard to do. When you stack two hot things on top of each other, the one on the bottom has nowhere for its heat to escape.”

This will limit some of the gains that can be made. “Thermal analysis and mitigation are complicated,” admits Arm’s Yeric. “You have to consider the power consumption of different blocks, coupled together with the uncertainty of workload dependence. From the design perspective, we envision that thermal-aware partitioning and block-level placement are important. For example, designers will need to avoid placing power-hungry blocks on top of each other. In the long run, thermal-aware 3D architectures and micro-architectures would help in keeping the 3D stacking roadmap alive. Relationships between temperature and reliability will also need to be enabled within a 3D design flow.”

These problems are real. “Just take a look at memory, which doesn’t generate a lot of heat,” says Park. “If you put a memory on top of a processor, and that processor heats the memory up to over 90°C, the memory starts to fail. So even though the memory does not generate the heat, it absorbs the heat from the processor, and you have created a system that will not work.”
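A minimal one-dimensional thermal model, with illustrative thermal resistances and the simplifying assumption that heat can only escape through a top-mounted heatsink, shows how easily a stacked memory crosses that threshold:

```python
# Minimal 1D series thermal model: memory die stacked on a CPU,
# heat escaping only through the top heatsink. All resistance
# values (K/W) are illustrative assumptions.

T_ambient = 45.0   # C, inside an enclosure
P_cpu     = 15.0   # W dissipated in the bottom (logic) die
R_sink    = 2.0    # heatsink + TIM to ambient, K/W
R_mem     = 0.5    # through the stacked memory die and bond layer, K/W

# Nearly all CPU heat must cross the memory die on its way out.
T_mem = T_ambient + P_cpu * R_sink   # memory temperature
T_cpu = T_mem + P_cpu * R_mem        # CPU junction temperature

print(f"Memory die: {T_mem:.1f} C")  # 75.0 C
print(f"CPU die:    {T_cpu:.1f} C")  # 82.5 C
# Raise P_cpu to 25 W and the memory sits near 95 C, past the ~90 C
# point Park cites where the memory starts to fail.
```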

The impact of thermal does not stop there. “When components are stacked together, the thermal gradient will impact the stresses on each die,” adds ANSYS’ Kim. “Depending upon how you place the chiplets or the IP within a die, and also where you place them on top of other components, it not only affects the timing and the power, but will also impact stress due to thermal changes within the structures of the 3D ICs.”

Again, there is a feedback loop. “You cannot really separate thermal and stress,” said Mentor’s Ferguson. “Heating adds to the stresses. If you have stress, it is likely to add to the heating. They are two sides of the same coin, and you have to think about them both independently and together.”

“These effects compound to create reliability issues,” adds Krishnaswamy. “Thermally induced mechanical stress will marginalize the connection between the chip and the package, which means that the chip will only last for so long, and for an application like autonomous driving that is problematic. This is something we didn’t have to worry about too much in the past.”

While some see the TSVs as a way to help get the heat out of the stack, their placement can create issues. “We do a lot in DRC and LVS to make sure that your devices will behave as expected given their current layout,” says Ferguson. “But when you start putting a heavy die on top of another one, or you have a TSV that is near a device, you can impact the stress on the individual transistors and then it may not work the way you expected.”
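The first-order physics here is the mismatch in coefficients of thermal expansion (CTE) between a copper TSV and the surrounding silicon. The sketch below uses textbook material constants and ignores the actual geometry, so treat the result as an order-of-magnitude estimate only:

```python
# First-order CTE-mismatch stress: sigma ~ E * delta_alpha * delta_T.
# Material constants are textbook-level approximations; real TSV
# geometry needs a proper mechanical model.

E_cu     = 120e9    # Pa, Young's modulus of copper
alpha_cu = 17e-6    # 1/K, CTE of copper
alpha_si = 2.6e-6   # 1/K, CTE of silicon
delta_T  = 80.0     # K, assumed swing from idle to peak load

sigma = E_cu * (alpha_cu - alpha_si) * delta_T
print(f"Thermal mismatch stress: {sigma / 1e6:.0f} MPa")  # ~138 MPa
```

Stresses on this order near a TSV are enough to shift transistor mobility, which is why Ferguson flags TSV placement as a device-level concern.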

More concerns
While thermal issues are large, they are not the only problems that are heightened. “TSVs have an inductive impact,” says Ferguson. “The higher the inductance, the less capable you are of going into very high frequency ranges. It also means that ringing and noise are bigger concerns. There has to be a bigger emphasis on how to reduce noise in multi-die configurations.”

Everything is closer together, and that can lead to greater coupling. “The coupling between the TSVs has to be considered,” says Kim. “Inductance impacts the power delivery network such that at certain frequencies the impedance is increased. Noise levels are now higher because things are closer, and that complicates the design.”
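Kim’s point can be illustrated with a toy power-delivery model: treat the TSV and package path as a series resistance and inductance, and the on-die decoupling as a capacitor. The parallel combination produces an anti-resonance peak where supply noise is amplified. All component values are invented for illustration:

```python
import numpy as np

# Toy PDN impedance: package/TSV path as series R-L, on-die
# decap as C. The anti-resonance peak is where supply noise
# is amplified. Component values are illustrative.

R = 0.005    # ohms, series resistance (assumed)
L = 50e-12   # henries, TSV + package loop inductance (assumed)
C = 100e-9   # farads, on-die + stacked decap (assumed)

f = np.logspace(6, 10, 400)                 # 1 MHz .. 10 GHz
w = 2 * np.pi * f
Z_series = R + 1j * w * L                   # path back to the regulator
Z_cap = 1 / (1j * w * C)                    # local decoupling
Z = Z_series * Z_cap / (Z_series + Z_cap)   # what the die sees

f_peak = f[np.argmax(np.abs(Z))]
print(f"Anti-resonance near {f_peak / 1e6:.0f} MHz, "
      f"|Z| peak ~ {np.abs(Z).max() * 1e3:.0f} mohm")
```

With these values the peak lands near 71 MHz at roughly 100 milliohms; higher TSV inductance pushes the peak impedance up, which is exactly the noise-amplification problem Kim describes.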

Some of those concerns spread out across the die. “Electromagnetic crosstalk is becoming an extremely important physical effect to analyze,” adds Krishnaswamy. “Significant portions of these dies have shared power domains, so you could have electromagnetic coupling between two fairly isolated blocks because of the shared power delivery network. That coupling can compromise your signal transmission or the quality of the signal and cause signal integrity issues.”

Other issues have to be resolved. “The test domain is a little challenging,” says Ferguson. “We have it figured out for the SoC space, and we have put certain test structures in place so we know you have access to them from the tester and can do diagnostic testing in the chip itself. When you have something with a lot of different pieces connected vertically, it becomes a challenge, especially when we bring in the concept of chiplets, because they may be coming from third parties. How do we know how to communicate from one to the next and make sure they are all working together?”

The state of tools
While some of the problems are new and have to be supported by the tools, none is seen as insurmountable. But that does not mean the tooling that exists today is ideal. “There are companies doing this, and there are reference flows for it,” says Park. “A lot of companies are surprised that it is this far advanced, but it is. There is a certain approach that works today.”

Arm’s Yeric describes the current situation. “Current EDA tool capabilities enable two separately optimized designs to be stacked in 3D configuration but do not currently allow any cross-tier optimizations. These capabilities work for first generation 3D products, but the goal is to make EDA tools aware of the 3D solution space. This would enable designers to unleash the full potential of 3D stacking technologies. There is a strong interest from the EDA tool vendors to enable features such as 3D-aware placement and we should see more progress on this front as 3D technology sees more adoption in the industry.”

The EDA vendors are making progress. “There are cutting-edge customers that want to concurrently design the chips in the stack,” says Park. “Instead of coming up with predefined locations for the hybrid bond pads that connect them together, they want to stick them together in the same layout tool and be able to do PnR across the two face-to-face die in the stack — sharing the route resources. For example, if you take a 9-metal-layer chip and an 11-metal-layer chip and put them face to face, you now have 20 metal layers to route on. So even when connecting things on the bottom die, if you run out of routing resources you should be able to use some of the metal layers on the face-to-face die above it. This is what the PhDs of the world are looking into, and the same is true in EDA — this is the future we are looking at.”

Sometimes these changes can lead to new approaches. “We really need new types of interconnect structures to connect different blocks,” says Fraunhofer’s Heinig. “Current papers look toward the distribution of the whole system across more than one die. But such an approach doesn’t make sense for industrial systems, because it is crucial to test and validate the divided logic. In the future, basic blocks such as adders and multipliers could be placed on one die, with new types of real 3D interconnects that enable connecting the basic blocks in novel ways. Such interconnects aren’t available today and need more research. Also, the tools landscape isn’t prepared for such an approach. For example, high-level partitioning tools must be available to rapidly compare different solutions.”
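As a sketch of what such a high-level partitioning tool might do, the toy example below exhaustively scores every two-tier assignment of a handful of blocks against cross-tier net weight, area imbalance, and a crude penalty for stacking hot blocks on the bottom tier. All block names, areas, powers, and nets are invented:

```python
from itertools import product

# Toy two-tier partitioner: exhaustively score every assignment
# of a handful of blocks. All names, areas (mm^2), powers (W)
# and net weights are invented for illustration.

blocks = ["cpu", "gpu", "sram", "serdes", "analog"]
area   = {"cpu": 30, "gpu": 40, "sram": 25, "serdes": 10, "analog": 15}
power  = {"cpu": 8.0, "gpu": 10.0, "sram": 1.0, "serdes": 3.0, "analog": 0.5}
nets   = [("cpu", "sram", 512), ("gpu", "sram", 1024),
          ("cpu", "serdes", 64), ("serdes", "analog", 16)]
half   = sum(area.values()) / 2

def cost(assign):
    # Signals crossing tiers, weighted by net width.
    cut = sum(w for a, b, w in nets if assign[a] != assign[b])
    # Keep tier areas roughly balanced.
    imbalance = abs(sum(area[b] for b in blocks if assign[b] == 0) - half)
    # Crude thermal proxy: penalize hot blocks buried on the bottom tier.
    hot_bottom = sum(power[b] for b in blocks
                     if assign[b] == 0 and power[b] >= 8)
    return cut + 10 * imbalance + (100 if hot_bottom > 10 else 0)

best = min((dict(zip(blocks, t)) for t in product([0, 1], repeat=len(blocks))),
           key=cost)
print("Best split:", best, " cost:", cost(best))
```

Even this toy keeps the heavily connected GPU and SRAM on the same tier while balancing area, hinting at why Heinig wants fast, automated comparison of candidate partitions rather than hand exploration.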

Conclusion
The tools and techniques available today enable basic approaches to 3D stacking, but they are not yet at a point where all of the potential can be unleashed. Some of the issues are understood, but the solutions to them today involve playing it safe and attempting to avoid the problem. Better analysis tools are the first step, and they are coming online. Design and optimization technologies are further behind, but small modifications to existing tools make it possible to reap a significant portion of the gains. 3D will not replace 2.5D or even 2D, but it does provide a new option for certain designs.


