Monolithic Vs. Heterogeneous Integration

New processes, materials, and combinations of existing technologies will determine future directions for semiconductors.


Experts at the Table: Semiconductor Engineering sat down to discuss two very different paths forward for semiconductors and what’s needed for each, with Jamie Schaeffer, vice president of product management at GlobalFoundries; Dechao Guo, director of advanced logic technology R&D at IBM; Dave Thompson, vice president at Intel; Mustafa Badaroglu, principal engineer at Qualcomm; and Thomas Ponnuswamy, managing director at Lam Research. This discussion was held in front of a live audience at SEMICON West. [Find part 2 here.]


L-R: GlobalFoundries’ Schaeffer; IBM’s Guo; Intel’s Thompson; Qualcomm’s Badaroglu; Lam Research’s Ponnuswamy. Photo credit: Heidi Hoffman/SEMI

SE: Is planar integration on a single substrate running out of steam? Everyone at the leading edge seems to be heading in the direction of stacking dies.

Schaeffer: Yes and no. It boils down to various cost and performance considerations. For the advanced nodes, there’s definitely a trend toward disaggregation of those components. From a cost perspective, continuing to design analog components that integrate higher voltages into those chips can be cost-prohibitive. Some of those devices will be disaggregated. There also are performance requirements that need multiple processing units connected together, which is forcing disaggregation for large language models, along with increasing memory content for storing the weights. At the same time, there are some positive developments in the equipment industry for heterogeneous integration and TSVs, which allow components to be disaggregated. For those reasons, advanced nodes are reaching the end of the line for monolithic integration. On the other hand, for more mainstream applications — 12nm and above — there’s still very much a drive for monolithic integration. In applications where security, latency, and the lowest possible power are important, there’s still a cost advantage to integrating everything into a monolithic die.

Guo: For us there is an opportunity for both of those. We have invested in technologies like backside power delivery and 3D stacking, putting an nFET on top of a pFET, or a pFET on top of an nFET. But with continued scaling, the processes are becoming more and more challenging. That is driving the interest in chiplet technology. But it depends on whether you are focusing on high performance or low power. The advantage of chiplet technology is that it can use different nodes for different dies — logic, memory, I/Os, RF — in the same package. That provides a huge opportunity. At IBM we are doing both.

Thompson: If you look at where we’re going with gate-all-around structures, that’s apple pie. We’ve been very clear that is what’s coming out at 20A. But if you go beyond 18A to 14A and whatever is next, at some point in the roadmap you will have to get to a stacked RibbonFET, where you have nMOS and pMOS on one another. So you actually start bringing in these layer transfer technologies, potentially, if you want (110). We lose that ability when we go from a finFET, where you have the side of the structure available, to gate-all-around. So there’s a real benefit to marrying a (110) substrate on a (100) to get the most out of our CMOS. Now, as it relates to packaging, it’s an orthogonal axis. We still have to scale our silicon technologies, and packaging is really going to help us de-bottleneck the von Neumann compute so that we can get more juice out of the packages. At least that’s what Intel has learned during the first 10 months of this journey.

Badaroglu: One of the main reasons we are going to package-focused integration is that a lot of products have different lifecycles. For the mobile market, each year we need to introduce a new mobile product, and most of the time the SKUs on mobile products are defined with different metrics. There are different AI chiplets, different die-to-die links, and things like that. You use the same packaging platform to generate different product KPIs, sometimes on a six-month cadence but most of the time on a one-year cadence, and that doesn’t match the periodicity of manufacturing. Also, you need to work with different suppliers, like memory vendors. We have different memory interfaces for DDR and different high-speed constraints. So you have to define all your systems in a single package. This allows us to work with many different vendors at the same time.

Ponnuswamy: If you look back 20 to 25 years ago, people were questioning whether we would go to sub-1µm. Today, we’re talking about sub-2nm. For us, the opportunity and challenges involve ALD and ALE processes, where precision is very important. And on the local interconnect side, the switch is already happening from copper to cobalt, and now we’re talking about molybdenum and possibly ruthenium. All of these pose their own challenges, but we are preparing appropriate solutions. On the monolithic or 3D-IC side, we’ve got hybrid bonding and we’ve got TSVs. We are developing solutions for each of these.

SE: How far can the non-finFET manufacturing processes be extended before they start running into the same kinds of issues as the most advanced nodes?

Schaeffer: We don’t necessarily think about scaling to the next technology node with our roadmaps. We’re looking at figures of merit for technology nodes. So for SOI we think of it in terms of different figures of merit, such as power and performance. How do you scale from 350 to 550 GHz with that technology? We can improve that in the RF dimension. We also can continue to scale that in the digital dimension by scaling to 12nm, and possibly to 10 or 7nm, before that transistor runs out of steam. We also think of our other technologies, like BCD, in terms of voltage scaling. And we look at how to continue to scale up voltage in data centers to be able to deliver the highest voltage possible, as close to the processor as possible, so we can minimize any power losses.

SE: This brings up an interesting point, because it’s now all about delivering different solutions to different customers.

Badaroglu: I totally agree. For mobile, it’s about cost or delivering a new function. For automotive, it’s about stringent operating environment requirements. These KPIs define what the package will look like, in terms of integration and what kinds of new functions you bring to the technology. But usually the SoC fabrics for both are identical, so you always need a die-to-die link. And you always need an AI engine and a CPU, but there is not a big change between those architectures. The differentiator is the technology, the substrate, and the package used to determine which is for automotive and which is for mobile. The basic KPIs are strongly differentiated by these technologies.

Guo: For SoC logic scaling, there is a huge opportunity for different substrates. Moving from finFETs to nanosheets, because of the orientation of the transport surface, we lose the intrinsic mobility benefit. At the same time, people are using leading-edge logic technologies for power and performance benefits. From that perspective, the substrate material, which includes silicon germanium, can provide stress, and so can materials from the middle of line or even the back of line. I see ruthenium replacing copper, eliminating the barrier layer. That creates new opportunities for the industry. So we look forward to collaborating with the industry on material innovation in different patterning steps. Now coming to monolithic integration, how can we use advanced substrate packaging to incorporate CPUs, RF chips, and I/O devices all together? That’s an opportunity, because now the bandwidth will increase significantly compared to traditional packaging. And TSVs, hybrid copper bonding, and thin dielectrics — especially in the packaging material domain — offer huge opportunities.

Thompson: What keeps me up at night, as we go to these gate-all-around structures and stacked FETs, is that the substrate really has to help transfer heat out of the chip. We have to begin looking at engineering thermally conductive dielectrics, because we have a heat evolution challenge as we continue driving up the density of these components. A lot of effort today is focused on how to get the heat out of there, and how to build that into the packaging strategy. That will influence how we approach the substrate.

SE: We’re seeing a number of new materials being used. Cobalt is already used for interconnects. We’re also seeing ruthenium caps and liners. What sort of challenges does that create for different equipment?

Ponnuswamy: Ruthenium is being considered as a replacement for tungsten. There is no production-ready process for it yet, obviously, such as ALD. But it’s not just about production volume. One thing we need to think about here is cost-effectiveness. We’re also putting a lot of emphasis right now on early learning. It’s what we call shift left. We need to do as much as we can, investing in R&D to evaluate these new materials and integration schemes, so when it’s prime time we are ready to go and meet our customers’ requirements. When you start dealing with these new materials, you run into challenges of voids and thickness distribution across the wafer. There’s a lot of work that needs to be done.

Schaeffer: With disaggregation of compute systems, you start to have more opportunities for monolithic integration. Silicon photonics is one of those. Our 45 RFSOI technology for silicon photonics allows you to integrate RF CMOS, passive devices, as well as modulators, photodiodes, and laser attach in that single die. You also have other options for power delivery, such as advanced gallium nitride, ECD (electrochemical deposition), and SOI-based technologies, to get power to advanced processors. On the other hand, SOI provides an easy path to monolithic integration. You’re seeing it already today in places where you have multiple antennas and you want to do 3D integration of those antenna switches in order to save space. You’re seeing it in the front-end modules of millimeter-wave devices, where previously there were multiple components like PA (power amplifier) elements, LNA (low-noise amplifier) switches, and millimeter-wave up/down converters. Those can all be integrated into a single monolithic die using technologies such as SOI, which weren’t available before.

Guo: From a homogeneous integration perspective, there is an opportunity for SOI or advanced substrates. We’ve talked about nanosheets, and the industry is enabling backside power delivery networks. After that, the industry is actively exploring stacked transistor options, with nFETs on top of pFETs. So if you think about backside power delivery networks and stacked transistors, both technologies require wafer bonding. Then you flip the wafer, and you bond the second one. Wafer bonding and thinning are both required for existing technologies. When we look at thinning, SOI or advanced substrates have unique advantages. Of course, cost is still a factor for us to consider. Whether the entire industry adopts it depends on the volume and the ease of transition. If each technology developer has to do bonding and thinning, that’s not straightforward. But if you have a substrate provider to enable those common solutions, that could have a big impact on the cost.


