Future Challenges For Advanced Packaging

OSATs are wrestling with a slew of issues, including warpage, thermal mismatch, heterogeneous integration, and thinner lines and spaces.


Michael Kelly, vice president of advanced packaging development and integration at Amkor, sat down with Semiconductor Engineering to talk about advanced packaging and the challenges with the technology. What follows are excerpts of that discussion.

SE: We’re in the midst of a huge semiconductor demand cycle. What’s driving that?

Kelly: If you take a step back, our industry has always been cyclical. Then, there are some extraneous factors like Covid-19 and the work-from-home economy. That helped precipitate these chip shortages. In addition, AI is one of those technologies that has become a big driver for our industry. It started small. It was just something the data center required for voice recognition and things like that. Now, AI is in almost everything, even if it’s just an embedded core somewhere in a piece of silicon that helps a standard GPU render a certain shape more effectively with predictive capability. It could be for mundane things, but it’s ubiquitous. The thing about AI is it requires a huge amount of compute resources, both for algorithm training and inference, which has pushed the spectrum on high-performance requirements.

SE: IC packaging isn’t new, but years ago it was largely in the background. A package simply encapsulated and protected a chip. Recently, packaging has become more important. What changed?

Kelly: Packaging has been around for a long time. It always has been this thing that connects the real world, via a circuit board, to the integrated circuit. You need to get the signals from the silicon out to something that people can use to create products. The package was simply mounted on a circuit board. For a long time, there were more developments in semiconductor processing. You had new transistors and architectures. You had new ways of boosting the performance with the same transistor or better transistors. That has been the story for over 50 years. That’s where the key technology was centered. It was inside the chip. Over time, more electrical functionality was built in and around the central processor. Then, it became very complicated. There were different voltage domains and transistor requirements. And then we hit a new juncture. To keep increasing the performance at a reasonable cost, you can’t just keep putting all of that functionality into what is going to be a relatively big chip in a cutting-edge node. The wafers are going to be expensive. You can increase your performance, but the cost is going to go up in a fashion that doesn’t justify the performance gains. So you need to come up with a better economic model to maintain that performance-to-cost ratio. One way is to pull the high-speed assets, like your processor cores, into the leading-edge nodes and keep the rest of the chips at other nodes. You can get the same performance by combining dies of mixed nodes in a package at the same or a lower cost. How much flexibility is required depends on the business market you’re talking about. For example, I can use a chiplet that I designed in 10 different products and recombine them in different ways at the package level. Then I won’t need to have a full-custom system-on-a-chip (SoC) design for every one of those products. So the package is the little envelope that’s pulling all the pieces together, making these heterogeneous constructions more powerful. As a result, you have a shorter time-to-market than if you do a custom design every time.
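The economics Kelly describes can be made concrete with a simple yield calculation. The sketch below assumes a Poisson defect-yield model with purely illustrative die areas and defect density (these are not Amkor figures); it shows why one large monolithic die wastes more silicon per good part than smaller chiplets covering the same total area.

```python
# Back-of-the-envelope comparison: one large monolithic die vs. smaller chiplets.
# Assumes a simple Poisson defect-yield model; all numbers are illustrative.
import math

def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson yield: probability a die of the given area has zero killer defects."""
    return math.exp(-defects_per_mm2 * area_mm2)

defect_density = 0.001        # defects per mm^2 (illustrative)
monolithic_area = 600.0       # one big SoC, in mm^2
chiplet_area = 150.0          # each of four chiplets covering the same total silicon

y_mono = die_yield(monolithic_area, defect_density)
y_chiplet = die_yield(chiplet_area, defect_density)

print(f"Monolithic die yield: {y_mono:.1%}")
print(f"Single chiplet yield: {y_chiplet:.1%}")
# Cost per *good* die scales roughly with area/yield, so for the same total
# silicon the monolithic die carries a cost penalty of y_chiplet / y_mono.
print(f"Relative silicon cost of the monolithic die: {y_chiplet / y_mono:.2f}x")
```

With these assumed numbers the large die yields about 55% against 86% for the chiplet, before even counting the higher leading-edge wafer price it would pay for its non-critical blocks.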

SE: What are the other issues?

Kelly: Some companies don’t have enough designers to design a custom SoC for all marketplaces. But if I design chiplets, and then mix-and-match them for various market segments, that’s a better use of my design talent. Packaging is in the mix here. So if you disaggregate an SoC, you need to re-aggregate the IP blocks at the package level to have a fully functional product. That’s pushing the package to do more things. You require fine lines to keep things integrated. You need to manage thermal waste heat or power. You need to deliver power to an increasingly power-hungry device. That’s putting extra demands on the package.

SE: What are the big concerns here?

Kelly: Power dissipation and power usage are big challenges. It’s hitting home in the packaging industry because of the integration at the package level. Unfortunately, silicon generates a lot of wasted heat. It’s not thermally efficient. You need to dump the heat somewhere. We have to participate in the ways that we can, which is between the die and the package edge. We have to make that as thermally efficient as possible for whoever is doing the thermal dissipation in the final product, whether that’s in a phone case or a water cooler in the data center. How much actual electrical current we have to deliver into a high-performance package is also getting interesting. Power is not going down, but voltages are sliding down. To deliver the same total power or more power, our currents are going up. Things like electromigration need to be addressed. We’re probably going to need more voltage conversion and voltage regulation in the package. That way we can bring higher voltages into the package and then separate them into lower voltages. That means we don’t have to drag as much total current into the package. So power is hitting us in two ways. It’s heat, but it’s also managing that power delivery network electrically. That’s forcing more content into the package, while also doing your best on thermal power dissipation.
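A minimal power-delivery calculation shows why pulling voltage conversion into the package reduces the current that has to be delivered. The numbers below are assumptions for illustration, not a specific design: the same power is drawn either at the core voltage or at a higher input voltage that is regulated down inside the package.

```python
# Same delivered power, two places to do the voltage conversion.
# All numbers are illustrative assumptions, not a specific Amkor design.
package_power_w = 600.0   # total power the package must deliver
core_voltage_v = 0.75     # supply voltage at the die
input_voltage_v = 12.0    # higher voltage brought into the package, regulated down inside

current_at_core_voltage = package_power_w / core_voltage_v   # ~800 A
current_at_input_voltage = package_power_w / input_voltage_v # ~50 A

print(f"Current at the package edge, core voltage from the board: "
      f"{current_at_core_voltage:.0f} A")
print(f"Current at the package edge, 12 V in with in-package regulation: "
      f"{current_at_input_voltage:.0f} A")
# The same power crosses the board-to-package interface with ~16x less current,
# easing IR drop and electromigration in the power delivery network.
```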

SE: Any other challenges?

Kelly: We’re starting to see a lot of heterogeneous integration designs. We are just at the tip of it. As we go further into that, the intensity to keep pace with what’s required for the end product is also speeding up. You need to be smart about how the heterogeneous technology is invested in. That way, you can cover as many applications as possible. You also need to stay on or above the technical curve so that you can keep up with and challenge your competitors in this aggressive heterogeneous packaging space.

Fig. 1: Examples of 2.5D packages, high-density fan-out (HDFO), packages with bridges, and chiplets. Source: Amkor


SE: Fan-out packaging is gaining steam. In one example of fan-out, a DRAM die is stacked on a processor in a package. What is fan-out packaging and what does it promise?

Kelly: When you’re talking about fan-out, it helps to divide it into two parts. There’s low-density fan-out. Then, there’s high-density fan-out, which is a more modern innovation for heterogeneous integration of multiple dies. Low-density fan-out has been around for quite some time. It has good electrical properties. It tends to have low layer counts. The package also can be very thin. Low-density fan-out is a good fit for many products, especially mobile. Then there is what I call high-density fan-out. This incorporates the same copper and dielectrics, but we’re imaging them down to finer geometries in terms of lines and spaces. It has multiple layers with tiny vias. High-density fan-out has become a contender for how to integrate small chiplets into bigger modules in this whole heterogeneous universe.

SE: Fan-out and other packages have redistribution layers (RDLs), which are the tiny metal traces that electrically connect one die to another part of the package. What are the line and space dimensions for the RDLs?

Kelly: If you’re talking about high-density fan-out, 2μm line and 2μm space is the sweet spot today. The foundries and OSATs can achieve 2μm-2μm. Once you go below 2μm-2μm or 1.5μm-1.5μm, you are looking at slightly different ways of making the traces. But it’s largely the same dielectrics and copper. A number of companies are working on sub-1μm line/space. Those geometries will be future paths. It comes down to what a product needs. For the next couple of years, 2μm-2μm is going to be a sweet spot for a lot of products. But as bump pitches shrink below 40μm, there will be pressure to add more layers and/or smaller lines, spaces, and vias.
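As a rough illustration of what those line/space numbers buy, the sketch below estimates how many traces fit per millimeter of die edge on a single RDL layer, assuming one line plus one space per routing track (illustrative values only).

```python
# Escape-routing density per RDL layer at a given line/space (illustrative).
# Assumes one line plus one space per routing track.
for line_space_um in (2.0, 1.5, 1.0):
    traces_per_mm = 1000.0 / (2.0 * line_space_um)
    print(f"{line_space_um:.1f} um line/space -> "
          f"~{traces_per_mm:.0f} traces per mm of die edge, per layer")
```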

SE: Fan-out packages are prone to die shift and warpage. What’s happening here?

Kelly: In the old days, stress was the bane of your existence in packaging. That’s still there. One of the biggest challenges in new single-die and multi-die packages is warpage. Unfortunately, silicon has a coefficient of thermal expansion of around 2. That’s 2 parts per million of expansion for every degree C that it heats up or cools. All of the organic materials that we use around it are 10 ppm/°C or larger. When they are in intimate contact with one another in a package, and it is heated or cooled, you are expanding and contracting differentially, depending on where you are in the stack. That makes things move out of plane. There is no such thing as a flat package. It has some sort of warpage or curvature to it. It may not be visible to the eye, but it’s always there. And that adds stress, too. Warpage is something we have to manage. We have good tools for managing it these days. We have a much better materials selection than we did 10 years ago. It’s getting easier to manage warpage for a given size, but the sizes are increasing at the same time. So we are chasing after a moving target.
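The CTE mismatch Kelly describes can be put into rough numbers. The sketch below uses illustrative values (silicon at ~2 ppm/°C, an organic material at ~12 ppm/°C, a 50mm package span, and a 200°C swing such as cooling from solder reflow) to show how much differential expansion the bonded stack has to absorb.

```python
# Differential in-plane expansion between silicon and an organic material.
# All values are illustrative.
cte_silicon_ppm_per_c = 2.0
cte_organic_ppm_per_c = 12.0
package_span_mm = 50.0    # lateral span of a large package body
delta_t_c = 200.0         # e.g. cooling from solder reflow to room temperature

mismatch_per_c = (cte_organic_ppm_per_c - cte_silicon_ppm_per_c) * 1e-6
differential_um = mismatch_per_c * (package_span_mm * 1000.0) * delta_t_c

print(f"Differential expansion across {package_span_mm:.0f} mm over {delta_t_c:.0f} C: "
      f"{differential_um:.0f} um")
# Roughly 100 um of mismatch that the constrained stack must absorb, which is
# what bows the package out of plane; it scales linearly with package size.
```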

SE: Fan-out packages are now incorporating high-bandwidth memory (HBM). How many HBMs can you incorporate in fan-out?

Kelly: Two or four HBMs are no problem. As you go larger, you have to worry about warpage. You have to worry about moving stress around in the module. The question is, can you manage the warpage? Can you manage the power? Can you keep all of the module interconnects at a pitch that makes sense? Can you manage the high currents and the electromigration? As you get bigger, it’s not a linear increase in the challenges. It’s more of an asymptotic increase.

SE: What about 2.5D?

Kelly: 2.5D is the mainstay for high-end AI products, particularly GPUs. That’s a big and growing market. 2.5D is used in data centers to take zettabytes of data and run them against the algorithms to improve those algorithms. When your voice recognition on your phone works better, it’s not because your phone got better. It’s because these high-end AI GPUs can process more data and the algorithms are better. All of that training takes place in the data center.

SE: In 2.5D/3D and other packages, there is a lot of discussion about reticle sizes. What does that mean?

Kelly: Normally, when people say reticle size, they are talking about a semiconductor fab reticle size. When you talk about 3X or 4X the reticle size in packaging, it’s a terminology for how big the interposer could be. In 2.5D, you could have two or possibly four ASICs. Six HBMs is relatively mainstream. You could see eight, maybe 10 HBMs. It’s going to top out there. It’s not just how many HBMs you’re getting in the package, but it’s also how effective the package is. Maybe you are better off taking that giant 2.5D package and splitting it into two packages. Then, you need to find a way to do that and look at all of the system challenges like thermal and electrical power management.
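For context on what "3X or 4X the reticle size" means in area terms, the sketch below multiplies out the common ~26mm x 33mm full-field lithography reticle; the multiples are illustrative.

```python
# Interposer area expressed in multiples of a standard full-field reticle.
# Assumes the common ~26 mm x 33 mm maximum field; multiples are illustrative.
reticle_area_mm2 = 26 * 33   # about 858 mm^2
for multiple in (1, 2, 3, 4):
    print(f"{multiple}x reticle -> roughly {multiple * reticle_area_mm2:,} mm^2 of interposer")
```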

SE: Any other issues with 2.5D?

Kelly: They are large. The interposer itself is a relatively low-tech piece of silicon. It has physical routing on it. And then, if it’s a high-power device, you combine those interposers with embedded capacitors. That helps manage power delivery into the chip. The interposer always has been somewhat of a challenge, because finding a source for interposers is difficult. And interposer availability inside foundries is limited. You can make a lot more money making 5nm chips than you can interposers. Economically, it’s not a good business for a fab. The fab wants to sell high-end silicon. The question is whether we will ever move away from silicon interposers to organic interposers for HBM-based products. They are more readily available in the supply chain. Or will we stay with silicon? The jury is still out. For a while, it’s going to be silicon. It’s reliable. It’s robust. These are long-lived products that end customers don’t necessarily want to mess with because they work.

SE: Do you envision 2.5D with HBM3 coming out soon?

Kelly: People who are on the cutting-edge of AI are already working on getting those products ready.

SE: Where do chiplets fit in?

Kelly: To me, a chiplet is where you break out a functional block, or a collection of functional blocks, that was originally part of a single, discrete SoC. Then, the chiplets must be re-integrated at the package level.

SE: We’ve seen some companies develop chiplet-like designs using die-to-die interconnects, right?

Kelly: There are two camps here. First, there are companies who are on the leading edge in this competitive market. You have leaders out there like AMD, Intel and a few others. They have invested heavily in their own die-to-die chiplet bus interfaces. Some of these are proprietary. Those designs have given them a competitive advantage. They are not going to tell the rest of the world exactly how they’re doing their chiplet interfaces. They need that advantage in this competitive high-performance marketplace. There is also another camp. There are lots of products that will need to migrate from where they are as an SoC today. Maybe they are a year or several years from that. They also will need chiplets for the same reasons as the others. They need to manage costs in a time-to-market environment with limited engineering resources.

SE: The other camp will require several technologies to enable chiplets. For example, to connect one chiplet to another in a package, they will require die-to-die interconnects, right?

Kelly: There are open-source die-to-die technologies from the Open Domain-Specific Architecture (ODSA) sub-project. Multiple companies are working on this together. These technologies are very competitive, meaning they have plenty of bandwidth to support various chiplet architectures. They are flexible enough to support fine pitches, or even larger pitches if it’s an MCM (multi-chip module). Once again, there are going to be two tiers. The top tier is developing their own die-to-die interfaces, which are mostly proprietary. Then you are going to have a growing world that needs chiplets for their own performance, cost, and time-to-market reasons.

SE: In the future, let’s say a company wants to engage with an OSAT to develop a chiplet design using these interfaces. How will this play out?

Kelly: The bus selection, the bus qualification and the bus design are always going to reside in the ASIC or processor design community. Down the road, if a merchant chiplet exchange is open enough, people could source physical silicon from a store. Then, you need to get prototypes built, so you go to an OSAT. That might be a business model that you could see in the future. But it’s a lot more complicated than that, because it takes huge simulation capability to make sure things are going to work during your design phase. Our customers now do that, although we have seen a few customers coming to us for more full-service electrical validation of products. That’s a slowly growing trend. I mentioned the two tiers. As that second tier begins to develop more products, we will likely see more of the design cycle moving inside the OSAT.

SE: What else needs to happen?

Kelly: We are cognizant of these bus types. What we need to understand as an OSAT is what packaging technology is required to wire it up and make it work. Usually it boils down to just a few simple things — bump size, bump pitch, line widths, vias, and maybe layer counts. So we need to understand these buses and how they impact the package. At the end of the day, we’re not actually doing the electrical design, but we’ll see more of that over time. Essentially, the OSAT won’t care whether the die-to-die interface is XSR, AIB, or whatever, as long as you’ve developed what’s going to be needed in advance. It takes a year or two to get significant packaging advances in place and ready.

SE: What about hybrid bonding? Can the OSATs do that?

Kelly: Definitely. We are getting close to a point where you can buy the technology. And with some investment and your own development, you can get there. So there’s not a huge technical hurdle for an OSAT to do it. It depends on whether there is a valid business case that would compel an OSAT to get into that business. We are digging deep on understanding the technology.

SE: I assume OSATs like Amkor will continue with bump pitch scaling?

Kelly: You can certainly push your pitch down. We have demonstrated sub-20μm pitches in die-to-die and die-to-wafer with classical copper lead-free bumps. If you’re going below 20μm, or somewhere in between 10μm and 20μm, you’re going to need to move to copper-to-copper hybrid bonding. Managing the small solder caps on tiny bumps means dealing with the distribution of available solder mass, and at some point those joints aren’t going to be reliable. We generally push as hard as the customer needs to go, and maybe a little bit further. But somewhere between 20μm and 10μm, customers will jump to the hybrid approach. It has a lot of advantages. The power between the die is low. The electrical signaling path is excellent.
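A quick way to see what pitch scaling buys is to count connections per unit area on a square-grid area array; the pitches below bracket the solder-bump to hybrid-bond crossover Kelly describes (illustrative calculation only).

```python
# Connections per unit area on a square-grid area array at a given bump pitch.
# Pitches chosen to bracket the solder-bump vs. hybrid-bond crossover (illustrative).
for pitch_um in (40, 20, 10):
    connections_per_mm2 = (1000.0 / pitch_um) ** 2
    print(f"{pitch_um:>3} um pitch -> ~{connections_per_mm2:,.0f} connections per mm^2")
```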

SE: Does the packaging industry need new breakthroughs?

Kelly: I wish somebody would invent a higher-CTE silicon. That would help us a lot. If we had lower stress, as well as CTEs from different materials that were closer to one another, we would have half the challenges that we have in packaging today. A packaged silicon device is complicated. It’s a mixture of high-CTE metals and organic materials with low-CTE bulk silicon. It’s a very non-homogeneous system. You start with this bulk silicon wafer, and then you process practically everything that’s in the stack, so it’s mechanically a little more predictable. If we can come up with material sets where the CTE differences relative to silicon shrink, then larger systems will be easier to do. Warpage won’t be as challenging. Stress will be lower. Reliability will be better. And cost targets will be easier to meet.




Craig Franklin says:

Mark, as always, a very enjoyable article. For sub-1um RDL structures, including the dielectric and Cu metallization, there have been discussions utilizing CMP which seems very expensive for a “packaging” process. I would be interested in non-CMP approaches.
