Why multi-die solutions are getting so much attention these days.
Calvin Cheung, vice president of engineering at ASE, sat down with Semiconductor Engineering to talk about advanced packaging, the challenges involved with the technology, and the implications for Moore’s Law. What follows are excerpts of that conversation.
SE: What are some of the big issues with IC packaging today?
Cheung: Moore’s Law is slowing down, but transistor scaling will continue. The packaging industry and the OSATs need to develop technologies to fill the gap. So you will see more SiP (system-in-package), silicon photonics and sensors. Power delivery, power efficiency and interconnect density are the focus moving forward. Advanced packaging is sorely needed for a number of applications. At 7nm, CMOS scaling becomes too expensive. The development cost and the wafer cost become almost unbearable for most companies, so you need to put a solution together with different technologies. You use different chips from different foundries. Advanced packaging, and especially SiP, are playing a role there. The OSATs are helping the industry to lower those costs and continue the benefits of CMOS scaling by using SiP to advance it with much more effective and cost-compelling approaches.
SE: In traditional chip scaling, the idea is to pack the transistors and IP blocks all on a monolithic die. What are the challenges at advanced nodes here?
Cheung: Yield will be a major issue when you squeeze more functionality into a single die. We have tried in the past to integrate analog and memory with logic. Then, the die size and process complexity become prohibitive. We know that analog and memory processes don’t scale the way digital logic does.
SE: Advanced packaging offers more options than ever. You have 2.5D/3D, fan-out and SiP. Then, there are chiplets. How do packaging customers determine which technology is the best one?
Cheung: All of these technologies have their own unique sweet spot for different applications. We work closely with our customers to understand their application needs, and then select the right technology to meet their needs. In fan-out, for example, package size, I/O density, and the number of dies involved all need to be considered to meet the mechanical and I/O density requirements. For 2.5D, the same considerations are addressed, so that the cost of the package will be justified.
SE: So, no one IC package can meet all requirements. The choice depends on the application, right?
Cheung: Exactly. You need to work with the design team (circuit and package design groups) to define the most practical, cost-effective packaging technology to support their needs. For example, fan-out is good for certain applications. 2.5D also has its own sweet spot for certain application needs.
SE: Isn’t there a way to segment these packaging technologies?
Cheung: If you look at the roadmaps, you can divide it into flip-chip, fan-out and 2.5D by density and package size. Density refers to the number of I/Os. Right now, 2.5D can handle the most I/Os. It’s mostly for HBM (high bandwidth memory) and ASICs. 2.5D can handle I/Os, power and grounds in excess of a few hundred thousand bumps. Fan-out sits in the middle in both density and package size. Then, for BGA, you are talking about a few hundred to a thousand I/Os as well as power and grounds.
SE: Fan-out is gaining momentum. In fan-out, the dies are packaged while on a wafer. It provides more I/Os and doesn’t require an interposer, making it less expensive than 2.5D. Where is fan-out heading?
Cheung: Fan-out provides a great option to support shrinking die sizes and increasing I/O density requirements. ASE’s FoCoS packaging technology has demonstrated that wafer-level fan-out supports heterogeneous integration for multi-die, ASIC and memory integration, with the potential of reduced package costs. We will also see a lot more panel-level fan-out development in the next few years.
Fig. 1: Different fan-out approaches: traditional eWLB fan-out vs ASE’s M-Series fan-out.
SE: Chiplets also are creating a buzz—particularly the idea that you can have a menu of modular chips, or chiplets, that can be connected with a die-to-die interconnect scheme and then packaged together. What’s happening there?
Cheung: The idea behind chiplets is to reduce cost, while improving yield and performance. A library of chips can be used, such as high-speed interfaces, memory, accelerators and ASIC functional blocks. More importantly, many of these chiplets don’t require the latest technology nodes. It is supposed to reduce design cycle times and time to market. Our job is to develop a package platform to meet all of the interconnect requirements.
SE: What are the challenges for chiplets?
Cheung: With so many chips on the same package platform, issues related to thermal, warpage, CTE (coefficient of thermal expansion) mismatch, and interconnect density are the major challenges.
SE: Chiplets, fan-out and other packaging technologies enable so-called heterogeneous integration. Instead of packing more transistors on the same die at each node, as defined by Moore’s Law, the other way to get the benefits of scaling is to put multiple advanced chips in an advanced package. What’s involved here?
Cheung: Heterogeneous integration tackles the age-old scaling issue by combining chips with different process nodes and technologies. The die-to-die interconnect distances are so short that they mimic the functional block interconnect distances inside an SoC. For heterogeneous integration, one of the big pushes is complexity and interconnect density. We are assembling 100,000 microbumps with a 55µm pitch. This is with traditional copper pillars and solder tips. The question is whether we can reduce the bump pitch further and still yield in a robust manufacturing environment.
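As a back-of-the-envelope illustration (the uniform square-grid assumption here is ours, not ASE’s), 100,000 microbumps at a 55µm pitch occupy roughly 300mm², on the order of a 17.4mm x 17.4mm array:

```python
# Rough footprint estimate for a microbump array.
# Assumes a uniform square grid; real packages mix pitches and depopulate
# regions, so this is only an order-of-magnitude sketch.

BUMP_PITCH_UM = 55       # microbump pitch cited in the interview (µm)
BUMP_COUNT = 100_000     # microbump count cited in the interview

area_per_bump_mm2 = (BUMP_PITCH_UM / 1000.0) ** 2   # one grid cell, in mm²
total_area_mm2 = BUMP_COUNT * area_per_bump_mm2     # total array footprint
side_mm = total_area_mm2 ** 0.5                     # side of an equivalent square array

print(f"Array footprint: {total_area_mm2:.1f} mm² (~{side_mm:.1f} mm per side)")
# -> Array footprint: 302.5 mm² (~17.4 mm per side)
```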
SE: The industry faces some challenges to extend today’s copper pillar and microbumps beyond a certain pitch. What’s next here?
Cheung: People are working on a lot of new interconnect technologies, such as copper-to-copper bonding. They are more in a laboratory development phase.
SE: How about copper nano-paste technologies for future interconnects?
Cheung: Academia and manufacturing houses need to work together to bring these ideas from the laboratory to production.
SE: If the industry continues to use today’s interconnect technologies, like copper pillars and microbumps, what does that mean for IC packaging in advanced chip designs?
Cheung: As we shrink the process nodes, you are able to implement more functions on a very small chip area. But the I/O requirements will increase in order to bring out the functionality. The silicon is getting very expensive, so you don’t want your I/O requirements to dictate your die size. In other words, you don’t want to make your die size larger to accommodate the number of I/Os. So you want to reduce your I/O pitch. How do you route them out? That’s why fan-out density (bump pitch, line/spacing) will play a major role in designs.
SE: What are the implications for a 7nm design?
Cheung: You need more I/Os. You are able to integrate more functional blocks into the die. So you need more I/Os to route the functions. But the I/O pitch becomes a major handicap. It’s hindering how much functionality you can squeeze into the die.
SE: How do we resolve this?
Cheung: You need finer-pitch solder interconnects. The microbumps for an HBM connection are on a 55µm pitch: a 25µm copper pillar bump, or microbump, plus 30µm of spacing. But in order to put more I/Os in the same area, you need to shrink that pitch. The microbumps support power and ground as well as the I/Os.
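As a minimal sketch of why that matters (the square-grid assumption and the finer pitch values below are hypothetical, not figures from the interview), bump count in a fixed area scales with the inverse square of the pitch:

```python
# Illustrative scaling of microbump count with pitch on a square grid.
# Bumps per unit area scale as 1/pitch², so a modest pitch reduction
# buys a quadratic gain in I/O, power and ground connections.

CURRENT_PITCH_UM = 55.0                    # pitch cited in the interview
candidate_pitches_um = [40.0, 30.0, 20.0]  # hypothetical finer pitches

for pitch in candidate_pitches_um:
    gain = (CURRENT_PITCH_UM / pitch) ** 2  # bump-density multiplier in the same area
    print(f"{pitch:.0f} µm pitch: ~{gain:.1f}x the bumps of a 55 µm grid")
# 40 µm -> ~1.9x, 30 µm -> ~3.4x, 20 µm -> ~7.6x
```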
SE: What happens when the IC industry migrates to 5nm and 3nm designs?
Cheung: I would expect that costs for 5nm and 3nm will increase. The ability to connect more I/Os and route them out will continue to be a challenge. I suspect the ultra-low-k dielectric materials for the die interconnects will continue to evolve. How to handle these wafers will present another challenge for OSATs.
SE: Instead of chip scaling, there are other options. The chiplet concept is one approach. Isn’t that approach just another version of 2.5D?
Cheung: It’s an evolution of 2.5D. Now, we are talking about assembling multiple chips using a silicon interposer, as well as wafer-level or panel-level fan-out.
SE: With chiplets, don’t we have the same challenges as 2.5D, such as known-good-die (KGD) and who takes responsibility for the process? How do we resolve that?
Cheung: Known-good-die becomes a more important issue for chiplets because of the number of dies involved. The cost for rejecting a chiplet package will be very high. For that reason, the design community, wafer foundries and OSATs need to work together to reduce the design and process-induced defects, and establish a methodology on how to test and screen out process and functional defects before they get to the final product stage.
SE: Intel and TSMC have talked about their respective chiplet efforts. Where do OSATs fit in the chiplet and the heterogeneous integration landscape?
Cheung: OSATs like ASE will be the preferred integrators of chiplets for most customers in the industry. The OSATs have been supporting wafers from the wafer foundries all over the world. It is our business to provide services to assemble different chips with different process nodes. There is no IP or conflict of interest here.
SE: The OSATs have been providing these services for years, right?
Cheung: Yes, this is our business model. Our challenges are to work with different customers and foundries to make sure we can work with silicon from different process nodes, and meet the thermal and mechanical design requirements.
Is anyone looking at running liquid cooling through these systems-in-package? It’s perhaps not feasible for mobile, but on servers it would ease the complexity of different heights and power densities while retaining the advantages of high-performance packaging.
Hi Tanj, this response is from Rich Rice, senior vice president of business development at ASE:
“Most of the cooling is done through attached methods that are on the outside or top side of the package. Both air and liquid are used, but external methods are mostly air-cooled heat sinks today.
This is an area for more research to get the cooling closer to the chips, and make the heat transfer more effective without negatively affecting the reliability of the package.”