
The Path To Known Good Interconnects

Heterogeneous integration depends on reliable TSVs, microbumps, vias, lines, and hybrid bonds — and time to digest all the options.


Chiplets and heterogeneous integration (HI) provide a compelling way to continue delivering improvements in performance, power, area, and cost (PPAC) as Moore’s Law slows, but choosing the best way to connect these devices so they behave in consistent and predictable ways is becoming a challenge as the number of options continues to grow.

More possibilities also bring more potential interactions. So while next-gen applications in AI, 5G, high-performance computing, mobile, and wearables all benefit from various combinations of disparate devices in compact packages, just sorting through the growing number of interconnect choices is a challenge. But the upside is the industry is no longer hamstrung by a set of rules, and the number of possibilities for customizing and optimizing systems is exploding.

“The beauty of heterogeneous integration is now it doesn’t always apply to just electrical,” said Chip Greely, vice president of engineering at Promex. “You can put electro-mechanical devices into your package, as well. With some of our product segments — for example, medical cameras — we’ve got mechanical and electrical functions going together in small footprints. And if you want to have a robust manufacturing process, you’re trying to make your interfaces as tolerant as possible to any misalignments or any variations in placement accuracy, including mechanical interfaces.”

Samsung, Intel, TSMC, and many other device makers are focusing on optimizing die-to-die and die-to-package interconnects in various architectures, whether constructed vertically with microbumps, hybrid bonding, and bridges, or horizontally with fan-out redistribution layers. Deciding how and where interconnections will be formed is becoming a big part of system integration.

The number of packaging options is growing because many of these new designs are highly customized for specific applications. So how they are constructed and connected often depends on the amount and type of data that needs to be processed, where it needs to be processed, and how much power is available. Case in point: Tesla’s D1 Dojo chip, a 50-billion transistor chip used to train AI models inside Tesla’s data center. The emphasis here is on massive data throughput, using highly parallel computation with built-in flexibility, said Pete Bannon, vice president of low voltage electronics at Tesla, in a recent presentation.

Tesla’s device includes 25 D1 chiplets in an array, based on TSMC’s Integrated Fan-Out (InFO) technology. Bannon said the device can achieve 9 petaflops, moving at a speed of 36 terabytes per second using 576 lanes of an I/O ring. It also includes 3 narrow RDL layers and 3 thick RDL layers.
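As a quick sanity check on those I/O figures, the quoted aggregate bandwidth can be divided across the ring lanes. A back-of-envelope sketch; the per-lane number is derived here, not stated in the presentation:

```python
# Back-of-envelope split of the quoted Dojo I/O bandwidth across its 576 lanes.
# The 36 TB/s aggregate and 576-lane count come from the article; the per-lane
# figure is derived, not a Tesla specification.
TOTAL_BYTES_PER_S = 36e12   # 36 TB/s
LANES = 576

per_lane_gb_s = TOTAL_BYTES_PER_S / LANES / 1e9
print(f"Per-lane bandwidth: {per_lane_gb_s:.1f} GB/s")  # 62.5 GB/s
```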

TSMC’s roadmap, meanwhile, calls for new low-resistance interconnects that can decrease resistance by 40%. Fabricated not by damascene processing but by subtractive metal reactive ion etch, with airgaps in place of dielectric, the scheme can decrease capacitance by 20% to 30%. Eventually, 2D interconnect materials could replace copper interconnects, according to Yuh Jier Mii, TSMC’s senior vice president of R&D. “With less resistivity, there is a potential for future scaling with enhanced interconnect performance,” Mii said in a recent presentation.
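Those resistance and capacitance figures compound in a first-order delay model. A minimal sketch, assuming signal delay scales with the R·C product (the model is an assumption here, not TSMC’s):

```python
# Back-of-envelope effect of the quoted interconnect improvements on RC delay.
# Assumes delay scales linearly with the R*C product (a common first-order
# model, assumed here); the 40% resistance and 20-30% capacitance reductions
# are the figures quoted in the article.

def rc_delay_ratio(r_reduction: float, c_reduction: float) -> float:
    """Remaining fraction of RC delay after fractional R and C reductions."""
    return (1.0 - r_reduction) * (1.0 - c_reduction)

for c_red in (0.20, 0.30):
    remaining = rc_delay_ratio(0.40, c_red)
    print(f"R -40%, C -{c_red:.0%}: RC delay falls to {remaining:.0%} of baseline")
```

Under this model, the combined improvements cut first-order RC delay roughly in half.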

Fig. 1: Reconfiguration and interconnect paths from chip on board to heterogeneous integration. Source: TSMC/IEDM [1]

Roadmaps in heterogeneous integration are moving to more chip-to-chip stacking by hybrid bonding, greater use of silicon bridges, and silicon dioxide and polymer interposers of increasing size. There’s a proliferation of architectures and package types to meet different end uses.

Different architectures, priorities
“Advanced packaging architectures are expected to lead to exponential growth in I/O interconnects,” said Seung Wook Yoon, corporate vice president at Samsung Electronics (see figure 2). Yoon provided details of the company’s Advanced Package FAB Solutions (APFS) for chiplet integration at IEDM, [2] highlighting four key processes in advanced packaging flows — thin wafer dicing, hybrid bonding, thin wafer debonding (with zero stress), and vertical interconnects. “For chiplet technology, wafer thickness and bump pitch are key parameters. At present, the most advanced HBM packages have a wafer thickness of less than 40µm, and stack more than 16 dies into a single package.”

Samsung has four different packaging configurations: a 2.5D RDL interposer (R-Cube), a 2.5D silicon interposer (I-Cube), 3D-IC stacking with microbumps or hybrid bonding (X-Cube), and a hybrid interposer (H-Cube).

Fig. 2: Rising interconnect counts in high bandwidth memory and AI/high-performance computing. Source: Samsung/IEDM [2]

Rising electrical, mechanical, and thermal issues are driving HI process solutions, as well. For example, TSMC showed how it is addressing noise problems in a system comprising 4 SoCs and 8 HBMs on a 50 x 54mm organic interposer on a 78 x 72mm substrate (see figure 3). [1] In this design, the microbumps for die-to-die connection have a minimum bump pitch of 35µm. The organic interposer (50 x 54mm, or 3.3X reticle size) contains around 53,000 redistribution layer lines.

Fig. 3: Approximately 53,000 fine-pitch 2µm RDL lines form a total length of 140 meters, connecting 4 SoCs and 8 HBMs with an organic interposer on a laminate substrate (CoWoS-R). Source: TSMC/IEDM [1]

TSMC used a discrete decoupling capacitor integrated on the C4 bump side of its interposer dielectric, very close to SoC devices, to ensure fast suppression of power domain noise. That, in turn, enhances the signal integrity of HBM at high data rates.

Thermal issues, while not new to the semiconductor industry, are exacerbated when more computing and power management devices are placed in close proximity to one another. Greely pointed to combinations like memory and power management ICs, which generally must be segregated within a package. “Power management is like a good old-fashioned hand warmer, while memory doesn’t like to get above 85°C, much less 100°C.”

Interposers, whether silicon-based or polymer-based films, facilitate interconnections and act as stress relief buffers for heterogeneous chip stacks. Stress management, along with die shift minimization, is an ongoing issue that fabs are starting to address from both the architectural planning and process sides.

ASE presented details of three of its vertically integrated fan-out package lines at IEDM. [3] “With 2.5D and 3D, we are seeing an increase in density and bandwidth. But we also see an increase in cost, which led to the development and introduction of our ViPak platform,” said Lihong Cao, senior director of engineering and technical marketing at ASE. “By using silicon bridge, the L/S chip-to-chip interconnect can scale to 0.8µm, or even 0.65µm. So in this, you put the die on last, but put the bridge die on the carrier and connect using copper pillars. And there are two molding steps. The first is to protect the bridge die. So I don’t use RDL for interconnecting, the connections are through the bridge die, and you can design the bridge die using a 65nm process and then attach the chips last.”

Heterogeneous systems are systems or subsystems in their own right. They require system technology co-optimization (STCO), which was a major theme at IEDM as it celebrated the 75th anniversary of the invention of the transistor, with a look forward to the next 75 years. “The best way to celebrate the transistor is to look forward to how we can ensure we bring as much innovation over the next 75 years,” said Ann Kelleher, general manager of technology development at Intel. [4] “System based technology co-optimization (STCO) is the next evolution of Moore’s Law.”

STCO elevates design technology co-optimization to the system level, optimizing design tools for one or more manufacturing processes. The next phase, according to Kelleher, “is what I call working from the workload in.” That encompasses all aspects of the system and software down through the fab process (see figure 4), simultaneously optimizing system design, software, devices, interconnect, transistors, etc.

Fig. 4: STCO begins with the workloads and considers all aspects of fab and packaging manufacturing and design as well as software and system architecture. Source: Intel/IEDM[4]

On the process technology side, Kelleher pointed to changes in transistors to gate-all-around FETs in 2023, high-NA EUV in 2025, next-generation interconnect metals, ferroelectric materials, and the eventual incorporation of optical interconnects.

Hybrid bonding
Hybrid bonding, so-called because it simultaneously bonds copper-to-copper pads and dielectric-to-dielectric fields, provides the ultimate in vertical connectivity. Relative to copper microbumps, hybrid bonds drive signal delay to near-zero while enabling up to 1,000X greater bump density. Microbump pitches currently are above 35µm, while pitches of less than 20µm are being evaluated for hybrid bonding.
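The pitch numbers above translate directly into areal density, since connection count scales with the inverse square of pitch. A rough sketch using the pitches mentioned in the article (the square-grid layout is an assumption made here for simplicity):

```python
# Interconnect density scales with the inverse square of bond pitch.
# Compares the ~35µm microbump pitch and the sub-20µm hybrid-bond pitches
# mentioned in the article, plus Intel's demonstrated 10µm and 3µm pitches.
# A uniform square grid is assumed; real bump maps vary.

def bonds_per_mm2(pitch_um: float) -> float:
    """Connections per mm^2 for a square grid at the given pitch."""
    pitch_mm = pitch_um / 1000.0
    return 1.0 / (pitch_mm ** 2)

for pitch in (35.0, 20.0, 10.0, 3.0):
    print(f"{pitch:>5.0f} µm pitch -> {bonds_per_mm2(pitch):>10,.0f} bonds/mm²")
```

Going from a 35µm microbump pitch to a 3µm hybrid-bond pitch alone yields a roughly 136X density gain under this model; sub-micron pitches would push the gain past 1,000X.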

“We are engaging with customers on several interesting hybrid bonding use cases, including high bandwidth edge AI devices and RF components. The benefits of applying hybrid bonding can be higher performance and/or greater functionalities within form factor constraints, depending on the application,” said Tony Lin, director of technology development at UMC.

Clean interfaces and precise alignment are key elements of production-worthy hybrid bonding processes. Both wafer-to-wafer bonding and chip-to-wafer bonding processes are available. W2W is more mature, but it requires chips of the same size, offering little flexibility. Chip-to-wafer flows are more complex and are subject to die placement alignment inaccuracy. One method for improving placement accuracy is to perform collective D2W bonding of many die at once (see figure 5). [5] There are various methods for debonding, too, with a focus on minimizing substrate stress, lowering cost, and improving throughput.

For instance, thermal methods are low cost, but they introduce stress, and throughput is low. Chemical methods can be performed at room temperature, but again throughput is low, according to Alvin Lee, deputy director at Brewer Science. Laser debonding offers faster throughput and low stress, but the equipment cost is high. Next-generation photonic debonding uses high-intensity light to rapidly debond wafers from glass, introducing little stress at a more moderate tooling cost, Lee said. Collective D2W hybrid bonding is an enabling technology for fan-out packaging.

Fig. 5: Process flow for collective die-to-wafer hybrid bonding provides higher throughput and superior alignment accuracy than individual pick-and-place. Source: Brewer Science

One of the added benefits for early adopters of hybrid bonding may be the ability to achieve performance gains equivalent to a technology node transition. “Our customers continue to have the need to achieve faster performance, greater power efficiency, and lower cost in their IC designs, which in the past was enabled by shrinking transistors,” said UMC’s Lin. “As it becomes more challenging and costly to keep up with Moore’s Law, hybrid bonding can deliver the performance improvements our customers seek, making it a flexible alternative solution to technology node migration.”

Intel revealed its R&D progress in hybrid bonding, extending from 10µm-pitch copper-copper bonds in 2021 to 3µm-pitch bonds last month (see figure 6). [6] New process modules optimized specifically for hybrid bonding included tuning the PECVD oxide deposition process to deposit thick (20µm), low-stress films, improving the oxide CMP slurry for faster polishing, and creating high-aspect-ratio etching and filling processes for through-dielectric vias.

Fig. 6: Hybrid copper-copper bonds with 10µm pitch were demonstrated in 2021 and 3µm pitch bonds in 2022, roughly a 10X density increase over the 2021 demonstration. Source: Intel

But there also are kinks to work out of these processes, and that will take time. For example, die shift can be a significant issue for advanced packaging and heterogeneous integration. “Maybe your interconnect pads are oversized so that you can compensate for any die shift,” said Greely. “When you put down an RDL layer, registration is going to be key.”

An interposing structure
The interposer itself is not a discrete component. It’s an intermediary construct between the die (or dies) and the laminated substrate below. And although the industry often refers to silicon interposers, the insulating material in a silicon interposer is a dielectric, silicon dioxide. Polymer-based interposers are significantly less expensive than silicon interposers, but they lack the reliability needed for certain applications.

TSMC explored the advantages of organic interposers in terms of electrical performance, warpage control, yield, and reliability. “Transmission loss is a function of line length. For a fixed energy-per-bit power consumption design budget, the interconnect length needs to be shortened to achieve high bandwidth,” said Shin-Puu Jeng, director of the Backend Technology Service Division at TSMC.

The foundry has been working on improving reliability in its stacking technology. “The advantage of CoWoS-R is greater when you go to high speed because the advantage of RC degrades slower at high frequency,” said Jeng. The organic interposer in CoWoS-R consists of copper lines in polymer (dielectric constant = 3.3). “Very dense vertical connections enable a low impedance power delivery network. [1] The simulated eye diagram of Cu/oxide, thinner Cu in oxide, Cu in polymer, shows there is greater flexibility of line length in Cu in polymer. In the case of CPU-to-HBM interconnects, the long RDL interconnects (L/S = 2µm/2µm) were made thick (4µm) in order to reduce the loading for high-speed data transmission, but also to improve the IR drop for the power delivery network. There is lower insertion loss in via in polymer relative to thin or thick TSVs. RC delay impacts power consumption. Power delivery has a horizontal and vertical delivery component. Very dense vertical connection provides low-impedance PDN. The decoupling capacitor is important to suppress power noise and to enable stable voltage supply.”
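The RC behavior Jeng describes can be roughed out from first principles. A minimal sketch for a single RDL line: the 2µm width, 4µm thickness, and polymer dielectric constant of 3.3 come from the article, while the 1mm length, the 2µm dielectric gap, bulk copper resistivity, and the simple parallel-plate capacitance model are illustrative assumptions:

```python
# First-order R and C estimate for a fine-pitch RDL line like those described
# above. Assumptions (not from the article): 1 mm line length, 2 µm dielectric
# gap to the adjacent conductor, bulk copper resistivity, and parallel-plate
# capacitance only (no fringing). Width (2 µm), thickness (4 µm), and the
# polymer's dielectric constant (3.3) are figures quoted in the article.

RHO_CU = 1.7e-8   # ohm·m, bulk copper resistivity
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def rdl_rc(length_m, width_m, thick_m, gap_m, eps_r):
    r = RHO_CU * length_m / (width_m * thick_m)      # wire resistance
    c = EPS0 * eps_r * (width_m * length_m) / gap_m  # parallel-plate capacitance
    return r, c, r * c

r, c, rc = rdl_rc(1e-3, 2e-6, 4e-6, 2e-6, 3.3)
print(f"R = {r:.2f} ohm, C = {c * 1e15:.1f} fF, RC = {rc * 1e12:.3f} ps")
```

Even at these crude assumptions, the numbers show why the thick 4µm lines help: halving the thickness would double R, and with it the RC loading on high-speed links.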

Building bridges
Intel and TSMC have been using proprietary silicon bridge technologies to interconnect high-bandwidth memory modules and CPUs/GPUs. And ASE recently introduced a packaging platform with an embedded bridge (FOCoS-B), capable of connecting chiplets to chiplets with 0.8µm lines and spaces.

“Due to the inherent fan-out RDL process limitation, FOCoS-CF and FOCoS-CL (chip first and chip last) solutions have hit a bottleneck for the manufacture of RDL with high layer count (>6 layers) and fine line/space (L/S = 1µm/1µm) for the applications that require high-density die-to-die connections, high input/output counts, and high-speed signal transmission,” said ASE’s Cao. FOCoS-B offers several options for integrating multiple bridge dies. In one example, 8 silicon bridge dies are embedded in two identical fan-out RDL structures carrying 2 ASICs and 8 HBM2e modules, and the two fan-out modules are assembled on one flip-chip BGA substrate in an MCM (see figure 7). The fan-out modules are each 47 x 31mm, and the package body size is 78 x 70mm.

Fig. 7: Fanout chip on substrate bridge (FOCoS-B) schematic (above) and cross-section (below) enables smaller die-to-die connections (0.8µm) than is possible with RDLs. Source: ASE/IEDM

Cao explained that ASE engineers also compared the chip-last and chip-first FOCoS approaches with 2.5D integration in terms of insertion loss, warpage, and reliability. Both FOCoS approaches demonstrated superior electrical performance to 2.5D silicon interposers with TSVs, due to the elimination of the interposer and the resulting reduction in parasitic capacitance and crosstalk. Package-level warpage, primarily induced by the CTE (coefficient of thermal expansion) mismatch between the die, the fan-out modules, and the substrate, was well controlled, and all packages passed open/short and functional tests prior to assembly, as well as reliability stress tests to JEDEC conditions.

But this still isn’t simple. “When I was designing BGA substrates, copper balance was pounded into me to ensure we made good straight, flat board substrates,” said Promex’s Greely. “Now, copper balance is a problem at the individual package level, where I’m putting in 7, 10, 12 different devices, die attaching them onto a substrate at different temperatures, and I’m getting a 12- to 14-micron variation of warpage from one temperature to another. If I’ve got a 50 millimeter substrate, and it’s got 250 microns of deflection, concave at room temperature, and it goes the other way at 300 degrees, now it’s convex. And I’m trying to put a nice piece of 25 micron back-ground silicon down on that thing and expect it to stay in one piece after it cools back down to room temperature. That might be an extreme example, but these are serious challenges.”
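The mismatch Greely describes can be estimated from free thermal expansion alone. A rough sketch: the 50mm substrate and ~300°C swing come from the quote, while the CTE values are typical textbook numbers assumed here, not measurements from the article:

```python
# Rough thermal-expansion mismatch for the scenario described above: a 50 mm
# package cycled from room temperature to ~300°C. The CTE values below are
# assumed, typical textbook figures (silicon ~2.6 ppm/°C, organic laminate
# ~17 ppm/°C), not data from the article. Free (unconstrained) expansion only;
# real warpage depends on stack stiffness and geometry.

def expansion_um(length_mm: float, cte_ppm_per_c: float, delta_t_c: float) -> float:
    """Free thermal expansion in microns: dL = alpha * L * dT."""
    return length_mm * 1000.0 * cte_ppm_per_c * 1e-6 * delta_t_c

L_MM, DT = 50.0, 275.0              # 25°C -> 300°C
si = expansion_um(L_MM, 2.6, DT)    # silicon die
lam = expansion_um(L_MM, 17.0, DT)  # organic laminate substrate
print(f"Si expands {si:.0f} µm, laminate {lam:.0f} µm -> mismatch {lam - si:.0f} µm")
```

A free-expansion mismatch on the order of 200µm across a 50mm body is what the bonded stack must absorb as stress or warpage, which is why a 25-micron-thin die is at risk of cracking.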

Thermal management
In packages, more than 90% of the heat dissipates out the top of the chip through the package to a heat sink, typically anodized aluminum with vertical fins. Thermal interface materials (TIMs) with high thermal conductivity are placed between the chip and package to help transfer heat. Next-generation TIMs for CPUs include metal alloy sheets (such as indium and tin) and sintered silver, which conduct roughly 60 W/m-K and 50 W/m-K, respectively.
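Fourier’s law gives a feel for what those conductivity numbers buy. A minimal sketch: the two conductivities come from the paragraph above, while the chip power, die area, and bond-line thickness are illustrative assumptions:

```python
# Steady-state temperature drop across a TIM layer via Fourier's law:
#   dT = q * t / (k * A)
# The 60 and 50 W/m-K conductivities are the figures quoted above; the 100 W
# chip power, 1 cm^2 die area, and 100 µm bond-line thickness are assumed,
# illustrative values, not from the article.

def tim_delta_t(power_w: float, thickness_m: float, k_w_mk: float, area_m2: float) -> float:
    """Temperature rise across the TIM for a uniform heat flux."""
    return power_w * thickness_m / (k_w_mk * area_m2)

for name, k in (("metal alloy sheet (~60 W/m-K)", 60.0),
                ("sintered silver (~50 W/m-K)", 50.0)):
    dt = tim_delta_t(100.0, 100e-6, k, 1e-4)
    print(f"{name}: {dt:.2f} °C across the TIM")
```

At these values the TIM costs only a degree or two, versus tens of degrees for a conventional ~5 W/m-K grease, which is the whole motivation for the higher-conductivity materials.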

Engineers and materials suppliers continue to explore alternative TIMs. “Materials that used to be exotic are becoming less so,” said Nathan Whitchurch, senior staff mechanical engineer at Amkor Technology. “So with sintered silver, you end up with a very high thermal conductivity matrix of a silver alloy between lid and die. Another is softer TIMs — indium-base types of things. A couple years ago we were talking pretty frequently about phase-change materials. That seems to have died off as people realized the reliability and the advantages just weren’t there. And things like graphite pads have engineering challenges that are too difficult to overcome. Graphite in a single direction is highly thermally conductive but getting that into packages is a difficult challenge. So that’s where we’ve seen the more exotic materials become less exotic over time.”

Chiplets in advanced packages are electrically interconnected through solder, microbumps, RDLs, and hybrid bonds. All those connections need to be reliable for the life of the module. As package types proliferate and new, lower-stress processes come in, engineers are finding that the flexibility heterogeneous integration provides may be worth all the challenges.

Discussions about chiplets and heterogeneous integration don’t often reference just how early the industry is in adopting this new paradigm. “UCIe is a very good open standard,” said Bill Chen, fellow and senior technical advisor at ASE. “And some people are running faster than the standards. But then there will be feedback from users.” That feedback loop will provide more insight into what is required going forward. Additionally, there will be learning within the supplier-customer ecosystem as to which types of heterogeneous integration, assembly techniques, processes, design tools, and so on work best. It will be a process.

“The semiconductor industry is only just beginning its journey into chiplets and heterogeneous integration, because device scaling is becoming so difficult and expensive, and the PPAC benefit is shrinking with each advanced node,” said Samsung’s Yoon. “Chiplet design standards will become more commonplace, and more predictable ways of putting these devices together will take over. But all of this will take years, requiring the collection of big data, collaboration among partners, and cross-value-chain experimentation to determine what works.”


  1. S.-P. Jeng and M. Liu, “Heterogenous and Chiplet Integration Using Organic Interposer (CoWoS-R),” IEEE International Electron Devices Meeting (IEDM), Dec. 2022, paper 3.2.
  2. S.W. Yoon, “Advanced Package FAB Solutions (APFS) for Chiplet Integration,” ibid., paper 3.6.
  3. L. Cao, “Advanced Packaging Technology Platforms for Chiplets and Heterogenous Integration,” ibid., paper 3.3.
  4. A. Kelleher, “Celebrating 75 Years of the Transistor: A Look at the Evolution of Moore’s Law Innovation,” ibid., paper 1.1.
  5. A. Lee, “New Developments in Materials Technology for Advanced Packaging,” Heterogeneous Integration Global Summit, Sept. 2022.
  6. A. Elsherbini, “Enabling Next Generation 3D Heterogeneous Integration Architectures on Intel Process,” IEEE IEDM, Dec. 2022, paper 27.3.

