How manufacturing and packaging will shift over the next couple of decades.
Part one of this forecast looked at evolving transistor architectures and lithography platforms. This report examines revolutions in interconnects and packaging.
When it comes to device interconnects, it’s hard to beat copper. Its low resistivity and high reliability have served the industry exceedingly well as both on-chip interconnect and wires between chips. But in logic chips, with interconnect stacks rising into the 14-level range and resistive-capacitive (RC) delay becoming a bigger portion of total delay, fabs are looking to alternative metals to maintain performance.
One option for reducing RC delay and helping to shrink standard cells is backside power delivery. This somewhat radical proposition delivers device power through the back side of the chip instead of the front side, relieving interconnect congestion and improving power delivery. A second option is hybrid bonding, which offers a variety of advantages, including the ability to combine different devices with minimal delay.
Before IBM developed the dual damascene method for depositing copper interconnects into lines and vias, the industry used aluminum in subtractive deposition and etch schemes. Now, copper interconnects are reaching their scaling limit due to the influence of the liner metal (typically cobalt) and the barrier on resistivity. Alternative metals do not require a liner or barrier, but their integration is likely to require a transition back to deposition and etch processes. This is a major shift in integration scheme, in which dual damascene for wide interconnects and subtractive etch schemes for narrow interconnects would run on the same production lines.
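For a rough sense of why the barrier matters at narrow line widths, the short sketch below estimates how much of a line's cross-section a fixed-thickness barrier/liner consumes as the line shrinks. The line dimensions and the 2nm barrier thickness are illustrative assumptions, not figures from any specific process.

```python
# Illustrative sketch (assumed geometry, not vendor data): how much of a narrow
# line's cross-section a fixed-thickness barrier/liner consumes. The barrier is
# far more resistive than the fill metal, so the lost area translates almost
# directly into higher line resistance for copper; a barrier-free metal such as
# ruthenium keeps the full cross-section.

def conducting_fraction(width_nm, height_nm, barrier_nm):
    """Fraction of the line cross-section left for the fill metal after a
    barrier/liner covers both sidewalls and the bottom."""
    full = width_nm * height_nm
    core = (width_nm - 2 * barrier_nm) * (height_nm - barrier_nm)
    return core / full

for w in (24, 16, 12):                       # line widths in nm (assumed)
    f = conducting_fraction(w, 2 * w, barrier_nm=2.0)
    print(f"{w} nm line: {f:.0%} of the cross-section carries current, "
          f"~{1 / f:.1f}x resistance penalty from the barrier alone")
```

At a 12nm line width with a 2nm barrier, only about 61% of the cross-section is left for the fill metal in this simplified model, which is the kind of penalty that makes barrier-free metals attractive.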
Ruthenium and molybdenum appear to be the strongest candidates for replacing copper, with first implementation expected in the buried word lines of DRAM or the finest metal levels of logic devices.
“Controlling oxidation of the metals during and after etching will be a big challenge, especially in cases where high-aspect-ratio metal lines are used to attain lower resistance, where integrating air gaps between the lines will be desirable,” said Robert Clark, senior member of technical staff and technology director at TEL. Air is the ultimate low-k material (k = 1.0), but it sacrifices structural support, unlike low-k dielectrics (k ≈ 3.3) and silicon dioxide (k = 3.9).
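Because line-to-line capacitance scales roughly linearly with k for a fixed geometry, the relative RC benefit of each dielectric option can be estimated directly from the k values quoted above. A minimal sketch, assuming everything else is held constant:

```python
# Back-of-the-envelope sketch: line-to-line capacitance scales linearly with the
# dielectric constant k, so swapping the inter-line dielectric changes RC delay
# roughly proportionally (same geometry and metal assumed). k values from the text.
k_values = {"air gap": 1.0, "low-k dielectric": 3.3, "silicon dioxide": 3.9}

baseline = k_values["silicon dioxide"]
for name, k in k_values.items():
    print(f"{name:18s} k = {k:.1f} -> RC delay ~ {k / baseline:.0%} of the SiO2 case")
```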
Nonetheless, leading chipmakers and tool suppliers are pursuing subtractive Ru and Mo etching with air gaps as the dielectric. Of the two metals, ruthenium is far less susceptible to oxidation, making it more compatible with etching and cleaning processes. Molybdenum, which oxidizes easily, is more compatible with damascene flows.
IBM and Samsung developed a ruthenium and air gap integration scheme that addresses an impending issue with closely spaced, tall interconnect lines.[1]
“One of the challenges that we’ve seen is that we get line wiggling as we try to fill these tight pitch lines by CVD,” said Chris Penny, senior engineer at IBM Research. “We start to get into the cohesive forces pulling the lines together and you get significant CD variation or line collapse, which we presented at IITC.”
Penny described a top-via process flow using a spacer pull approach, which is similar to double patterning in dual damascene. The self-aligned litho-etch-litho-etch (SALELE) steps form the top via and the underlying metal line. “We transfer the patterning into the ruthenium directly, so it allows a lot of flexibility in the design space,” noted Penny. “You’re not limited to narrow lines and you’re not limited to wide lines.”
In efforts to extend copper processes as far as possible, chipmakers are eliminating barrier metal deposition (TaN) at the bottom of vias, which significantly reduces via resistance. The IBM/Samsung team demonstrated 18nm-pitch ruthenium lines with aspect ratios up to 4:1 and surrounding air gaps.
Fig. 1: Removing the TaN at the via bottom in copper dual damascene yields roughly the same reduction in via resistance as the subtractive ruthenium and air gap approach. Source: IBM Research
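As a back-of-the-envelope illustration of why the via-bottom barrier is worth removing, the sketch below treats the thin TaN film as a series resistor in the current path. The barrier thickness, via diameter, and TaN resistivity are all assumptions chosen to show the order of magnitude, not values from the IBM/Samsung work.

```python
# Rough sketch (assumed numbers): a thin TaN film at the bottom of a via acts as
# a series resistor in the current path, R = rho * t / A.
import math

rho_tan = 2.0e-6        # ohm*m, assumed thin-film TaN resistivity
t = 1.5e-9              # 1.5 nm barrier thickness (assumed)
d = 15e-9               # via diameter (assumed)
area = math.pi * (d / 2) ** 2

r_barrier = rho_tan * t / area
print(f"Series resistance from the via-bottom barrier alone: ~{r_barrier:.0f} ohms")
```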
Backside power delivery
Another disruptive change in the way interconnects are fabricated is backside power delivery (BPD), which moves power delivery to the wafer backside so that the interconnect levels above the transistors carry signals only. The reason for the split is that power delivery and signal transmission have different needs. Power ultimately follows a low-resistance path (fatter wires), but large currents make it susceptible to electromigration. For signals, engineers want low capacitance and small cross-sections, but some resistance is acceptable. With 12 to 14 metal levels in advanced logic, power density is rising, and the drop in supply voltage (IR drop) across the delivery network becomes significant.
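A simple way to see the motivation is to compare the IR drop through a resistive frontside grid against a thicker, wider backside grid at the same current. The resistances and current below are illustrative round numbers, not measurements of any real power delivery network.

```python
# Simple IR-drop sketch (illustrative values only): the same current drawn
# through a thin frontside power grid versus a thicker backside grid.

i_supply = 2.0                 # amps drawn by a block of logic (assumed)
r_frontside_grid = 0.025       # ohms through tall, thin frontside wiring (assumed)
r_backside_grid = 0.005        # ohms through fat backside rails + nanoTSVs (assumed)

for label, r in [("frontside delivery", r_frontside_grid),
                 ("backside delivery", r_backside_grid)]:
    drop_mv = i_supply * r * 1e3
    print(f"{label}: IR drop ~ {drop_mv:.0f} mV at {i_supply} A")
```

With supply voltages below 1V, tens of millivolts of IR drop is a meaningful fraction of the budget, which is why moving the resistive part of the network to the backside pays off.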
Imec’s approach to BPD uses fine-pitch nanoTSVs (200nm pitch, 320nm deep) that extend down from metal-0 and land on the buried power rails with tight overlay control. In a finFET test vehicle, the team achieves this by bonding the front side to a carrier wafer, thinning the wafer, and then etching and filling the TSVs. Incorporating a backside decoupling capacitor (a metal-insulator-metal capacitor) reduces IR drop even further. The design is scalable beyond the 2nm node because no standard cell area is consumed by the TSVs.
BPD can reduce the number of tracks in standard cells. In addition to imec’s approach, there are two other schemes for backside power delivery with increasing levels of process complexity. All three share the challenges of thinning wafers to roughly 10µm and aligning backside-to-frontside connections, and there are concerns about series resistance, especially in the case of stacked chips. But once backside power distribution networks are established, chipmakers gain another degree of freedom to incorporate passive or active devices on the backside.
Perhaps the most compelling change in interconnect density is associated with hybrid bonding. In fact, hybrid bonding is being used to realize backside power distribution. Hybrid bonding joins copper pads and the surrounding dielectric directly, delivering up to 1,000X more connections per unit area than copper microbumps.
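The roughly 1,000X figure follows directly from the pitch difference: on a square grid, connection density goes as the inverse square of the pitch. A quick sketch with illustrative pitches (the specific numbers are assumptions, not vendor specifications):

```python
# Density sketch: on a square grid, connections per mm^2 = (1000 / pitch_um)^2.
# The pitches below are illustrative; a ~1,000x gain falls out of shrinking
# from tens of microns (microbumps) to ~1 um (hybrid bonds).

def connections_per_mm2(pitch_um):
    return (1000.0 / pitch_um) ** 2

microbump = connections_per_mm2(36.0)   # fine-pitch microbump (assumed)
hybrid = connections_per_mm2(1.0)       # aggressive hybrid-bond pitch (assumed)
print(f"microbump: {microbump:,.0f}/mm^2, hybrid bond: {hybrid:,.0f}/mm^2, "
      f"ratio ~{hybrid / microbump:.0f}x")
```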
Wafer-to-wafer (W2W) hybrid bonding is more mature than die-to-wafer (D2W) hybrid bonding. “Alignment with die-to-wafer is significantly more complicated, because you are managing the position of the four corners of the die rather than the overall position of two wafers,” said Thomas Uhrmann, director of business development at EV Group. Wafer-to-wafer bonding has been most commonly used to bond pixel arrays to underlying chips in camera image sensors. “Hybrid bonding was a game changer in 2010 for image sensors. YMTC was the first NAND supplier to do hybrid bonding. In fact, most of the NAND flash companies doing hybrid bonding today first had experience with hybrid bonding in image sensors,” he added.
Fig. 2: Interconnect pitch and bandwidth for different levels of integration. Source: EV Group
Hybrid bonding’s key process steps include electroplating (ECD), CMP, plasma activation, alignment, bonding, singulation, and annealing. And though these tools are mature, having been used, for instance, to fabricate dual damascene copper interconnects and for flip-chip bonding, the processes need to be refined to meet hybrid bonding’s needs. These include <100nm alignment accuracy, new levels of cleanliness in chip-to-wafer bonding and singulation tools, exceptional CMP planarity with 0.5nm RMS roughness, and plating tuned for optimal bonding.
And while fabs are bonding nearly completed devices to one another, chipmakers already are looking forward to using hybrid bonding at the transistor level, allowing combinations such as GaN on silicon.
“When you start getting to the point where you’re combining transistors using hybrid bonding, that gets really interesting, because now you’re at a much tighter pitch than what we’re looking at for the packaging,” said Dean Freeman, an industry analyst. “Intel and others have done work combining GaN with silicon, which is very interesting. It’s a great opportunity for RF in communications devices, because now, you’ve got the logic combined with the speed of GaN — or eventually silicon carbide, or maybe even another material — to do the communications aspect of it into the terahertz wavelength range, which then starts to blow the millimeter wave out of the water that we’ve got with current technology for 5G.”
Advanced packaging
The grand transition from SoCs to multi-chip packages and systems really shifts the performance, power, and cost metrics from the chip to the system. “Performance issues are no longer just a silicon issue,” said Freeman. “It’s now into the packaging as to how do we stack these chiplets and how do we manage to get the heat out? And power management always seems to be our Achilles heel.”
Heterogeneous integration refers to the integration of different device technologies, such as co-packaged optics with logic, 2.5D packages combining microprocessors and HBM, and 3D-ICs that can bond memory, logic, wide-bandgap devices, RF, etc. These changes “are also critical in bringing emerging applications into the mainstream by enhancing performance, reducing power requirements, and increasing cost effectiveness,” said Steven Hsu, vice president of technology development at UMC.
Mike Kelly, vice president of advanced package and technology integration at Amkor, said that 2.5D and 3D integrations will extend across all semiconductor applications. “The challenges will be differentiated somewhat, however, between low-cost applications and the highest performance markets. Lower-cost applications will require innovation for high volume.”
“The transition to chiplets implies high-bandwidth interfaces between these chiplets, and this is the driving force for advanced packages. High bandwidth and small die sizes require high signaling speeds, and often wide interface buses,” said Kelly. “The latter is putting considerable pressure on smaller die bumps with tighter bump pitches. This, in turn, requires more advanced equipment for achieving good alignments between the dies and the interconnections. High-accuracy placement, while still maintaining high throughput, is very important.” He added that high speeds require the industry to keep pushing toward lower-k dielectric materials.
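The link between bump pitch and bandwidth can be made concrete with a simple beachfront-style estimate: the number of interface lanes that fit along a die edge scales inversely with pitch, and aggregate bandwidth is lanes times the per-lane rate. All of the numbers in the sketch below are illustrative assumptions, not figures from Amkor or any particular die-to-die interface standard.

```python
# Beachfront sketch (assumed numbers): signal bumps along one die edge scale
# inversely with bump pitch, and aggregate bandwidth is roughly
# lanes x per-lane signaling rate.

edge_mm = 5.0                  # die edge available for the interface (assumed)
rows = 4                       # rows of interface bumps (assumed)
lane_gbps = 8.0                # per-lane signaling rate in Gb/s (assumed)

for pitch_um in (130, 55, 25):                     # bump pitches (assumed)
    lanes = int(edge_mm * 1000 / pitch_um) * rows
    print(f"{pitch_um} um pitch: {lanes} lanes, ~{lanes * lane_gbps / 8:.0f} GB/s per edge")
```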
How multi-die packages containing chiplets from different manufacturers will be assembled, when companies generally are not open to sharing data about their chips, is an open chiplet question. It may be resolved through mini-consortia, which are popping up throughout the industry. “It will be some big players, and then they’re going to push a certain type of platform or footprint, and between them they’ll make this work,” said Chip Greely, vice president of engineering at Promex Industries. “Then everyone else will be on the outside looking in to say, ‘How do I get in there?’ I’m envisioning three or four of these little consortia. And then the strongest company will take over at the end. But in the interim, you’ve got the chiplet idea, which can still be very functional. With flip chip we can easily put many chips into the same substrate, laid together with metal RDLs to connect all the interfaces, because the fundamentals of assembly — die attach, flip chip, and wire bond — haven’t changed.”
The line between front end and back end processes is not as clear as it once was. “The traditional boundary between FEOL and BEOL is blurring as 3D packaging, W2W/C2W bonding, and continued shrinking of inter-die interconnect packing density continue to gain traction,” said UMC’s Hsu. “This means FEOL and BEOL will be competing head-to-head in these highly contested segments, and indeed we have already seen foundries gradually expanding their services to include what are traditionally OSAT functions, especially in the advanced product segments. Taking a longer view, high integration of FEOL and BEOL will be a must to enable high performance systems, and this will have implications for the industry landscape further down the road.”
Dev Gupta, CTO of APSTL and chair of the Packaging Integration section of the International Roadmap for Devices and Systems (IRDS), cautions that any technology forecast of packaging trends should reflect on knowledge gained in the past. “About two-thirds of all the technologies used in advanced packaging today were invented at Motorola and Intel decades ago.” Gupta pointed to electroplated solder bump flip chip and core and core-less organic substrates, for which he holds patents. “Thermo-compression bonding was used in 1995 for robotic assembly of GaAs RF modules in cell phones,[2] and in 1998 build-up organic substrates entered high-volume production. Core-less organic substrates were in production in 2002 for servers.”
Gupta emphasized that the goal of advanced packaging for high-performance computing has been to minimize packaging penalties from parasitic capacitance primarily, but also resistance and inductance. “New directions should be pursued to minimize impact on thermomechanical stress and reliability,” he said.
Lihong Cao, senior director of engineering and technical marketing at ASE, examined the different market segments for fan-out package-on-package (FOPoP), fan-out chip-on-substrate (FOCoS), and FOCoS-bridge at the recent IEDM conference.[4] For high-density die-to-die connections, a bridge die enables 0.8µm line/space routing between dies, particularly in mobile packaging, high-performance computing, and AI/ML. She also highlighted the continued usefulness of FOPoP as a key platform for high-density integration in a compact form, for application processors, mobile antenna-in-package, and co-packaged silicon-photonics applications. The absence of a substrate eliminates its parasitic inductance and enables a thinner overall profile.
The greatest manufacturing challenges in fan-out packaging include die shift after molding, and warpage induced by the coefficient of thermal expansion (CTE) mismatch between materials. Amkor has qualified fan-out approaches with up to 6 redistribution layers (RDLs). Kelly does not forecast a need for more than 6 layers, but he does anticipate RDL lines and spaces going from today’s 2µm down to the 0.5µm to 0.8µm range. “Although the lithography technology needed for sub-micron has existed for decades, newer versions of lithography equipment designed for packaging applications that can handle high warpage may be required,” he said.
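The payoff from finer RDL geometries can be approximated with simple arithmetic: escape wires per millimeter of die edge per layer is roughly 1000 divided by the line-plus-space dimension in microns. The sketch below uses the line/space values and layer count mentioned above; it is an estimate of routing density only, not a claim about any qualified Amkor process.

```python
# Routing-density sketch: escape wires per mm of die edge per RDL layer is
# roughly 1000 / (line + space) with dimensions in microns.

layers = 6
for line_um, space_um in [(2.0, 2.0), (0.8, 0.8), (0.5, 0.5)]:
    per_layer = 1000 / (line_um + space_um)
    print(f"{line_um}/{space_um} um L/S: {per_layer:.0f} wires/mm per layer, "
          f"{per_layer * layers:.0f} across {layers} RDLs")
```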
In future years, silicon interposers may be replaced with organic interposers. “Despite their foothold in advanced packaging, Si interposers with Cu TSVs gradually will be supplanted by organic interposers for reasons related to cost (i.e., availability) and high-speed performance characteristics. In time, the minimum available feature sizes for organic interposers will be driven below 1µm for lines and spaces,” said Kelly.
Kelly also sees a need for 200mm wafer back-grinding and dicing equipment for SiC wafers as the industry increasingly adopts the larger wafers. “Most of the industry’s wafer bumping capacity is in 200mm and 300mm wafers. Prior to the recent introduction of SiC on 200mm wafers, it was extremely difficult to get a 150mm wafer with a flip chip bump,” he said.
Finally, the industry continues to make gradual improvements in the conductivity of thermal interface materials (TIMs) used between the chip package and heat sink, but there is a limit to how conductive these materials can be. About 90% of the heat from semiconductor packages escapes from the top. TIMs are polymer-based materials with solid filler particles (alumina or silver), with conductivity increasing with particle loading. However, Amkor’s Kelly notes that the thermal conductivity of these materials is typically limited to about 10W/mK in an FCBGA. He added that the industry is evaluating graphite-based TIMs. “Metal TIMs and solders, while having been used in packaging for many years, are penetrating a wider range of market segments where thermal management was historically less of a concern.”
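To put the conductivity numbers in perspective, the thermal resistance of a TIM layer is simply its thickness divided by conductivity times area. The die size, bond-line thickness, and the higher-conductivity material value in the sketch below are assumptions for illustration; only the up-to-10W/mK polymer figure comes from the text.

```python
# Thermal-resistance sketch (assumed dimensions): for a TIM layer,
# R_theta = thickness / (conductivity * area). Doubling the conductivity
# halves the TIM layer's contribution to the junction-to-heat-sink path.

die_area = 0.015 * 0.015        # 15 mm x 15 mm die, in m^2 (assumed)
bond_line = 50e-6               # 50 um TIM bond-line thickness (assumed)

for name, k in [("polymer TIM", 5.0), ("high-load polymer TIM", 10.0),
                ("graphite/metal TIM", 40.0)]:   # W/m*K, illustrative values
    r_theta = bond_line / (k * die_area)
    print(f"{name}: ~{r_theta:.3f} K/W across the TIM layer")
```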
Conclusion
As the industry increases its adoption of new interconnect materials, backside power delivery, hybrid bonding, and advanced packaging, much will be learned about the manufacturing details of these processes. In the meantime, small improvements to copper interconnects, such as removing TaN barriers at via bottoms in copper damascene, will continue until all avenues are exhausted, especially because the new integration schemes pose significant challenges.
References
Related Stories
How Far Will Copper Interconnects Scale?
Creative methods keep extending the performance of copper lines and vias.
Challenges In Backside Power Delivery
BPD boosts performance, but requires wafer bonding, substrate thinning, and possibly new interconnect metals.
Hybrid Bonding Moves Into The Fast Lane
Companies are speeding ahead to identify the most production-worthy processes for 3D chip stacking.