What’s Next In Advanced Packaging

Wave of new options under development as scaling runs out of steam.


Packaging houses are readying the next wave of advanced IC packages, hoping to gain a bigger foothold in the race to develop next-generation chip designs.

At a recent event, ASE, Leti/STMicroelectronics, TSMC and others described some of their new and advanced IC packaging technologies, which span various product categories, such as 2.5D, 3D and fan-out. Some new packaging technologies are trickling out in the market, while others are still in R&D. Some will never take off for technical and cost reasons.

Some manufacturers are expanding their packaging efforts in other ways. For example, Samsung’s semiconductor division recently acquired the panel-level fan-out unit from another affiliate, Samsung Electro-Mechanics (SEMCO). With that move, Samsung’s semi unit will expand its efforts in fan-out, propelling it into the panel-level fan-out market.

Across the industry, packaging is playing a bigger role and becoming a more viable option for developing new system-level chip designs. As a result, chipmakers and packaging houses are expanding their efforts.

For decades, the IC industry relied on chip scaling and innovative architectures for new devices. In chip scaling, the idea is to pack more transistors on a monolithic die or system-on-a-chip (SoC) at each process node, enabling faster chips with a lower cost per transistor. But traditional chip scaling is becoming more difficult and expensive at each node.

While scaling remains an option for new designs, the industry is searching for alternatives. Another way to get the benefits of scaling is by putting multiple and advanced chips in an advanced package, also known as heterogeneous integration.

“In the old days, we tried to squeeze everything into one monolithic chip. But right now, it’s getting so expensive and the chip is getting so big,” said Calvin Cheung, vice president of engineering at ASE, in an interview at the recent IEEE Electronic Components and Technology Conference (ECTC). “Heterogeneous integration tackles the age-old issue by combining chips with different process nodes and technologies. The die-to-die interconnect distances are so close that it mimics the functional block interconnect distances inside an SoC.”

There are several ways to implement a chip design using heterogeneous integration, but this concept isn’t exactly new. Advanced packaging has been used in a limited form for decades in niche applications. The issue is cost, as the technology remains too expensive for many applications.

At ECTC, several companies described new packaging technologies that aim to address cost and other challenges in the arena. Among them:

  • ASE described more details about a high-density fan-out technology that supports high bandwidth memory (HBM).
  • STMicroelectronics and Leti jointly described a 3D packaging technology using chiplets. In chiplets, the idea is that you have a menu of modular chips, or chiplets, in a library. Then, you integrate them in a package using a die-to-die interconnect scheme.
  • TSMC provided more details about its next-generation fan-out and 3D technologies.

New advancements in panel-level fan-out, flip-chip and wirebond were also presented at ECTC.

Wirebond and flip-chip
At one time, IC packaging took a backseat in the semiconductor industry. The package was simply there to house a chip at the lowest possible cost. That’s no longer the case. Advancements in packaging enable chips with smaller form factors. They also pave the way toward new and advanced forms of heterogeneous integration.

Today, a multitude of IC package types are targeted for different applications. “I like to break it down into mobile and high performance,” said Jan Vardaman, president of TechSearch International. “Mobile has a different set of packages. Mobile has to hit that steep ramp and has to be extremely cost-sensitive. Thin is also important, so you have more room for the battery.”

High-performance applications require different packages with more I/Os. For both markets, there is no one package type that can meet all requirements. “Different people have different approaches,” Vardaman said. “There are many ways to get to the top of the mountain.”

Another way to segment the packaging market is by interconnect type, which includes wirebond, flip-chip, wafer-level packaging and through-silicon vias (TSVs).

Some 75% to 80% of today’s packages are based on wire bonding, which is an older technology, according to TechSearch. Developed in the 1950s, a wire bonder stitches one chip to another chip or substrate using tiny wires. Wire bonding is used for low-cost legacy packages, mid-range packages and memory die stacking.

With wire bonding, the industry can stack and stitch together 16 flash memory dies, with 32-die stacks in the works, according to Kulicke & Soffa (K&S). “To keep up with the lower profiles and high-performance demands of modern memory applications, higher I/O counts, more die stacks and the use of longer overhang structures is inevitable,” said Basil Milton, a senior staff engineer at K&S. “These requirements generate new challenges for wirebond process engineers.”

To extend the capabilities of wirebond, the industry requires systems capable of finer looping and stitch-bond formation. Today, mainstream loop heights for wirebond are 300 to 400µm (15 to 20X the wire diameter). At ECTC, K&S presented a paper demonstrating loop heights of just 2X the wire diameter, or roughly 35µm.
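As a rough sanity check, the sketch below relates loop height to wire diameter for two common gold-wire sizes (0.7 mil and 0.8 mil, roughly 18µm and 20µm). Those diameters are assumptions for illustration only; the article does not state which wire the K&S paper used.

```python
# Quick sanity check: loop height expressed as a multiple of wire diameter.
# The wire diameters (0.7 mil and 0.8 mil) are assumed for illustration.

MIL_TO_UM = 25.4  # 1 mil = 25.4 um

def loop_height_um(wire_diameter_um: float, multiplier: float) -> float:
    """Loop height as a multiple of the bond wire diameter."""
    return wire_diameter_um * multiplier

for wire_mil in (0.7, 0.8):
    d = wire_mil * MIL_TO_UM
    print(f"{wire_mil} mil wire ({d:.1f} um):")
    print(f"  15-20x loop: {loop_height_um(d, 15):.0f}-{loop_height_um(d, 20):.0f} um")
    print(f"  2x loop:     {loop_height_um(d, 2):.0f} um")
```

With those assumed diameters, 15X to 20X works out to roughly 270 to 400µm, and 2X lands in the 35 to 40µm range reported at ECTC.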

For processors and other chips, though, wirebond doesn’t provide enough I/Os. To increase that number, the next step up is flip-chip. Fan-out sits in the middle of the I/O hierarchy, while 2.5D/3D is at the high end.

Fig. 1: Package technology vs. application. Source: ASE

Commercialized in the 1960s, flip-chip is still widely used today. In flip-chip, a sea of tiny copper bumps is formed on top of a chip. The device is flipped and mounted on a separate die or board. The bumps land on copper pads, forming an electrical connection.

Flip-chip is used for many package types. At ECTC, for example, JCET described details about its ongoing efforts to develop advanced packages using thin organic substrates and flip-chip.

Still in the works, JCET’s technology enables single- and multi-die packages, including 2.5D-like configurations without an interposer. “The salient feature of an ultra-thin substrate is its thickness, which can be an order of magnitude thinner than normal flip-chip laminate or build-up substrates,” said Nokibul Islam, director of group technology strategy at JCET.

Fan-out, 2.5D/3D and chiplets
After flip-chip, fan-out is next in the I/O hierarchy. Fan-out recently gained attention when Apple began using TSMC’s InFO fan-out package for its iPhones. This package integrates Apple’s application processor and third-party memory in the same unit, enabling more I/Os than other package types.

Amkor, ASE, JCET and others also provide fan-out. Fan-out is used to package chips for automotive, mobile devices and other applications. Fan-out doesn’t require an interposer, making it cheaper than 2.5D/3D.

The technology is classified as a wafer-level package, where the dies are packaged while on a wafer. “In FOWLP, chips are embedded inside epoxy molding compound, and then high-density redistribution layers (RDLs) and solder balls are fabricated on the wafer surface to produce a reconstituted wafer,” said Kim Yess, technology director for the Wafer Level Packaging Materials business unit at Brewer Science. “As the industry pushes fan-out to the limit, stress plays a big factor into it. You have stress and warpage.”

Fan-out is split into two segments, low-density and high-density. Low-density fan-out (less than 500 I/Os) is used for power management ICs, codecs and other devices, while high-density (more than 500 I/Os) is targeted for servers and smartphones.

Going forward, fan-out is extending its reach in both markets. “You are seeing a push to extend fan-out,” said Kim Arnold, executive director for wafer level packaging materials at Brewer Science. “The industry is finding ways for fan-out to offer the performance needed. The industry knows how to run the process. They also know the cost structure.”

At the high end, for example, ASE and TSMC are working on fan-out packages that support HBM, which addresses a big challenge in today’s systems—the memory wall.

In systems, data moves back and forth between the processor and memory. But this exchange adds latency and consumes power, a bottleneck sometimes called the memory wall. DRAM, the main memory in systems, is one of the main culprits. Over the last decade, DRAM data rates have failed to keep pace with memory bandwidth requirements.

One solution is high-bandwidth memory (HBM). Targeted for high-end systems, HBM stacks DRAM dies on top of each other and connects them with TSVs, enabling more I/Os and bandwidth.
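For a sense of scale, the sketch below tallies the peak bandwidth of a single stack from its interface width and per-pin data rate, using baseline HBM2 figures (a 1,024-bit interface at 2.0 Gb/s per pin). These are standard published specs used here for illustration, not numbers from the ECTC papers.

```python
# Peak bandwidth of one memory stack = interface width x per-pin data rate.
# Baseline HBM2 figures are used; faster HBM grades raise the per-pin rate.

def bandwidth_gbytes(interface_bits: int, pin_rate_gbps: float) -> float:
    return interface_bits * pin_rate_gbps / 8  # convert bits to bytes

hbm2 = bandwidth_gbytes(1024, 2.0)
print(f"One HBM2 stack:        {hbm2:.0f} GB/s")   # 256 GB/s

# A single 64-bit DDR4-3200 channel, for comparison (illustrative)
ddr4 = bandwidth_gbytes(64, 3.2)
print(f"One DDR4-3200 channel: {ddr4:.1f} GB/s")   # 25.6 GB/s
```

The order-of-magnitude gap over a conventional DRAM channel is what makes the wide, in-package HBM interface attractive despite its cost.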

Typically, HBM is integrated in a 2.5D package. In 2.5D, dies are stacked or placed side-by-side on top of an interposer, which incorporates TSVs. The interposer acts as the bridge between the chips and a board.

Generally, though, 2.5D with HBM is relegated to high-end applications. The big issue with 2.5D is cost. It’s too expensive for most applications.

To help lower the cost, the industry is working on fan-out packages with HBM. In a paper at ECTC, for example, ASE described a fan-out technology that integrates an ASIC with two HBM2 dies. For this, ASE is using a hybrid fan-out package called Fan Out Chip on Substrate (FoCoS).

ASE’s current FoCoS package is based on a process called chip-first. In contrast, the HBM version of FoCoS is a chip-last process, enabling a 30mm x 30mm package size with 2μm line/space and a 10μm via size. It has four RDL layers with stacked vias.

Fan-out with HBM has several advantages over 2.5D. “The electrical performance is better than a 2.5D interposer solution,” said John Hunt, senior director of engineering at ASE. “You have less insertion loss, better impedance control and lower warpage than 2.5D. It’s a lower cost solution with better electrical performance. The difference is that 2.5D can do finer lines and spaces. But we can route the HBM2 dies with our current 2μm line and space.”

Fig. 2: Critical signals in an HPC device. Source: ASE
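To see why 2µm line/space is workable for routing HBM, the back-of-the-envelope sketch below converts line/space into traces per millimeter of routing width and estimates the escape width needed for the 1,024 data lines of one HBM2 stack. It ignores power, ground, command/address and via keep-outs, so it is illustrative only.

```python
# Rough RDL routing-density check for a fan-out package.
# How many traces per mm does 2 um line / 2 um space allow, and how much
# routing width would one HBM2 stack's 1,024 data lines need per RDL layer?

def traces_per_mm(line_um: float, space_um: float) -> float:
    return 1000.0 / (line_um + space_um)

density = traces_per_mm(2, 2)      # 250 traces per mm per layer
hbm2_data_lines = 1024

for layers in (1, 2, 4):
    width_mm = hbm2_data_lines / (density * layers)
    print(f"{layers} RDL layer(s): ~{width_mm:.1f} mm of routing width")
# 1 layer: ~4.1 mm, 2 layers: ~2.0 mm, 4 layers: ~1.0 mm
```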

Fan-out with HBM could take share away from 2.5D, but it won’t completely displace it. “When you look at what’s beyond fan-out, you see 2.5D or 3D. You find instances where people require an interposer. They need the performance. The same thing is true for 3D. You have places where 3D performance is required,” Brewer’s Arnold said.

Others also are developing new versions of fan-out. At ECTC, TSMC disclosed details about its next-generation fan-out technology, called 3D-MiM (MUST-in-MUST).

TSMC’s current fan-out technology is based on a package-on-package (PoP) scheme. In PoP, two dies (or more) are housed in the same package and connected using various interconnect technologies.

In contrast, 3D-MiM resembles an embedded die technology. “3D-MiM technology utilizes a more simplified architecture,” said An-Jhih Su, a researcher from TSMC. “First, there are no wafer bumps and flip-chip bonding during the 3D-MiM fan-out integration process, which reduces the assembly complexity and avoids the chip-package-interaction reliability challenges in a flip chip assembly. Second, a much thinner package profile is achieved for improved form factor, thermal, and electrical performance.”

Still in R&D, 3D-MiM consists of a substrate with separate tiers. A die or die stack can be embedded in each tier and connected to the board via a link.

In a three-tier configuration, for example, the top tier consists of an SoC. The middle tier consists of 8 memory dies, which are embedded and staggered in the substrate. The bottom tier also has 8 memory dies. All told, the package consists of one SoC and 16 memory dies in a footprint of 15 x 15mm².

Embedded die packaging isn’t new. Generally, the technology presents various manufacturing and cost challenges. Bonding the tiers and aligning the dies are among the challenges.

Fan-out, meanwhile, is moving in other directions as well. For example, after years of R&D, panel-level fan-out technology is finally beginning to ramp up in the market, at least in limited volumes for a few vendors.

There are various challenges here, including a lack of panel standards and gaps in the ecosystem. “There are a lot of new materials and equipment with a focus on panel-level processing entering the market,” said Tanja Braun, deputy group manager at Fraunhofer.

Chiplet mania
After years of modest success in developing 3D-IC packages, the industry is launching new versions of the technology. In 3D-IC, the idea is to stack logic dies on each other or memory on logic. The dies are connected using an active interposer.

Still in the early stages of development, chiplets are another form of 3D-ICs. There are various ways to integrate chiplets. For example, instead of a big SoC in a package, you break the device up into smaller dies and connect them.

“Chiplets enable heterogeneous integration of CMOS with non-CMOS devices,” said Ajit Paranjpe, CTO at Veeco. “For example, at ECTC, a few papers highlighted the benefits of moving the voltage regulators off the main CMOS die, especially for server chips that have a sea of cores and require several hundred watts of power. Moving the voltage regulators off-chip can reduce the die size of the expensive leading-edge CMOS (i.e. 10nm and 7nm) by 20% to 30%.”
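As an aside on why power delivery looms so large for these server die, the sketch below works out the current and resistive loss for a few supply voltages under made-up assumptions (a 300W chip and a 0.2mΩ delivery path). It is not from the ECTC papers; it simply illustrates why converting voltage close to the die, for example in a separate regulator chiplet, is attractive.

```python
# Power-delivery arithmetic: current scales as P/V, resistive loss as I^2 * R.
# All numbers below are illustrative assumptions.

def delivery_loss_w(power_w: float, supply_v: float, path_res_ohm: float) -> float:
    current = power_w / supply_v          # amps drawn by the die
    return current ** 2 * path_res_ohm    # I^2 * R loss in the delivery path

P = 300.0      # watts, a "several hundred watt" server chip (assumed)
R = 0.0002     # 0.2 milliohm delivery-path resistance (assumed)

for v in (12.0, 1.8, 0.8):
    i = P / v
    print(f"Deliver at {v:>4.1f} V: {i:6.0f} A, path loss ~{delivery_loss_w(P, v, R):.1f} W")
# Carrying power at a higher voltage and converting near the die keeps the
# current, and the I^2*R loss, manageable.
```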

The idea of putting together different modules like LEGOs has been talked about for years. So far, only Marvell has used this concept commercially, and that was exclusively for its own chips based on what it calls a modular chip (MoChi) architecture.

Now, government agencies, industry groups and companies are jumping on the chiplet bandwagon. The latest is STMicroelectronics and Leti, which jointly presented a paper at ECTC on a 3D system architecture using chiplets.

STMicroelectronics and Leti developed six multiprocessor chiplets based on a 28nm FD-SOI technology. The devices were placed on a 65nm active interposer and connected using copper pillars.

“These copper pillars offer a large chiplet-to-chiplet communication bandwidth through the interposer, with a reduced impact on the chiplet floorplan,” said Perceval Coudrain, a researcher at CEA-Leti. “This object integrates a total of 96 cores, offering a low-power computing fabric with a cache-coherent architecture and wide voltage ranges.”

Meanwhile, TSMC described its latest efforts in the area, which it calls System on Integrated Chips (SoIC) for 3D heterogeneous integration.

TSMC demonstrated the SoIC concept for a fan-out package. In TSMC’s InFO package, a memory die is on top, while a single SoC is situated on the bottom.

In TSMC’s SoIC technology, there could be three smaller SoCs or chiplets instead of one big SoC in the package. One SoC is on top and two are on the bottom, which are joined using a bonding process.

The idea is to break the big SoC into smaller chiplets, which presumably have a lower cost and better yield than a monolithic die. “Compared to the typical 3D-IC PoP, the SoIC-embedded InFO_PoP offers higher interconnect I/O bonding density, lower power consumption and a thinner package profile,” said TSMC’s F.C. Chen in a paper.
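The yield argument can be made concrete with a first-order Poisson yield model, Y = exp(-A·D0). The die areas and defect density in the sketch below are made-up, illustrative numbers, not data from TSMC or the ECTC papers.

```python
# First-order illustration of why splitting a large SoC into chiplets can
# improve yield and cost. Uses a simple Poisson yield model, Y = exp(-A * D0).
import math

def poisson_yield(area_cm2: float, d0_per_cm2: float) -> float:
    return math.exp(-area_cm2 * d0_per_cm2)

D0 = 0.1  # defects per cm^2 (assumed)

# One 600 mm^2 monolithic SoC vs. three 200 mm^2 chiplets
mono_area, chiplet_area = 6.0, 2.0          # cm^2
mono_yield = poisson_yield(mono_area, D0)
chiplet_yield = poisson_yield(chiplet_area, D0)

# Silicon consumed per good unit, assuming known-good-die test before assembly
mono_cost = mono_area / mono_yield
chiplet_cost = 3 * chiplet_area / chiplet_yield

print(f"Monolithic yield: {mono_yield:.2f}, silicon per good SoC: {mono_cost:.1f} cm^2")
print(f"Chiplet yield:    {chiplet_yield:.2f}, silicon per good set: {chiplet_cost:.1f} cm^2")
# ~0.55 vs ~0.82 per die; ~10.9 cm^2 vs ~7.3 cm^2 of silicon per good unit
```

Under these assumed numbers the chiplet version consumes roughly a third less silicon per good unit, which is the intuition behind the cost and yield claim, before accounting for the added packaging and test costs.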

Needless to say, the industry faces some challenges with chiplets. “Given all the advantages, we would expect chiplet adoption to happen, but the primary question is at what pace? And this will be driven primarily by cost. So it will most likely be implemented primarily in high-end applications initially, and adopted more generally as costs are driven down,” said Warren Flack, vice president of lithography applications at Veeco’s Ultratech Business unit.

Integrating chiplets into a package is easier said than done. “In general, the more individual chiplets that are used to complete a single package, the greater the lithography challenges. This involves interconnect overlay, TSV processes for stacked chip interconnect, and productivity (system throughput) to provide the needed technical solution at affordable costs,” Flack said.

There are other challenges. “Smaller metal features for 3D stacking also drives the detection of smaller defects and control of tighter dimensions,” said Stephen Hiebert, senior director of marketing at KLA. “For heterogeneous integration, the quality requirements for each device being integrated are increasing rapidly. More demanding requirements for accurate die screening are emerging as the number of and the value of ICs integrated into a system-in-package increases. For wafer-level and die-level inspections, small or subtle defects that may have been previously acceptable are becoming unacceptable when these die get incorporated in a complex, multi-device package. One bad die for system-in-package can kill the entire heterogeneous package.”

Conclusion
For the next wave of devices, IDMs and design houses have several options. Scaling remains on the list, but it’s no longer the only option.

“From an economic standpoint, how many companies can afford silicon at the bleeding edge nowadays? That number is shrinking. For the very high-performance markets, there is always going to be that need,” said Walter Ng, vice president of business management at UMC. “But everyone else has slowed down quite a bit. You can look at the need for advanced packaging in multiple places.”

3 comments

Semi_Eng_Fan_4321 says:

“Still in R&D, 3D-MiM consists of a substrate with separate tiers. A die or die stack can be embedded in each tier and connected to the board via a link.”

What kind of “link” is this? They say no flip-chip or wafer bumping… Is it Cu Pillar? Don’t they still need bumps? I seem to be misunderstanding something…

Mark LaPedus says:

My apologies here. Here’s what TSMC is doing: “Each pre-fabricated and tested memory module acts as a known good die to be sequentially integrated to the system by WLSI process,” according to the TSMC paper.

Here’s what you are asking: “Interconnects are extended from I/Os of memory module to I/Os of SoC chip through vertical via and horizontal redistribution layers. BGA or C4 solder bumps are then placed on top of redistribution layer to complete the wafer fabrication process,” according to the paper.

Dr. Dev Gupta says:

Mark, in your articles on advanced packaging you seem to ignore flip chip while discussing its more recent applications like 2.5D or HBM stacks. While I am happy to find from your article above that you are no longer doing so, you have got the history of flip chip wrong.

While the original solder bumped version of FC has indeed been around since the ’60s, the u Pillar Sn capped metal bumps along with the in situ thermocompression bonding are ‘only’ a quarter century old. I developed the u Pillar FC technology ( Bump geometry, bumping process, substrates, bonding .. ) at Motorola along w/ all other types of Flip Chip technologies in current use. By ’95 we had also developed a Robotic Line in PHX to do HVM assy. of uPillar ( @ pitch 40 um ) FC by TCB for GaAs PAs used in the Motorola Flip Phones. Replacing WB by the uP FC technology made GaAs affordable for Mobile Phones and opened the door to higher Bandwidths thus enabling Net Surfing and Video. The larger solder capped Cu pillar bumps that Intel has used since 2005 have similar antecedents.

Glad to find from your article that someone at JCET ( which used to be STATS ) have picked up on the FC on thin substrates idea, because at a Conf. a few years back one of their people ( w/ just a MS in MechE ) was making some outrageous apples vs oranges comparison between the electrical performance of FC vs FO WLPs.

So don’t get too carried away w/ all the claims about FO WLPs because FC on Coreless Substrates ( which I developed at Intel over 20 years ago ) avoids most of the parasitics of a substrate w/ thick Core but does NOT have all those Die Shift issues nor does it require the kludgy point to point corrective scheme ( which was OK for the original application i.e. Solar Panels w/ low I/O density ) that actually requires some sort of Cu uPillar for the Vision system to work.

After a fair comparison the main advantage of FO WLPs over FC is mainly COMMERCIAL, the Foundries or OSATs do not have to fork out any of their PROFIT to Substrate vendors. Till their last SoC ( 855 ), QCOMM has resisted the urge to switch from MCEP ( which starts w/ the SoC on a Coreless FC ) but for the 865 they have decided to go to the Big Bad Wolf ( even though they are not yet using EUV for their ‘7 nm’ ) so the scenario of a Captive Customer ( Apple ) being forced to choose FO WLP may play out once again.

With so many new players in Adv. Pkg. there is bound to be some claims and counterclaims but it is getting ridiculous. I happen to chair the Packaging chapter of the IEEE IRDS Roadmap ( the original, NOT the new fangled HIR, there is nothing NEW about the term Heterogeneous Integr. either ) and in our next release we will try to clear up all the confusion from a theoretical standpoint

Cheers
