Bridges Vs. Interposers

Momentum growing for low-cost alternatives to interposers as a way of reducing overall development costs.

The number of technology options continues to grow for advanced packaging, including new and different ways to incorporate so-called silicon bridges in products.

For some time, Intel has offered a silicon bridge technology called Embedded Multi-die Interconnect Bridge (EMIB), which makes use of a tiny piece of silicon with routing layers that connects one chip to another in an IC package. In addition, Amkor, Imec and Samsung are separately developing silicon bridge technologies for packaging.

Each technology is slightly different, but the idea is similar. Silicon bridges serve as an in-package interconnect for multi-die packages. They also are positioned as an alternative to 2.5D packages using silicon interposers. Like 2.5D, silicon bridges provide more I/Os in packages, which can address the memory bandwidth bottleneck challenges in systems.

In 2.5D, dies are stacked or placed side-by-side on top of an interposer, which incorporates through-silicon vias (TSVs). The interposer acts as the bridge between the chips and a board, which in turn provides more I/O and bandwidth in packages.


Fig. 1: EMIB implementation (silicon bridge). Source: Intel


Fig. 2: FPGA + HBM in 2.5D package with interposer. Source: Xilinx

The problem is that the cost of the interposer is relatively high, which limits 2.5D to high-end applications. So for some time, the industry has been developing a number of new and advanced packaging types in an effort to fill the gap for both the midrange and high-end markets. The dizzying array of packaging options includes fan-out, system-in-package (SIP), silicon bridges and even some new interposer technologies.

No one package type can meet all requirements, so customers will likely use many technologies. “There are many options that can meet the needs,” said Jan Vardaman, president of TechSearch International. “Each company will select the options that best fit their supply chain.”

Meanwhile, silicon bridge technology, a relative newcomer in the landscape, is gaining more attention. This technology isn’t expected to displace the other packaging types, although Intel’s EMIB and potentially other solutions are intriguing. “EMIB is an alternative solution to a silicon interposer for heterogeneous integration. It provides a fairly similar bandwidth,” said Babak Sabi, vice president and general manager of assembly test technology development at Intel. “The attractiveness of EMIB, at least for us, is that we only use silicon in the areas where we are trying to connect the two dies together. So it’s a fairly small silicon area that is on a substrate, which has all of the chips on it.”

By heterogeneous integration, Sabi is referring to the concept of putting multiple and different chips in the same package. The term heterogeneous integration has other implications, as well. For example, chipmakers continue to use traditional IC scaling to move from one node to the next, but the benefits are shrinking and the costs are escalating. Another way to get the benefits of scaling is by moving to more advanced forms of heterogeneous integration. Intel sometimes calls this package scaling.

“For many years, package scaling didn’t matter. It didn’t give you anything,” Sabi said. “But with heterogeneous integration, package scaling is going to matter a lot. We are just at the beginning of it. It does give you performance improvement and some functions that were not available previously.”

The challenges
Heterogeneous integration can be used in different ways in various markets. It is also used to address some major challenges in systems.

For example, the technology is a key enabler in high-end servers. For some time, systems have been struggling to keep pace amid the explosion of data in high-end environments. “Next-generation platforms are evolving rapidly to keep pace with emerging system trends driven by an explosion of applications, such as data center capabilities, Internet of Things (IoT), 400G to terabit networking, optical transport, 5G wireless and 8K video,” said Manish Deo, senior product marketing manager at the Programmable Solutions group at Intel, in a recent paper. “Next-generation data center workloads demand increasingly higher computational capabilities, flexibility, and power efficiencies. (But the demands are) outstripping the capabilities of today’s general-purpose servers. Simply put, the devices used to build these next-generation platforms must do more, be faster, take less printed circuit board (PCB) real estate, and burn less energy, all at the same time.”

In response to the challenges, the IC industry has developed a new wave of fast FPGAs, GPUs and processors at the 16nm/14nm logic nodes with 10nm/7nm technologies in the works.

That still doesn’t address a big and ongoing challenge in systems—a memory bandwidth bottleneck. The main culprit is DRAM, which serves as the main memory in systems. Over the last decade, DRAM data rates have not kept pace with memory bandwidth requirements.

The current DRAM interface technology, DDR4, provides less than a 2X boost in data rates over the previous generation. Yet, in just one example, Ethernet port speeds have increased by 10X over the last decade, according to Xilinx.

There is a solution. The industry has been shipping a 3D DRAM technology called high bandwidth memory (HBM). The latest version is HBM2.

Targeted for high-end systems, HBM stacks DRAM dies on top of each other, enabling more I/Os. For example, Samsung’s latest HBM2 technology consists of eight 8Gbit DRAM dies, which are stacked and connected using 5,000 TSVs. In total, it enables 307GBps of data bandwidth. In comparison, the maximum bandwidth of four DDR4 DIMMs is 85.2GBps, according to Xilinx.
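
Those headline numbers follow directly from each interface’s width and data rate. Below is a minimal back-of-envelope sketch in Python; the 1024-bit HBM2 interface width, the 2.4Gbps per-pin rate and the DDR4-2666 speed grade are illustrative assumptions rather than figures taken from the article.

```python
# Back-of-envelope check of the bandwidth figures quoted above.
# Assumptions (not from the article): one HBM2 stack exposes a 1024-bit
# interface at 2.4Gbps per pin; each DDR4-2666 DIMM has a 64-bit bus.

def peak_bandwidth_gbps(bus_width_bits: int, rate_gtps: float) -> float:
    """Peak bandwidth in GB/s for a parallel memory interface."""
    return bus_width_bits * rate_gtps / 8  # bits per transfer -> bytes

hbm2_stack = peak_bandwidth_gbps(1024, 2.4)      # ~307 GB/s
ddr4_dimms = 4 * peak_bandwidth_gbps(64, 2.666)  # ~85 GB/s

print(f"HBM2 stack:    {hbm2_stack:.1f} GB/s")
print(f"4x DDR4 DIMMs: {ddr4_dimms:.1f} GB/s")
```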

HBM is just one way to boost the performance of DRAMs in packages. “We are now beginning to see increased participation of the DRAM memory segment with various advanced packaging platforms. Leading-edge DRAM providers are beginning to transition from wire-bond to copper pillar solutions. We are also witnessing an increased adoption of TSV for high bandwidth memory solutions,” said Manish Ranjan, managing director of Lam Research’s Advanced Packaging Customer Operations.

This is only part of the equation. Another challenge is how to integrate these devices in a system. There are several options on the table. Among them are:

  • Place discrete chips on a board.
  • Integrate the chips in an advanced package. The options include 2.5D, bridges, fan-out and multi-chip modules (MCMs).

Generally, the industry is moving from the discrete solution towards integrating multiple dies in the same package. Placing discrete chips on a board takes up too much space and is inefficient in terms of moving data from one device to another.


Fig. 3: Discrete Component Integration using PCB. Source: Intel

By putting multiple chips in a package, OEMs can achieve more functionality in a smaller form-factor at a better cost. The idea is to bring the chips closer together, enabling more bandwidth.

So, what’s the best packaging solution? There is no right answer. “The choice of these platforms depends on the end product requirements,” Ranjan said. “Several different technology platforms, such as multi-chip packages, silicon interposers and multi-die interconnect bridges, provide solution options for heterogeneous integration.”

The options
For years, OEMs have used MCMs in systems. They integrate several chips in a module. Traditional MCMs are still used, but they tend to be large and bulky.

For some time, the industry also has been shipping 2.5D technology with various types of interposers. In one configuration, an FPGA and two HBM2 stacks are placed side-by-side on top of an interposer.


Fig. 4: 2.5D with TSVs and high-bandwidth memory. Source: Samsung

Typically, 2.5D is used for high-end ASICs, FPGAs, GPUs and memory cubes. “(2.5D/3D) seems to be taking off,” said Gary Patton, CTO at GlobalFoundries, in a recent interview. “If I look at our ASIC design wins over the last year on 14nm, roughly 40% of them have been more than just a wafer. They have included some level of advanced packaging like 2.5D and 3D.”

But 2.5D/3D technologies are relatively expensive, due in part to the cost of the interposer, which limits the market to high-end applications. Plus, there are some die size issues. For example, a 2.5D-based FPGA has a die size of around 800mm². This is close to the maximum reticle field size of a lithography system, which caps the die size at about 30mm x 30mm.

If the package exceeds those specs, it may require a process called reticle stitching. “If you look at packages, they are different and large in size. It may not fit on that 30mm by 30mm reticle size that you have,” Intel’s Sabi said. “That means you have to connect two reticles together. There is a way of doing it, and it’s a little expensive and hard to do.”

Looking to solve these and other issues, Intel some time ago introduced EMIB, a silicon bridge technology. With EMIB, a tiny piece of silicon or a bridge is embedded in a substrate. The bridge, which has routing layers, serves as the interconnect between two dies in a package.

In one example, Intel sells a 14nm FPGA. It has three transceivers on each end for a total of six. Each transceiver is connected to the FPGA using EMIB. In this case, the FPGA uses six bridges.

Using EMIB, the FPGA could also support HBM. “It takes a lot less silicon area (than an interposer),” Sabi said. “You can put as many bridges as you wish on a substrate. That’s another advantage of EMIB. It doesn’t have any reticle size limitation like a silicon interposer.”

EMIB itself is a form of embedded die packaging. The technology uses a substrate with redistribution layers (RDLs). RDLs are the copper metal connection lines or traces that electrically connect one part of the package to another. RDLs are measured by line and space, which refer to the width of a metal trace and the spacing between adjacent traces.
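
As a rough illustration of what line and space imply for routing capacity, the sketch below counts how many parallel traces fit in a millimeter of one RDL layer. The geometries used are examples only, not Intel’s actual EMIB design rules.

```python
# Illustrative only: how RDL line/space translates into routing capacity.
# One routing layer fits 1000um / (line + space) parallel traces per mm.
# The geometries below are examples, not Intel's actual EMIB design rules.

def traces_per_mm(line_um: float, space_um: float) -> float:
    """Parallel traces per millimeter of cross-section on one RDL layer."""
    return 1000.0 / (line_um + space_um)

for line_um, space_um in [(8.0, 8.0), (2.0, 2.0), (1.0, 1.0)]:
    print(f"{line_um:g}/{space_um:g}um L/S: "
          f"{traces_per_mm(line_um, space_um):.1f} traces/mm per layer")
```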

In simple terms, a silicon bridge is embedded in the core of a substrate. Then, in a separate process, tiny bumps or pillars are formed on the dies. Using a flip-chip process, the dies are flipped and connected on the substrate.

“It fits into a normal manufacturing process. Once we get the package, the EMIB bridges are embedded. Then it goes through assembly and test,” Sabi said. “Of course, the pitches are a little bit different from normal assembly. It’s a much smaller pitch.”

The EMIB-based silicon itself is not expensive, although the cost of the entire process is difficult to quantify, according to officials from Intel. Plus, EMIB is a proprietary technology that is only available at Intel.

Generally, embedded die packaging is challenging, as you need to align the bridge and die with little margin for error. “One of the main challenges for the silicon bridge solutions is the yield of the interposer containing the bridge. In any solution, yield is a critical factor,” TechSearch’s Vardaman said.

Silicon bridges are suitable for some apps, but they won’t dominate the landscape. Customers will likely use a range of options. “We don’t believe that silicon bridge solutions will displace silicon interposers or fan-out on substrate,” Vardaman said.

Still, others are moving ahead with the technology. For example, Samsung is developing what it calls an RDL Bridge. It’s an RDL-based bridge that connects the logic to the memory.

Then, in R&D, Imec is developing its own silicon bridge technology with a twist—it’s not only an alternative to 2.5D, but it also enables a high-density fan-out package.

Imec’s technology is similar to EMIB. The bridge, which has routing layers, connects one die to another. It also undergoes a flip-chip process. The technology enables 8 GBps per channel.

In the manufacturing flow, Imec’s process is different. Instead of embedding the bridge in the substrate, the assembly is done on a temporary wafer carrier. The dies are placed face up, and the bridge is then assembled on top of them on the carrier.

This is a simple explanation of the flow. “It’s not package-on-package (PoP). That’s like 400μm pitch. We are doing flip-chip at 40μm pitch,” said Eric Beyne, a fellow and program director for 3D system integration at Imec. “This is 100 times denser than PoP. The channels allow for even 20μm. It’s a very short interconnect for a high bandwidth connection.”
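
The “100 times denser” figure follows from the square of the pitch ratio, since connections on a regular grid scale with area. A quick sketch, assuming a simple square-grid layout:

```python
# The "100 times denser" claim follows from the square of the pitch ratio:
# on a regular grid, connections per unit area scale as 1/pitch^2.
# The 400um (PoP-like) and 40um (flip-chip) pitches come from the quote.

def connections_per_mm2(pitch_um: float) -> float:
    """Connection density for a regular grid at the given pitch."""
    return (1000.0 / pitch_um) ** 2

pop_density = connections_per_mm2(400)     # ~6 connections per mm^2
bridge_density = connections_per_mm2(40)   # ~625 connections per mm^2
print(f"density ratio: {bridge_density / pop_density:.0f}x")  # 100x
```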

Meanwhile, Amkor is also working on the technology in R&D, although there are still some issues that need to be addressed. “Amkor has a version of a bridge technology that we are incubating,” said Ron Huemoeller, vice president and head of corporate R&D at Amkor. “The complexity of the interconnect remains the technical challenge, including package interaction, interconnection strategy and test strategy.”

Plus, there are some supply chain issues, which resemble the problems with 2.5D. “Logistically, die ownership needs to be understood,” Huemoeller said. “Timing, supply chain maturity and cooperation are paramount to enabling this advanced form of MCM/heterogeneous integration.”

There are other issues. “Silicon bridge is a substrate/PCB-based technology; therefore, an engagement with substrate suppliers is required for this solution. I understand there are only a few substrate suppliers who are able to handle this technology, due to the precise position control that is required for embedding EMIB. In terms of quality control, customers may choose to rely on major substrate suppliers who can meet their expectations,” said Seung Wook Yoon, director of group technology strategy for the JCET Group. “Silicon bridge is OK for fine-pitch flip-chip applications, but I do see some challenges in terms of ultra-fine pitch for HBM-2/2E or 3 with finer microbumps of less than 40μm, primarily due to EMIB embedding tolerance. HBM-2, 2E and 3 require more I/O and a finer microbump pitch due to higher I/O requirements to meet performance expectations.”

In addition, the silicon bridge model isn’t as straightforward as other technologies. “It would be simple in terms of a business model and supply chain if it were similar to the current flip-chip business model,” Yoon said.

Besides 2.5D and bridges, there are other approaches as well, including fan-out. In fan-out, the dies are packaged on the wafer. It doesn’t require an interposer. “In fan-out, you expand the available area of the package,” said John Hunt, senior director of engineering at ASE.

At the high end, the industry is developing high-density fan-out. This is defined as a package with more than 500 I/Os and less than 8μm line and space.


Fig. 5: ASE’s FOCoS fan-out package. Source: TechSearch International

Today’s servers are using fan-out. Now, packaging houses are developing fan-out that can support HBM, but there are still some challenges. “The density of HBM is more eloquently addressed with silicon-level interconnections at sub-micron than with organic-based RDL,” Amkor’s Huemoeller said. “It is possible, with enough layers allocated for routing and via drops, to connect a high-bandwidth memory module to a logic chip through organic-based RDL. However, electrical properties are diminished and reliability challenges diminish the benefit of moving off silicon-based interposers at the moment. The reliability challenges can be overcome, but more engineering is still required.”

What’s next?
Heterogeneous integration is helping to solve many challenges in systems, but it’s important for another reason—it’s one way to scale a device.

For years, chipmakers introduced a new process technology at a given node every 18 to 24 months as a means to boost performance and lower the cost per transistor in a chip. At advanced nodes, though, cost and complexity are skyrocketing, so the cadence for a fully scaled node has extended to 2.5 years or longer. In addition, fewer foundry customers can afford to move to advanced nodes.

IC design costs are the big issue. Generally, IC design costs have jumped from $51.3 million for a 28nm planar device to $297.8 million for a 7nm chip, according to International Business Strategies (IBS). At 5nm, the IC design costs are expected to jump to $542.2 million, according to IBS.

For that and other reasons, a large number of IC vendors are staying at 28nm and above. Analog, mixed-signal and RF don’t require advanced nodes.

Still, there are a number of IC suppliers that are moving to advanced nodes, although the benefits of scaling are arguably shrinking. That’s where heterogeneous integration fits in, as it can be used for chips at both advanced and mature nodes. Bridges, fan-outs, 2.5D and other packages are all viable here.

“There are many different ways of realizing heterogeneous integration. It really depends on what it is you are trying to optimize and what’s available,” Intel’s Sabi said. “This is what I call a transition point in heterogeneous integration. We are seeing a whole bunch of products in the market. We are just at the beginning of a change.”

In semiconductors, chipmakers have found ways to scale the feature sizes of the transistor, enabling the industry to move from one node to the next.

In packaging, the feature sizes are on a much larger scale. But you can still scale a device by reducing certain parts of the package. “Anytime you scale bump pitch, you can potentially make the silicon a little smaller. You can improve the linewidths. You can improve via density. You can have more I/Os per millimeter. You can improve your bandwidth,” Sabi said.

Indeed, one way to scale the package is to incorporate finer RDLs. Another way is to use fine-pitch bumps or copper pillars on the die. For example, copper pillars are migrating from 50μm to 100μm in diameter down to 10μm to 20μm, according to Dow Electronic Materials.
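
A rough way to see why bump-pitch scaling matters, following Sabi’s point above: for a fixed I/O budget, the area of the bump field shrinks with the square of the pitch. The 1,000-I/O budget and the pitches below are illustrative assumptions, not figures from the article.

```python
# Rough illustration of package scaling: for a fixed I/O count, the bump
# field area shrinks with the square of the bump pitch (square grid assumed).
# The 1,000-I/O budget and the pitches are illustrative, not from the article.

def bump_field_area_mm2(io_count: int, pitch_um: float) -> float:
    """Area of a square grid of io_count bumps at the given pitch, in mm^2."""
    return io_count * (pitch_um / 1000.0) ** 2

for pitch_um in (100, 50, 20):
    area = bump_field_area_mm2(1000, pitch_um)
    print(f"{pitch_um:>3}um pitch: {area:.2f} mm^2 for 1,000 I/Os")
```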

Clearly, heterogeneous integration is happening in various markets today, although the volumes are still low. At advanced nodes, though, it is expected to become more prevalent, so customers will need to get ready sooner rather than later.

“It’s sort of like silicon. Whoever can get to 5nm technology faster is going to get the benefit. The same is true with package scaling,” Sabi added.




Comments

Gary Huang says:

Besides bridge-like solutions (2D die-to-die interconnect), heterogeneous integration still needs through-interconnects in the vertical axis.

Dr. Dev Gupta says:

Traditionally, OSATs have not done much pathfinding of new advanced packaging technologies on their own, but have followed the lead of IDMs in establishing them (e.g., various flip-chip technologies) and putting them into HVM. The OSATs have then largely replicated these established technologies and well-defined designs/flows and offered them at lower prices to their fabless customers, not just by using low-cost labor (though even in Taiwan it is only 15% of the total cost) and infrastructure in LCMRs, but mostly by accepting lower profit margins (15% vs. 40%).

Major offshore foundries, used to profit margins of more than 30%, have jumped into advanced packaging, as the skills and investment required to replicate the BEOL geometries used in dense off-chip integration (<2μm) were not available to OSATs. Providing these new services allows the foundries to improve yields of immature nodes (as for FPGAs) and lock in large customers (as with fan-out WLPs for APs from a particular system house).

For OSATs, the major attraction of fan-out WLPs over flip-chip is not technical but commercial, as they do not have to fork out a profit margin to their substrate suppliers. But even fan-out WLP is too expensive an investment for all but the major OSATs.

When an EMIB-style dense bridge is attached to a face-up fan-out WLP (before molding, which shifts dies, reduces placement accuracy and limits the shrink possible in the subsequent RDL), we are no longer talking about a WLP but a hybrid coreless flip-chip process.

With so many new players in advanced packaging, development activities are chasing unnecessarily high-complexity, high-cost solutions with low returns to both providers (foundries now doing packaging, OSATs) and users (those who want to focus on lower-cost, higher-volume segments, e.g., edge devices for AI).

For the OSATs, it is very important to understand the fundamentals and avoid dead-end activities and investments.

Watch for the packaging integration chapter in the IEEE IRDS (International Roadmap for Devices and Systems), which will be published shortly.

Tanj Bennett says:

Other considerations include heat dissipation and lead time added for the packaging work.

Packages including components generating significant heat cause an added problem if other chips reduce thermal conductivity, or if the heat reduces performance of DRAM and other elements which work better at lower temperature.

The need for thermal planning, signal integrity, and layout changes in chips (which might have been designed primarily for other packages) can add many months to the timeline for design, sampling, test and validation.

Meanwhile, the ecosystem for just putting the chips on a conventional board will present competition, or be used by the competition. So the primary use cases are likely to be where the advantage of packaging is compelling, for example mobile, or special functions like HBM. The general case will require practice, tools, and a reliable supply chain with second sources.

It is going to take time for 2.5D packaging to be widely used.
