Chiplet Momentum Rising

Companies and organizations racing to define interfaces and standards as SoC scaling costs continue to rise.

The chiplet model is gaining momentum as an alternative to developing monolithic ASIC designs, which are becoming more complex and expensive at each node.

Several companies and industry groups are rallying around the chiplet model, including AMD, Intel and TSMC. In addition, there is a new U.S. Department of Defense (DoD) initiative. The goal is to speed up time to market and reduce costs by integrating pre-developed and pre-tested chiplets in a package. The problem is that there are no standards, and the available options, including some proprietary approaches, can be confusing.

In the chiplet model, a chipmaker may have a menu of modular dies, or chiplets, in a library. These chiplets can have different functions and could be developed at various process nodes. But because they are in a package or system rather than on a single die, customers could mix and match the chiplets and connect them using a die-to-die interconnect scheme.

In contrast, IC scaling, the traditional way of advancing a design, relies on shrinking different chip functions at each node and packing them onto a monolithic die. But IC scaling is becoming too expensive for many, and the performance and power benefits are diminishing at each node.

While scaling remains an option for new designs, many are searching for alternatives. One way to get the benefits of scaling is to integrate heterogeneous dies in an advanced package. Chiplets are another form of heterogeneous integration.

Chiplets aren’t new. For years, the industry has talked about this idea, and some have developed products. In 2019, though, several industry heavyweights, such as AMD, Intel and TSMC, gave the technology a boost, if not more credibility, by introducing processors or technologies based on the concept.

Not all designs are moving toward chiplets, and they are not required for all applications. Conventional packaging will continue to dominate the landscape. But chiplets are gaining steam for select applications, with more announcements in the works. “We are at the start of a new era with chiplets,” said Jan Vardaman, president of TechSearch International. “Intel announced a product that will be on the market at the end of 2020. Maybe more announcements will follow. AMD is shipping a chiplet-based product using an organic substrate.”

Here are some of the developments in this arena:

  • AMD, Intel, TSMC and others this year have rolled out or will unveil new chip designs and technologies based on chiplets.
  • The DoD has launched a new program, which is a follow-on to DARPA’s chiplets effort. The DoD program will continue to develop chiplets. It also hopes to establish a commercial entity to provide chiplet-enabled technology capabilities for U.S. government agencies.
  • The chip industry hopes to develop a standard die-to-die interconnect/interface for chiplets, but that won’t happen anytime soon. New interface technologies are emerging, including one derived from high-bandwidth memory (HBM), dubbed OpenHBI.


Fig. 1: A six-chiplet design with 96 cores. Source: Leti

Why chiplets?
Throughout most of the course of Moore’s Law, chipmakers have developed a new process with more transistor density every 18 to 24 months. So at each new generation, or node, a device maker could cram more and smaller transistors onto a die at a lower cost per transistor.

This chip scaling formula worked for many IC suppliers until 16nm/14nm, in about 2013. At that point, traditional planar transistors hit the wall, prompting many to migrate to finFET transistors. Chips based on finFETs are faster, and the current leakage is lower. But finFETs are also more expensive, and design and R&D costs have skyrocketed. So the cadence for a fully scaled node has extended from 18 months to 2.5 years or longer.

Today, some companies are shipping chips using finFETs at 7nm, with 5nm in R&D, but their design and manufacturing costs are astronomical. “At bleeding-edge nodes, some of these chips are huge. The reticle field, in some cases, can maybe only sustain a handful of these chips. In some cases, the yields are not very good,” said Walter Ng, vice president of business development at UMC.

Still, some will continue to develop chips at advanced nodes. But many are looking at alternatives, including advanced packaging. Advanced packaging isn’t new, either. Assembling dies in a package is one way to advance a design. But advanced packaging is mainly used in niche markets due to cost.

Still, as the costs skyrocket for advanced nodes, packaging is becoming a viable option and a differentiator. “At one time, the real system customization happened at board assembly. The packages were all pretty standard,” said Rich Rice, senior vice president of business development at ASE. “Now, what you are seeing is that the package is being customized to the level it doesn’t look the same even on the outside. Many are trying to integrate more and more of the system or subsystem-level functionality into them.”

For years, packaging houses have offered several different advanced package types, such as 2.5D, 3D and fan-out. The decision to use a given package type depends on the requirements.

Fan-out is one option. In one example of fan-out, a DRAM die is stacked on a logic chip in a package.

In 2.5D, dies are stacked or placed side-by-side on an interposer, which incorporates through-silicon vias (TSVs). The interposer acts as the bridge between the chips and a board, providing more I/Os and bandwidth.

HBM is one example of 3D. In HBM, DRAM dies are stacked on each other and connected using TSVs. There are other examples, as well.

Chiplets are yet another option. Instead of cramming everything onto a large die, the idea is to break up the die into smaller dies and integrate them in a package. The dies, or chiplets, are closer together, enabling lower latencies. This supposedly reduces the cost and provides better yields.
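
To see why smaller dies can yield better, here is a minimal sketch using the classic Poisson defect-density yield model; the defect density and die areas are illustrative assumptions, not figures from the article.

```python
import math

def die_yield(area_cm2: float, defect_density: float) -> float:
    """Poisson yield model: Y = exp(-D0 * A)."""
    return math.exp(-defect_density * area_cm2)

D0 = 0.2  # assumed defects per cm^2

monolithic = die_yield(8.0, D0)   # one large 8 cm^2 die
chiplet = die_yield(2.0, D0)      # one 2 cm^2 chiplet

print(f"Monolithic die yield:  {monolithic:.1%}")  # ~20%
print(f"Single chiplet yield:  {chiplet:.1%}")     # ~67%
```

With numbers like these, far less silicon is scrapped per good die, which is where the cost argument comes from. The catch, discussed below, is that the package only works if every chiplet placed in it is actually good.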

A chiplet isn’t a package type per se. Chiplets could be integrated in existing package types, such as 2.5D, 3D and fan-out. Some may develop new architectures using chiplets.

“It’s an architecture methodology. There is no preset defined chiplet architecture. The chiplet approach will be tailored to what the requirements are for that particular product,” UMC’s Ng explained. “The key is how you assemble and interconnect the chiplets. There are a lot of considerations in play.”

Existing packages won’t go away because of chiplets. Today’s advanced package types will remain viable options, along with the chiplet model. “We’re seeing a lot of interest on both,” Ng said. “We’ve seen interest in our interposer solutions. Traditionally, the interest has been in high-end graphics. Now, we are seeing more interest in performance enterprise solutions. We’re also seeing interest in non-traditional areas.”

Several companies already have developed multi-die designs using chiplets, but developing these products presents some challenges.

First, there is no single standard die-to-die interconnect or interface solution in the market. Today, there are at least two chiplet die-to-die interface technologies—Intel’s Advanced Interface Bus (AIB) and the Optical Internetworking Forum’s CEI-112G-XSR scheme.

In addition, the Open Domain-Specific Architecture (ODSA) subgroup, an industry group, is defining two other interfaces — Bunch of Wires (BoW) and OpenHBI. There also are some proprietary solutions. Each technology has its own merits, but it’s unclear if the industry will rally around one standard.

“The commercial standardization of this will be difficult,” ASE’s Rice said. “Whether it becomes a wide open commercial model remains to be seen. It’s going to be difficult.”

That’s not the only issue with chiplets. Assembly issues, design tool support, test and yields are among the challenges here.

“As you integrate more and more dies, one of the biggest challenges is the known-good-die strategy,” said Preeti Chauhan, technical program manager at Google, in a recent presentation at a MEPTEC event. “We take an interposer and try to put a good functional die on top of that. And then we try to take more of those interposer stacks and connect them. How do we make sure that those stacks are actually good? When we test it, it might fail because of one particular stack. Known-good-die is going to be very critical.”

Chiplets also require a sound process control strategy. Otherwise, yields will suffer. “Whether it’s modular chiplets that are assembled in a package with die-to-die interconnects or integrating multiple dies in an advanced package, ensuring there are inspection and metrology processes in place to ensure yields is critical,” said John Hoffman, an engineering manager at CyberOptics.

Test is also key. “In a heterogeneous integrated system, the impact of composite yield fallout due to a single chiplet is creating new performance imperatives for wafer test in terms of test complexity and coverage. From a test perspective, making chiplets a mainstream technology depends on ensuring ‘good enough die’ at a reasonable test cost,” said Amy Leong, senior vice president at FormFactor. “Wafer-level test plays a critical and intricate role in the chiplet manufacturing process. Take the case of HBM. It enables early identification of defective DRAM and logic dies so that they can be removed before the complex and expensive stacking stage. Further testing of the post-stack wafer ensures the full functionality of completed stacks before dicing them into stand-alone assemblies. Therefore, a test strategy to balance the test cost and the cost of undetected yield fallout is needed to bring heterogeneous integration to high-volume production.”
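
To put rough numbers on that composite-yield concern, here is a hedged sketch; the escape rate, assembly yield and chiplet counts are illustrative assumptions, not figures from FormFactor or the article.

```python
def package_yield(n_chiplets: int, escape_rate: float, assembly_yield: float) -> float:
    """Probability a finished multi-chiplet package works, assuming independent chiplets.

    escape_rate: fraction of chiplets that pass wafer test but are actually bad.
    assembly_yield: per-chiplet probability that the bonding/stacking step succeeds.
    """
    per_chiplet_ok = (1.0 - escape_rate) * assembly_yield
    return per_chiplet_ok ** n_chiplets

for n in (2, 4, 8):
    y = package_yield(n, escape_rate=0.005, assembly_yield=0.99)
    print(f"{n} chiplets: {y:.1%}")
# 2 chiplets: ~97.0%   4 chiplets: ~94.2%   8 chiplets: ~88.6%
```

Even small per-chiplet test escapes compound as the chiplet count grows, which is why wafer-level test coverage has to improve as more dies go into a single package.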

DoD chiplets
The chiplet model emerged in 2015, when Marvell introduced its modular chip (MoChi) architecture. MoChi, which uses Kandou’s bus interface, is used in Marvell’s own products.

Since then, several companies have been developing devices based on chiplets. The defense community is also interested.

In 2017, the U.S. Defense Advanced Research Projects Agency (DARPA), part of the U.S. DoD, launched its chiplet program, dubbed Common Heterogeneous Integration and IP Reuse Strategies (CHIPS).

DARPA hoped to drive standards and a new ecosystem for the CHIPS program, which is still ongoing. Boeing, Cadence, Intel, Lockheed, Micron, Northrop Grumman, Synopsys and others are part of CHIPS. Intel licensed its AIB technology to the group.

The DoD is interested in chiplets for several reasons. It has long recognized that chip technology is essential for U.S. military superiority.

The defense community requires advanced chips, but the volumes are typically low. So the defense community has little leverage with capacity and pricing at the foundries. Plus, the most advanced chips are manufactured by foundries outside the U.S. The defense community uses non-U.S. foundries, but it prefers to procure chips onshore.

“It’s becoming much harder for DoD to get access to custom state-of-the-art silicon. It’s such an expensive industry now. Making custom ASICs at state-of-the-art nodes can be hundreds of millions of dollars,” said Brett Hamilton, distinguished scientist for Trusted Microelectronics at the Naval Surface Warfare Center, Crane Division (NSWC Crane), a naval laboratory and a field activity of the Naval Sea Systems Command.

For the DoD, the chiplet model is one way to develop chip architectures at lower price points. “The DARPA CHIPS program is a research and development program to further the work on the proof-of-concept of being able to do heterogeneous integration and advanced packaging with multiple chiplets for enabling DoD-specific applications. What makes that appealing to the DoD is that it gives us the ability to leverage the best state-of-the-art commercial technologies like FPGAs, processors and AI chips,” Hamilton said.

Late last year, the DoD launched a new, follow-on program to CHIPS, called the SOTA Heterogeneous Integration Prototype (SHIP) program. Hamilton is the principal technical lead for the SHIP project.

Recently, NSWC Crane awarded multiple contracts to select companies in the SHIP program, namely GE, Intel, Keysight, Northrop Grumman, Qorvo and Xilinx.

Like CHIPS, the SHIP program wants to establish chiplet interface standards and enable the assembly of systems from modular IP blocks.

“SHIP is leveraging what the DARPA CHIPS program is doing. They are the pipe cleaner for this. They are still ongoing, so we continue to work closely with them to mature and transition those products that are already in the queue,” Hamilton said. “The SHIP program will be focusing more on the actual capability to produce them at a higher volume. CHIPS is a good proof-of-concept. Now we need to establish the capability to do that for volumes that will support DoD.”

Eventually, the program hopes to establish a standalone commercial entity that would provide access to the technology for U.S. government agencies. “We envision SHIP being a commercially owned and operated capability,” Hamilton said. “I can’t get into the status of specific SHIP performers, but the idea is that it be an available capability that the defense industrial base can go to directly, just like they would to design a custom ASIC. If you want to design an ASIC, for example, you would go through one of the foundries. It will be a similar process to this.”

The standalone entity would also address security concerns. “Now, we want to go to the next step and develop an ITAR and secure manufacturing facility where we can assemble parts based on multiple chiplets for specific applications to meet DoD-unique requirements,” he said.

Even with SHIP, the defense community can continue to develop advanced designs using traditional approaches at U.S. and non-U.S. foundries. SHIP gives them another option.

Commercial chiplets, standards
Chiplets are also heating up in the commercial market. Last year, for example, Intel unveiled Foveros, a methodology that takes IP blocks and integrates them in 3D-like architectures.

Using this technology, Intel introduced a 3D CPU platform, code-named “Lakefield.” This stacks a 10nm compute die, containing one high-performance core and four low-power cores, on a 22nm base die in a package.

In R&D, Intel is developing other products based on chiplets, including a GPU. “The future is about specialization at scale with advanced packaging and interoperable chiplets,” said Ramune Nagisetty, director of process and product integration at Intel. “And then you have specialized nodes for specific functionality like power delivery, memory, or specific types of accelerators like GPUs.”

AMD also has introduced multi-die processors based on this concept. Both AMD and Intel will continue to develop chips at advanced nodes. Of course, not all designs require advanced nodes or chiplets.

Meanwhile, foundries are also pursuing various strategies in the arena. For example, TSMC is working on a technology called System on Integrated Chips (SoIC). SoIC paves the way towards integrating smaller chips with different process nodes in a package.

SoIC utilizes advanced chip-stacking techniques, enabling customers to develop 3D-like architectures. The stacking is done using wafer bonding, which can bond two wafers together or a chip to a wafer.

GlobalFoundries, UMC and others are also developing similar wafer bonding techniques, which promise to enable a new class of chips.

In the R&D world, meanwhile, Leti has unveiled an active interposer technology for chiplet-based designs. On the interposer, Leti stacked six chiplets with an aggregate total of 96 cores. Each chiplet is based on 28nm FD-SOI. “Chiplet-based ecosystems will deploy rapidly in high-performance computing and various other market segments, such as embedded HPC for the automotive and other sectors,” said Pascal Vivet of Leti.


Fig. 2: Leti’s active interposer. Source: Leti

OSATs, meanwhile, also want to participate in the market in one form or another. Fan-out is one example. “You don’t have to limit it to a single die with fan-out. You can do both heterogeneous and homogeneous integration, where you split your dies up and combine them in a fan-out package,” said John Hunt, senior director of engineering at ASE.

All told, the industry has proven the chiplet model works. Generally, though, the current products utilizing chiplets are based on proprietary die-to-die interconnect/interface schemes. Larger companies can afford to develop architectures with proprietary technologies. But most companies don’t have the time or resources to go down this path, so there is a need for open, off-the-shelf solutions.

That’s where standards fit in. For example, the ODSA is driving the development of an open die-to-die interconnect/interface standard between chiplets. ODSA also is working on a chiplet design exchange. In theory, all of this would enable customers to develop chiplet-based designs.

“Our basic aim is to create a mechanism by which you can create products when mixing and matching chiplets from multiple vendors. That’s not possible today. Almost all multi-chiplet products are single vendor products,” said Bapi Vinnakota, the ODSA subproject lead within the Open Compute Project (OCP).

“When you have two chiplets and if you want them to work together, you have to connect them physically and you need a logical interface between them,” Vinnakota said. “We want to define both the physical and logical connectivity.”

Today, though, the industry has already developed die-to-die interface schemes, namely AIB and XSR. Intel developed AIB. And the Common Electrical I/O (CEI) 112G XSR (Extra Short Reach) interface, designed by the OIF, can be used for chiplets and optical engines.

AIB and XSR may not be suitable for all apps. In response, ODSA is defining two new physical-layer (PHY) die-to-die interfaces—Bunch of Wires (BoW) and OpenHBI. Last year, ODSA released the 0.7 version of the BoW spec with 0.9 due out this year. Avera/Marvell, zGlue and others are working on the spec.

“The first one we’ve defined is a Bunch of Wires. It’s a guiding principle for how we make this PHY as easy to design as we can, while still meeting the density requirements of a large number of applications,” Vinnakota said.

BoW is a die-to-die parallel interface, supporting 28nm to 5nm dies. BoW supports conventional and advanced packages, including low-cost organic substrates and interposers. The goal is to develop an interface with a bandwidth of >100Gbps/mm for all packaging options and >1Tbps/mm for select packages. The energy efficiency target is <1pJ/bit.
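
As a rough back-of-the-envelope check on what those targets imply for link power, the sketch below multiplies the quoted bandwidth density by the quoted energy per bit; it assumes die edge (beachfront) is the limiting resource and uses the spec targets, not measured silicon.

```python
def edge_power_w_per_mm(gbps_per_mm: float, pj_per_bit: float) -> float:
    """Link power per mm of die edge = bandwidth density * energy per bit."""
    bits_per_sec = gbps_per_mm * 1e9
    joules_per_bit = pj_per_bit * 1e-12
    return bits_per_sec * joules_per_bit

print(edge_power_w_per_mm(100, 1.0))    # 0.1 W per mm of die edge (conventional packages)
print(edge_power_w_per_mm(1000, 1.0))   # 1.0 W per mm of die edge (advanced packages)
```

At the 1Tbps/mm target, every millimeter of die edge devoted to BoW links would dissipate on the order of a watt, which is why the sub-1pJ/bit efficiency goal matters.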

ODSA is developing another interface spec — OpenHBI or Open High Bandwidth Interconnect for chiplets. Proposed by Xilinx, OpenHBI leverages the HBM physical-layer specification. The initial technology, dubbed OpenHBI-2, is based on the HBM2/2e-PHY spec. It enables a die-to-die interconnect over interposers, fan-out and fine-pitch organic substrates.

OpenHBI-2 calls for a link rate of 2.4-3.2Gbps, a 3mm reach, and an I/O energy of 0.9pJ/bit at 1.2V. A draft is slated by year’s end. The next version, OpenHBI-3, is in R&D.

Conclusion
Clearly, the chiplet model is intriguing. It already is enabling a new class of designs. More are in the works.

New ideas provide an engine for growth in the IC world. That’s badly needed, especially as the costs for traditional chips are soaring out of control.

 

Related Stories

Waiting For Chiplet Interfaces

Getting Down To Business On Chiplets

The Race To Next-Gen 2.5D/3D Packages

What’s Next In Advanced Packaging


