IDMs are leveraging the chiplet model; others are still working on it.
The chiplet model continues to gain traction in the market, but several challenges still stand in the way of broader support for the technology.
AMD, Intel, TSMC, Marvell and a few others have developed or demonstrated devices using chiplets, which is an alternative way to develop an advanced design. Beyond that, however, the adoption of chiplets is limited in the industry due to ecosystem issues, a lack of standards and other factors. Work is underway to solve these issues. And behind the scenes, several foundries and OSATs are putting the pieces in place to help customers with chiplets.
With chiplets, the goal is to reduce product development times and costs by integrating pre-developed dies in an IC package. So a chipmaker may have a menu of modular dies, or chiplets, in a library. Chiplets could have different functions at various nodes. Customers can mix-and-match the chiplets and connect them using a die-to-die interconnect scheme.
This isn’t a new concept. Over the years, several companies have shipped chiplet-like designs, but the model is beginning to snowball for good reason. For advanced designs, the industry typically develops a system-on-a-chip (SoC), where you shrink different functions at each node and pack them onto a monolithic die. But this approach is becoming more complex and expensive at each node.
While some will continue to follow this path, many are looking for alternatives. Another way to develop a system-level design is to assemble complex dies in an advanced package. Chiplets are a way of modularizing that approach.
“We’re in the early stages. More and more products from Intel and our competitors are going to reflect this approach moving forward. Every major foundry has a technology roadmap of increasing the interconnect densities for both the 2.5D and 3D integration approaches,” said Ramune Nagisetty, director of process and product integration at Intel. “In the coming years, we will see it expand in 2.5D and 3D types of implementations. We will see it expand into logic and memory stacking and logic and logic stacking.”
Intel and a few others have the technologies in place to develop these products, but many companies don’t have all of the pieces. As a result, they will need to locate the technologies and find a way to integrate them, which presents some challenges. Among them:

• A lack of standard die-to-die interfaces, which limits interoperability between chiplets from different vendors;
• Gaps in EDA tools and reference workflows for co-designing and verifying multi-die products;
• Known-good-die testing, where probing fine-pitch microbumps is difficult and costly; and
• Business-model and supply chain questions, such as who bears the risk if a third-party chiplet fails after packaging.
Work is underway to overcome all of these challenges, and over time the chiplet model will expand. It will not replace traditional SoCs, however. No one technology can meet all needs, so there is room for multiple architectures, and many companies will never develop chiplets at all.
Chiplet apps and challenges
For decades, chipmakers introduced a new process technology every 18 to 24 months. At this cadence, vendors introduced new chips based on the latest process, enabling devices with more transistor density at lower costs.
This formula began to unravel starting at the 16nm/14nm node. Suddenly, IC design and manufacturing costs skyrocketed, and since then the cadence for a fully scaled node has extended from 18 months to 2.5 years or longer. Of course, not all chips require advanced nodes. And not everything that currently is put on the same die benefits from scaling.
This is where chiplets fit in. A bigger chip can be broken down into smaller pieces and mixed and matched as needed. Chiplets presumably have a lower cost and better yield than a monolithic die.
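The yield argument can be illustrated with a simple back-of-the-envelope model. The sketch below uses the classic Poisson yield formula with a hypothetical defect density and hypothetical die areas (none of these figures come from any vendor), and it only compares raw silicon consumed, ignoring packaging, test and die-to-die interconnect overhead:

    import math

    def poisson_yield(area_mm2, d0_per_cm2):
        # Classic Poisson yield model: Y = exp(-A * D0)
        return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

    D0 = 0.1        # hypothetical defect density, defects per cm^2
    mono_mm2 = 800  # hypothetical monolithic SoC area, mm^2
    chip_mm2 = 200  # hypothetical chiplet area, mm^2 (four per product)

    y_mono = poisson_yield(mono_mm2, D0)   # ~0.45
    y_chip = poisson_yield(chip_mm2, D0)   # ~0.82

    # Wafer silicon consumed per good product, assuming chiplets are tested
    # (known good die) before packaging; packaging and test costs ignored.
    print(mono_mm2 / y_mono)        # ~1,780 mm^2 for the monolithic die
    print(4 * chip_mm2 / y_chip)    # ~980 mm^2 for four tested chiplets

Under those assumptions, four smaller tested dies consume roughly 45% less wafer area per good product than one large die, which is the basic economic case for disaggregation.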
A chiplet isn’t a package type. It’s part of a packaging architecture. With chiplets, dies could be integrated into an existing package type, such as 2.5D/3D, fan-out or multi-chip modules (MCMs). Some may develop entirely new architectures using chiplets.
All this depends on the requirements. “It’s an architecture methodology,” said Walter Ng, vice president of business development at UMC. “It’s optimizing the silicon solution for the required task. It’s also optimizing the economic solution. All of those have performance considerations, whether it’s speed, heat or power. It also has a cost factor, depending on what approach you take.”
There are different approaches here. For example, using a chiplet methodology called Foveros, Intel last year introduced a 3D CPU platform. This stacks a 10nm compute die, which integrates one large core and four smaller cores, on top of a 22nm base die in a single package.
Fig. 1: 2.5D and 3D technologies using Intel’s bridge and Foveros technologies. Source: Intel
AMD, Marvell and others also have developed chiplet-like products. Generally, these designs are targeted for the same applications as today’s 2.5D packaging technologies, such as AI and other data-intensive workloads. “Logic/memory on an interposer is probably the most common implementation now,” Intel’s Nagisetty said. “In high-performance products that require large amounts of memory, you will see a chiplet-based approach.”
But chiplets won’t dominate the landscape. “There’s an ongoing increase in the types and numbers of devices,” Nagisetty said. “I don’t think all products will go to a chiplet-based approach. In some cases, a monolithic die is going to be the lowest-cost option. But for high-performance products, it’s safe to say that a chiplet-approach is going to become the norm, if it’s not already.”
Intel and others have the pieces in place to develop these designs. Generally, developing a chiplet-based product requires known-good die (KGD), EDA tools, die-to-die interconnect technologies, and a manufacturing strategy.
“If you look at who is doing chiplet-based designs today, they tend to be vertically integrated companies. They have all the pieces in-house,” said Eelco Bergman, senior director of sales and business development at ASE. “If you are going to stitch together several pieces of silicon, you need to have a lot of detailed information about each of those chips, their architectures and the physical and logical interfaces on those chips. You need to have EDA tools that allow the co-design of different chips to be tied together.”
Not all companies have the pieces in-house. Some pieces are available, while others aren’t ready. The challenge is to locate the necessary pieces and integrate them, which will take time and resources.
“Chiplets seem to be the hottest topic right now. The main reason is because of the diversity of applications and architectures required at the edge,” said Scott Kroeger, chief marketing officer at Veeco. “Chiplets could help resolve that, if it’s done right. There is still a lot of work to be done there. The question is how do you get to the point where you can start to incorporate all these different types of devices into one.”
So where does one start? For many, design service companies, foundries and OSATs are possible starting points. Some foundries not only manufacture chips for others, but they also provide various packaging services. The OSATs provide packaging/assembly services.
Some are already preparing for the chiplet era. For example, TSMC is developing a technology called System on Integrated Chip (SoIC), which enables 3D-like designs using chiplets for customers. TSMC also has its own die-to-die interconnect technology called Lipincon.
Other foundries and OSATs provide various advanced package types, but they are not developing their own die-to-die interconnect schemes. Instead, foundries and OSATs are working with various organizations that are developing third-party interconnect schemes. This is still a work in progress.
Interconnects are critical. A die-to-die interconnect joins one die to another in a package. Each die consists of an IP block with a physical interface. One die with a common interface can communicate to another die via a short-reach wire.
Many have developed interconnects with proprietary interfaces, meaning they are used for a company’s own devices. But to broaden the adoption of chiplets, the industry requires interconnects with open interfaces, enabling different dies to communicate with each other.
“If the industry wants to move toward an ecosystem that supports chiplet-based integration, that would mean different companies would have to start sharing chip IP with each other,” ASE’s Bergman said. “These are things that are not traditionally done. That’s a hurdle. There is one way to overcome that. Instead of sharing all of the die IP, the devices implement an integrated standard interface.”
For this, the industry is taking a page from the DRAM business. DRAM makers use a standard interface, DDR, to connect chips in systems. “[Using this interface,] I don’t need to know the details of the memory device design itself. I just need to know what the interface looks like and how I need to connect to my chip,” Bergman said. “The same will be true when you start talking about chiplets. The idea is to lower that hurdle of IP sharing to say, ‘Let’s drive toward some common interfaces so that I know how the edges of my chip and your chip need to click together in a modular, LEGO-like fashion.’”
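Conceptually, a standard die-to-die interface plays the same role as an abstract interface in software: each vendor keeps its internal design private and publishes only how the edge of the die behaves. The sketch below is purely illustrative, and the class and field names are hypothetical rather than part of AIB, BoW or any other specification:

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class LinkParams:
        lanes: int            # number of parallel wires at the die edge
        gbps_per_lane: float  # per-wire data rate
        pj_per_bit: float     # energy per transferred bit

    class DieToDieInterface(Protocol):
        # Only the interface is public; the chiplet's internal IP stays private.
        def link_params(self) -> LinkParams: ...
        def send(self, payload: bytes) -> None: ...
        def receive(self) -> bytes: ...

    # Any two chiplets that implement the same interface can be clicked
    # together at package-integration time, LEGO-style, without either
    # vendor disclosing its internal design.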
Finding standard interfaces
The good news is that companies and organizations are developing open die-to-die interconnect/interface technologies. These technologies include AIB, BoW, OpenHBI and XSR. Each is at a different stage of development. No one technology can fit all needs, so there is room for several schemes.
Developed by Intel, the Advanced Interface Bus (AIB) is a die-to-die interface scheme that transports data between chiplets. There are two versions. AIB Base is for “lighter-weight implementations,” while AIB Plus is designed for higher speeds.
“AIB doesn’t specify a maximum clock rate, and the minimum is very low (50MHz). AIB shines at high bandwidths and the typical data rate per wire is 2G per second,” said David Kehlet, research scientist at Intel, in a white paper. Intel also has a small commercial foundry business, as well as a significant internal packaging unit.
Meanwhile, the Optical Internetworking Forum is developing a technology called CEI-112G-XSR. XSR enables 112Gbps per lane die-to-die connectivity for ultra- and extra-short reach apps. XSR connects chiplets and optical engines in MCMs. Applications include AI and networking. A final version of the XSR standard is expected by year’s end.
In a separate effort, the Open Domain-Specific Architecture (ODSA) group is defining two other die-to-die interfaces—Bunch of Wires (BoW) and OpenHBI. BoW supports conventional and advanced packages. “The original goal was to come up with a versatile die-to-die interface that can work across a wide range of packaging solutions,” said Ramin Farjad, CTO of networking/automotive at Marvell, in a recent presentation.
Still in R&D, BoW comes in two flavors, terminated and unterminated. BoW has a chip-edge throughput of 0.1Tbps/mm (simple interface) or 1Tbps/mm (advanced interface) with a power efficiency of <1.0pJ/bit.
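Those headline numbers translate into package-level figures fairly directly. A rough calculation for a hypothetical 5mm usable die edge (the edge length is an assumption for illustration, not part of the BoW proposal):

    edge_mm = 5.0        # assumed usable die-edge length
    tbps_per_mm = 1.0    # BoW advanced interface (use 0.1 for the simple one)
    pj_per_bit = 1.0     # upper end of the quoted efficiency

    bandwidth_tbps = edge_mm * tbps_per_mm                    # 5 Tb/s across that edge
    power_watts = bandwidth_tbps * 1e12 * pj_per_bit * 1e-12  # pJ/bit x bits/s = W
    print(bandwidth_tbps, power_watts)                        # 5.0 Tb/s, ~5 W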
Meanwhile, proposed by Xilinx, OpenHBI is a die-to-die interconnect/interface technology derived from high-bandwidth memory (HBM). HBM itself is used in high-end packages. In HBM, DRAM dies are stacked, enabling more memory bandwidth in systems. A physical-layer interface routes signals between the DRAM stack and an SoC in the package. The interface is based on a JEDEC standard.
OpenHBI is a similar concept. The difference is that the interface provides a link from one chiplet to another in a package. It supports interposers, fan-out and fine-pitch organic substrates.
“We are trying to leverage a proven JEDEC HBM standard,” said Kenneth Ma, principal architect at Xilinx, in a recent presentation. “We are trying to leverage the existing and the proven PHY technology. We can further optimize them.”
The OpenHBI spec has a 4Gbps data rate, 10ns latencies, and a 0.7-1.0pJ/bit power efficiency. The total bandwidth is 4,096Gbps. A draft is slated by year’s end. The next version, dubbed OpenHBI3, is also in R&D. It calls for 6.4Gbps and 10Gbps data rates with <3.6ns latencies.
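Those numbers are consistent with the HBM heritage. Dividing the aggregate bandwidth by the per-wire rate implies an HBM-like 1,024-bit-wide data bus, and the quoted efficiency bounds the link power. This is a quick cross-check, not a figure taken from the specification:

    gbps_per_wire = 4.0
    total_gbps = 4096.0

    lanes = total_gbps / gbps_per_wire   # 1,024 data wires, matching HBM's bus width
    print(lanes)

    bits_per_second = total_gbps * 1e9
    for pj_per_bit in (0.7, 1.0):        # quoted efficiency range
        watts = bits_per_second * pj_per_bit * 1e-12
        print(pj_per_bit, watts)         # ~2.9 W to ~4.1 W of link power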
Eventually, customers will have several die-to-die interconnect/interface options to choose from, but that doesn’t solve every problem. “The interoperability of chiplets coming from different companies is still very nascent. And that interoperability aspect does have challenges. That’s why you don’t see a lot of interoperable chiplets yet,” Intel’s Nagisetty said. “The other aspect of it is the business model. How do you manage the risk when perhaps you’re getting chiplets from a startup? For example, if those die potentially fail after the part has been packaged or in the field, what’s the business model in terms of that risk management. There’s a lot of complexity and supply chain management. It requires a whole new level of sophistication in the supply chain.”
With those issues in mind, some customers may think that chiplets aren’t worth the trouble in the long run. Instead, customers may end up developing a more traditional advanced package using an OSAT or foundry. “Many in the packaging industry may follow our path in the end as it is more simplistic in package reintegration,” said Ron Huemoeller, vice president of R&D at Amkor.
“A die-to-die bus type is typically something that is defined by our customers and not dictated by Amkor or an OSAT. Available interfaces like AIB and Bunch of Wires (BoW) are examples of ongoing efforts to make common specifications available for die-to-die interfaces, thus helping to enable the chiplet market in total. The option to use open standards or stay with a proprietary interface is always the customer’s choice. We currently see a mixture of both approaches from our customer base,” Huemoeller said. “It is important to note, the die-to-die interfaces span two broad categories from single ended wide buses (like the HBM data bus), to serialized interfaces with few physical lines but at much higher line speeds. The performance tradeoffs to be considered in all cases are latency, power and the number of physical wires, which influence the package selection. From a packaging standpoint, the bus type and physical line density will drive which package solution is chosen. Typically, either (1) a module type (2.5D or high-density fan-out on substrate) with higher wire density, or (2) MCMs on classic high-density package substrates.”
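Using the per-wire rates quoted earlier for an OpenHBI-style parallel bus and an XSR-style serialized link, the tradeoff Huemoeller describes is easy to quantify: delivering the same aggregate bandwidth with a wide, slow bus takes far more physical wires than a serialized interface, which in turn drives the package choice. The 4 Tb/s target below is an assumed example:

    import math

    target_gbps = 4096          # assumed aggregate bandwidth target
    wide_bus_gbps_per_wire = 4  # OpenHBI/HBM-style parallel bus rate
    serdes_gbps_per_lane = 112  # XSR-style serialized lane rate

    # 1,024 wires: high wire density typically pushes toward a 2.5D or
    # high-density fan-out module, per the quote above.
    print(math.ceil(target_gbps / wide_bus_gbps_per_wire))

    # 37 lanes: far fewer wires, so it can ride on a classic high-density
    # MCM substrate, at the cost of much higher line speeds.
    print(math.ceil(target_gbps / serdes_gbps_per_lane))

The serialized option saves wires but generally pays more energy per bit and adds serialization latency, which is exactly the latency/power/wire-count tradeoff the quote describes.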
Design issues
Hoping to solve many of those issues, the ODSA is developing a chiplet marketplace called the Chiplet Design Exchange (CDX). “The CDX aims to establish open formats for secure information exchange that preserve confidentiality. It will also have reference workflows that demonstrate the information flow for prototypes,” said Bapi Vinnakota, the sub-project lead for ODSA. “The CDX has broad participation from a range of companies, EDA vendors, OSATs, design service companies, chiplet vendors and distributors. The CDX has conducted studies on power estimation and testing for chiplets. It is building a chiplet catalog and will develop a packaging prototype.”
The timing of the CDX remains unclear. Meanwhile, customers require EDA tools to design chiplet-enabled products. The tools are available for advanced packages and chiplet technologies. There are some gaps, however.
Chiplets require a co-design approach. “Moving to a disaggregated chiplet-based design methodology requires functionality from the IC, package and board domains,” said John Park, product management group director at Cadence. “Transitioning to a chiplet-based approach brings new challenges for both chip designers and package designers. For the package designer, doing layout and verification of silicon substrates presents new challenges. Requirements such as layout versus schematic and smart metal balancing are commonplace for IC designers, but for many package designers these are new concepts.”
Fortunately, EDA vendors offer cross-platform tools. Even then, there are several challenges. “For example, when moving from designing a single device to designing and/or integrating with multiple devices, the requirement for defining and management of top-level connectivity become crucial,” Park said. “Test is another area that changes significantly when designing multiple chiplets in a 3D stack. For example, how do you test the chiplet in the top of the stack that may not have any connections to the outside world?”
There are other issues. “To enable a good economy of scale, you want chiplets to be easily re-used in many different packages,” said John Ferguson, product management director at Mentor, a Siemens Business. “Doing so requires some strict documentation and adherence to agreed upon standards, whether that is industry-wide, process-wide, or company-wide. Without it, every design will continue to be a time consuming, cumbersome and expensive custom project.”
However, there are some gaps. For example, there is little design support for ODSA’s BoW and OpenHBI interfaces. In response, the ODSA is developing reference designs and workflows.
Developing design support for ODSA’s efforts doesn’t seem to be a problem. “For physical verification, there does not appear to be any significant difficulty or even tool enhancements,” Ferguson said. “As the requirements and standards are determined, it will simply be a question of implementing those as rule constraints into a typical DRC or LVS deck appropriately.”
Making chiplets
Meanwhile, once a design is complete, the chips are processed on a wafer in a fab. Then the wafers undergo a test step. The test cell consists of automated test equipment (ATE), a prober, and a probe card with tiny needles arranged in a custom pattern designed for the wafer.
The prober takes a wafer and places it on a chuck. It aligns the probe card with the wire-bond pads or tiny microbumps on the chips. The ATE performs electrical tests on the die.
“There are significant technical and cost challenges to test and probe chiplets,” said Amy Leong, senior vice president at FormFactor. “A new technical challenge is the significant reduction of the packaging bump pitch and size. The microbumps can be as small as 25μm or below. In addition, the microbump pattern is two to four times denser than an equivalent monolithic device. As a result, the aiming accuracy required to probe such a small feature over a 300mm wafer is equivalent to locating the head of a pin on a football field.”
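The scale of that analogy roughly checks out. Hitting a 25μm target anywhere on a 300mm wafer means an aiming tolerance of about 1 part in 12,000, the same order of magnitude as finding a millimeter-scale pin head on a field roughly 100m long. The pin and field dimensions below are assumed for illustration:

    target_um = 25.0    # microbump size to be probed
    wafer_mm = 300.0    # wafer diameter
    print(wafer_mm * 1000 / target_um)   # ~12,000:1 field-to-target ratio

    pin_head_mm = 1.5   # assumed pin-head diameter
    field_m = 100.0     # assumed field length
    print(field_m * 1000 / pin_head_mm)  # ~67,000:1, the same order of magnitude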
It’s generally cost-prohibitive and impractical to test every microbump. “The cost challenge is how to perform KGD intelligently and provide good enough test coverage at a reasonable cost. Design-for-test, built-in-self-test or test flow optimization are important tools to enable an economically viable test strategy,” Leong said.
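The economics behind that statement follow from compound yield: without adequate known-good-die screening, the probability that every die in a multi-die package is good falls off quickly with die count. A back-of-the-envelope sketch, with the 95% effective per-die yield chosen purely as an assumption:

    # If each die in a package is good with probability p after screening,
    # the whole package works only if every one of its n dies is good.
    p = 0.95                 # assumed effective per-die yield
    for n in (2, 4, 8):
        print(n, p ** n)     # 0.90, 0.81, 0.66 package-level yield

    # Each failed package also scraps its good dies, the substrate and the
    # assembly cost, which is why better KGD coverage pays for itself.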
Eventually, the chips are diced. In a package, the dies are stacked and connected via the microbumps. Microbumps provide small, fast electrical connections between different chips.
The dies are bonded using a wafer bonder, which is a slow process with some limitations. The most advanced microbumps have 40μm pitches. Using today’s bonders, the industry may be able to scale bump pitches down to somewhere between 10μm and 20μm.
Beyond that, the industry needs a new technique, namely copper hybrid bonding. For this, chips or wafers are bonded using a dielectric-to-dielectric bond, followed by a metal-to-metal connection. For chip stacking, hybrid bonding is challenging, which is why it’s still in R&D.
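The payoff from tighter pitches is quadratic, which is why hybrid bonding matters once conventional bonders run out of headroom. A quick comparison on an area-density basis, assuming a simple square grid of connections:

    def connections_per_mm2(pitch_um):
        # Square-grid approximation: one connection per pitch-by-pitch cell.
        return (1000.0 / pitch_um) ** 2

    # 40um is today's most advanced microbump pitch; 10-20um is roughly where
    # conventional bonders top out, per the text above.
    for pitch_um in (40, 20, 10):
        print(pitch_um, connections_per_mm2(pitch_um))  # 625, 2,500, 10,000 per mm^2

Every halving of pitch quadruples the achievable connection density, which is the pull toward copper hybrid bonding below that range.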
There’s another problem, too. In multi-die packages, one bad die can result in the failure of the whole package. “The chiplet approach or the various heterogenous integration approaches all involve complexities that drive the need for effective inspection for high yields and long-term reliability,” said John Hoffman, an engineering manager at CyberOptics.
Conclusion
Clearly, the chiplet model presents some challenges. Nonetheless, the technology is needed. Monolithic dies based on traditional chip scaling are here to stay, but fewer companies can afford to develop them at advanced nodes.
So the industry needs different options for problems that sometimes can’t be addressed by traditional approaches. Chiplets offer a range of possibilities and potential solutions.