Proprietary Vs. Commercial Chiplets

Who wins, who loses, and where are the big challenges for multi-vendor heterogeneous integration.


Large chipmakers are focusing on chiplets as the best path forward for integrating more functions into electronic devices. The challenge now is how to pull the rest of the chip industry along, creating a marketplace for third-party chiplets that can be chosen from a menu using specific criteria that can speed time to market, help to control costs, and behave as reliably as chiplets developed in-house.

So far, third-party chiplet use has been spotty. The general consensus is that a third-party chiplet marketplace will flourish at some point, in part because it’s cheaper to buy chiplets than to build them, providing there are sufficient standards in place for interoperability. The big unknowns are how these chiplets will perform compared to those developed in-house, which in turn will affect the pace of adoption, the total available market opportunity, and the subsequent rate of market consolidation. This is due to several variables:

  • An estimated 30% to 35% of all new chip design starts are for internal use among large systems companies. So rather than using off-the-shelf processors and IP, these companies are designing systems from scratch to optimize their internal processes or data types. Some of the chiplets developed for these applications are highly specialized and competitive secret sauce, but there are plenty of other functions within these systems that can be developed by third-party chiplet developers.
  • Mini-consortia are forming around different domains, such as biological or automotive applications. Some of these involve foundries and OSATs, which are beginning to develop standards for what are essentially assembly design kits, while others are developing organically. But in all cases, the focus is on mass production of chiplet-based designs with predictable yields.
  • Unlike soft IP, which must be synthesized into the host SoC’s process technology, chiplets can be developed at any process node. Whether they are interchangeable from one foundry to the next remains to be seen. Nevertheless, the ability to mix and match process nodes opens the door to many more choices. For example, instead of mostly digital chips with some analog, which was required in planar SoCs at the most advanced nodes, developers can create fully analog chiplets at whatever node works best. That opens up a whole new potential marketplace based on PPAC.

Chiplets represent one of the most fundamental shifts since the start of Moore’s Law. The idea has been floating around for decades, but until the introduction of finFETs, the benefits of planar scaling always outweighed the massive challenges of revamping the supply chain and the design and manufacturing processes, updating or adding new equipment, and the disruption caused by breaking down silos and shifting methodologies. All of this remains a massive challenge, and it’s one of the main reasons why it’s so hard to predict the pace of change.

Large chipmakers, including AMD, Intel, and Marvell, already have proven that chiplets work, and they are reaping the benefits of their pioneering work. But if history is any indication of how things will evolve, it’s not economically advantageous for these companies to develop all of what is essentially hardened IP themselves. This is one of the reasons they are supporting new standards for interoperability, particularly UCIe, and promoting the idea of a commercial marketplace. In addition, various government agencies have established goals for utilizing off-the-shelf commercially available chiplets as a way of speeding time to market and ultimately reducing costs.

Fig. 1: NIST’s chiplet architecture. Source: NIST

Still, as these initial implementations have proven, integrating chiplets and assembling them is much harder than it sounds. Big chipmakers have made this happen by building what are essentially chassis for chiplets. This allows chiplets to be designed within specific parameters, such as area, noise (electromagnetic, power, substrate, etc.), interconnect location, material interactions, and many other characteristics. But they also have worked through problems using a narrow set of parameters that are important for their designs.

Fig. 2: AMD’s EPYC Gen 4 processor using chiplets for scalability for different applications. Source: AMD/Hot Chips 2023

Intel’s approach to chiplets is similar. (See Fig. 3, below.)

Fig. 3: Intel’s scalable Xeon architecture based on compute chiplets and its multi-die interconnect bridge (EMIB). Source: Intel/Hot Chips 2023

“Even if you have a set of guides for a set of chiplets that will play well together, you still have various process variations, packaging variations, and so on,” said Ira Leventhal, vice president for U.S. Applied Research and Technology at Advantest America. “You can provide capabilities like die matching and support shift left to find defects sooner so there’s not a ton of packaging costs and scrap. But in this more complex environment, how do you optimize yield? This is really important, even if you have an optimal set of things that will play together. Much more needs to happen in the various steps to see it through.”

Chiplets also are being used on a highly targeted basis for other designs, where they are being integrated into advanced packages by OSATs. In these cases, they are being used like hardened IP, rather than as part of a chiplet-based design.

“We see our customers using chiplets for high-performance computing, as well as in network switches,” said Ou Li, senior director of engineering at ASE, at a recent CHIPCon panel. “They care about the performance and the signal integrity. For example, you could use optical I/O to replace high-speed SerDes. So chiplets are being used across multiple markets and multiple applications today, and the adoption rate will go higher and higher in the future.”

For a commercial marketplace, standards are still being developed for exactly how chiplets are characterized. Nevertheless, there are some definite advantages to this approach. Because chiplets are smaller than a reticle-sized SoC, yield generally is higher. The real integration challenges are external to the chiplet. There also are challenges involving how to test and inspect chiplets individually and after assembly, and how to measure things like die shift. For example, the dynamic power density of a particular use case can increase heat due to resistance and static leakage current. That, in turn, causes warpage, which stresses the bonds holding the chiplets in place. Dealing with all of this requires new flows to account for these problems earlier in the design cycle, as well as new equipment and entirely new process steps.
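The yield argument can be made concrete with the classic Poisson die-yield model. The defect density and die areas below are illustrative assumptions, not figures from any foundry; the sketch also surfaces a well-known subtlety — under a pure Poisson model, the combined yield of several known-good chiplets equals that of the monolithic die, so the economic win comes from scrapping one small die instead of a whole reticle-sized SoC.

```python
# Sketch of why smaller chiplets tend to yield better than one reticle-
# sized SoC, using the simple Poisson die-yield model Y = exp(-A * D0).
# The defect density (D0) and die areas are illustrative assumptions.
import math

D0 = 0.1  # defects per cm^2 (assumed)

def die_yield(area_cm2: float, d0: float = D0) -> float:
    """Poisson model: probability a die of the given area is defect-free."""
    return math.exp(-area_cm2 * d0)

# One ~8 cm^2 near-reticle-limit SoC vs. four 2 cm^2 chiplets.
soc_yield = die_yield(8.0)
chiplet_yield = die_yield(2.0)

# Under pure Poisson, four good chiplets have the same combined yield as
# the big SoC -- but known-good-die testing before assembly means a defect
# scraps one 2 cm^2 die, not the whole 8 cm^2 device.
all_four_good = chiplet_yield ** 4

print(f"SoC yield:          {soc_yield:.2%}")
print(f"Single chiplet:     {chiplet_yield:.2%}")
print(f"Four good chiplets: {all_four_good:.2%}")
```

Defect-clustering models such as Murphy’s or the negative binomial tilt the combined yield further in the chiplets’ favor, but even this simplest model shows why per-die yield rises as die area shrinks.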

The benefits of chiplets
There are three main reasons for using chiplets. First, they can be mixed and matched regardless of process node, which significantly reduces the cost to develop semiconductor devices.

“As these domain-specific architectures specialize more and more, and if it truly drives us into differentiated technology for each architecture, it has the potential to be very disruptive to the fabs, the equipment manufacturers, and the rest of the ecosystem,” said David Fried, corporate vice president of computational products at Lam Research, in a recent panel discussion at SEMI on the future of computing. “And if you look back 15 to 20 years ago, when monolithic integration node after node was chugging along, we looked at some of the process innovations required for heterogeneous integration as additive, and a little bit painful, and that’s why we kept chugging along. But now, if you look at the cost to reach the next node, specifically with monolithic integration, then all of a sudden these heterogeneous integration processes seem really cheap.”

Second, chiplets can be swapped in or out of designs to customize them for specific domains and applications. This allows chipmakers to create designs that are highly targeted for specific applications, to customize similar chips for more specific domains and use cases, to update them without re-creating everything from scratch, and to add more features than would fit on a single chip limited by the size of the reticle.

“As we look forward in terms of having different technology nodes, we can mix and match them together and keep some of the analog stuff in a bit more stable technology than the newest one,” said Swadesh Choudhary, silicon architecture engineer at Intel, during a CHIPCon 2023 panel discussion. “You can integrate different accelerators with the same compute engine and potentially accelerate time to market with custom packages for different applications. You can do this easier with chiplets in a package.”

A third major benefit is that chiplets can significantly speed time to market, even for first-time designs.

“At the end of the day, it is about PPA and time to market,” said Mike Kelly, vice president of chiplets/FCBGA integration at Amkor Technology. “This started in the high end. The data center guys pushed it first, and perhaps the hardest. But it is trickling down into just about every compute class that we see today. It’s certainly in the data center. It’s also in the PC market and phones. And cars are becoming compute-intensive and facing the same constraints as everyone else. Modern nodes are expensive and wafer costs are high. You manage that by breaking out the really high-performance pieces. Is this becoming ubiquitous? Well, it’s a long S curve. But it’s definitely transitioning into new places.”

And for good reasons. Antonio Varas, chief strategy officer at Synopsys, noted at the SEMI event that today only about 35% of chip design projects are on schedule, and about 25% achieve first-time silicon success. Alongside that, demand is increasing by 9% to 11% per year, while supply is increasing 7% to 9%. By 2030, demand is expected to increase by 17%, largely because semiconductors are being used in a variety of new markets and for new applications.
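Compounding those growth rates shows why even a couple of points of annual mismatch matters. The midpoint rates and seven-year horizon below are illustrative assumptions, not Synopsys figures:

```python
# Rough compounding of the supply/demand growth rates cited above
# (demand up 9%-11% per year, supply up 7%-9%). The midpoint rates
# and the seven-year horizon are illustrative assumptions.
demand_growth = 0.10  # midpoint of 9%-11% annual demand growth
supply_growth = 0.08  # midpoint of 7%-9% annual supply growth

demand = supply = 1.0
for year in range(7):  # roughly through 2030 from a 2023 baseline
    demand *= 1 + demand_growth
    supply *= 1 + supply_growth

gap = demand / supply - 1
print(f"After 7 years, demand outruns supply by {gap:.1%}")
```

A 2-point annual mismatch compounds to a double-digit shortfall within the decade, which is the pressure pushing chipmakers toward reuse rather than designing everything from scratch.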

This is where chiplets come into the picture. But to make all this work requires standards at every level — and that’s just for starters.

“You definitely need standards,” said Paul Rousseau, vice president of field technical solutions at TSMC. “That’s the whole idea behind 3DFabric and 3Dblox, and there are multiple levels. One level is on the EDA side or I/Os, where UCIe is emerging as a standard to communicate chip-to-chip. Why would you want to use a different type of I/O, unless you had a huge benefit? The other thing is on the silicon itself or the packaging. We definitely have some envelopes we know are going to work. One of the challenges is everyone comes in with a fancy idea. ‘This is going to be the best thing since sliced bread.’ But that takes huge development time to prove out. So we’re trying to get people onto standard solutions. That’s what we do with our silicon. We have design rules and models we know are going to work. There’s the same goal for packaging. It’s not to reinvent the wheel every time.”

Initially this means more limited choices for commercially available chiplets. But whether it means less optimal designs is harder to determine, because decomposing SoCs into different hardened functions allows design teams to more easily prioritize those functions and partition designs. And if there are standards for how the various chiplets go together and how they are tested, they also may be more reliable over time than one-off designs.

“Somebody’s got to pull all this together and do what I’ll call a high-level design that integrates not just the chiplets, but whatever interconnect or substrate or interposer technology that you’re utilizing, as well,” said Dick Otte, president and CEO of Promex Industries, which is participating in one of the mini-consortia. “Then you get to the third part, which is the assembly services, which is our role. We can do the ones that require physically assembling things, but we would not be a good partner if the design was going to use the RDL approach to generate interconnects, which is a chip-first chiplet version. And then there’s the whole issue of test. These are all relatively independent activities when you get right down to it.”

One of the big changes in chip design is the focus on how data moves through a chip, which is important as the amount of data that needs to be processed continues to grow. This has prompted a slew of changes, such as new materials and different ways of putting devices together. One area of high interest, which almost always prompts questions at industry events, is hybrid bonding. This technology was first implemented in image sensors due to the need to stream video and large images more quickly than standard interconnects allowed. UMC, for example, inked a deal with Cadence in February 2023 to provide a platform that can speed up this process, particularly for mature nodes, which is where many chiplets will be developed.

Some of the fastest computers available today, for example, use components such as commercially available Arm cores. The key there is more about data paths and physical connection to memory, hardware-software co-design, and sparser algorithms for AI/ML. And as computing becomes more distributed, such as cars and portable devices communicating with smart-city infrastructure, the real value may be less about who creates the fastest processor and more about seamless connectivity.

The bigger challenges may be on the business side. “There’s a question of whether this translates to a commercial chiplet market where you can source it cheaply from third parties and integrate it into your design,” said Varas. “You require much more than standard interfaces. There’s a business model question. How do you qualify chiplets? How do you test them? So the standard interfaces will happen. But the second part is not as simple as an evolution of the IP model. It’s way more complicated. The technology might be doable, but it also includes business models, supply chain coordination, and so on.”

In addition, with commercial chiplets there are issues about sharing of data. One of the big advantages that large chipmakers have is the ability to share data within their company so that chiplets can be optimized for the end application or use case. Exchanging data between different companies is far more difficult because companies are extremely concerned about data leakage or theft.

“There’s data security and there’s data sharing,” said Lam’s Fried. “Those are not mutually exclusive, and people have to digest that. We’re starting to break down the barrier of using the cloud for things we were afraid of using the cloud for. What we’re struggling to break down is data sharing for higher value. One example where this works is with airplanes. The owners of those airplanes share maintenance records and data with the aircraft engine manufacturers for their digital twins and digital threads, and they can model failures and predictive maintenance. There are companies in the aerospace industry that are sharing intimate data of their products across all these different companies, and it’s for the better of all those companies. That just isn’t happening much in our industry. It’s happening in banking, like when you swipe your credit card and it does instantaneous checks for fraud. Those models are built on data from multiple different banks, all federated together. This is where we’re failing as an industry, because we’re 10 years behind banking and aerospace in data sharing across the ecosystem.”

Systems vs. chips
There are other challenges on the design side. Integrating chiplets into a package moves the design problem well beyond a single chip. It’s now a collection of chips that need to work together, and which are no longer developed by a single team in one location.

“We’ve moved from designing a chip to designing a system,” said Synopsys’ Varas. “There are three main problems we are dealing with. We have new complexity vectors, which require parallelization of system design. And we have a talent shortage. Today, 60% of the users of EDA are classical semiconductor companies. The other 40% are hyperscalers, startups, and ASIC or IP vendors. Between 2019 and 2022, the number of design starts for advanced chips increased 44%, but the expanding ecosystem also increased fragmentation. There are more choices and complexities, which are increasing disruptions in design.”

ASE’s Li agrees. “Designers need to have different practices than in the old days, which were PCB-centric with GUI-heavy design tools,” she said. “Now they need to design a fan-out or active interposer. So the old package designer now has to manage IC-level tools for LVS (layout vs. schematic), CRV (constrained random verification) test, and probably run some SIPO (serial in/parallel out) analysis. And we need to have the same wavelength as our customers. In addition, the package is getting bigger, and to control the warpage you need innovation in material and the process technology. We need to have known good die, known good modules, and we need to do multiple test insertions to make sure every process is good and that they yield. And lastly, component-level test will not pass anymore, which is why we are adopting and implementing system-level test.”

Chiplets will become pervasive at some point. There simply is not enough volume to support the cost of shrinking everything on a chip, and ultimately companies will focus on what they do best — their so-called secret sauce — and let others develop the components that are not contributing to the competitiveness or differentiation of their products.

The main problems, at least initially, have to do with finding standard ways to integrate chiplets into devices, ensuring they will work as expected over time, and sharing data so the industry can move forward rapidly. These include both business and technology issues, and there are plenty of them. And while big chipmakers are largely going it alone, that will change as costs come under scrutiny and competition levels the playing field for some components. But how quickly all of these factors will change, and what provides the competitive edge, will vary by market, by company, and by new developments in technology, business, and geopolitics that crop up across the industry.

The direction is clear for chiplets. There are enough forces driving it. But the timing, unique challenges, and ecosystem cooperation will remain more difficult to deal with in the near term, and possibly much longer.

Related Reading
Chiplets: 2023 (EBook)
What chiplets are, what they are being used for today, and what they will be used for in the future.
Preparing For Commercial Chiplets
What’s missing, what changes are underway, and why chiplets are increasingly necessary.
