Chiplets are the new buzzword, and for good reason.
By Dr. Carlos Macián, senior director AI Strategy & Products, eSilicon Corporation
“It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected.” — Gordon Moore, 1965
“Chiplet” has become a buzzword and, like most of its kind, the success of the buzzword predates the widespread availability of the product by a large margin. And yet the idea is so conceptually sound and attractive: how could it fail to become reality when everybody wants it? Well, because it does not make economic sense. Not in its pure form, anyway. Let me explain.
The idea of chiplets holds great promise and is often associated with the open-source movement, empowering both big and small companies to innovate and create new products. It closely follows the Lego model: take common but expensive SoC subsystems, such as the CPU or the I/O, place each on its own separate chip (the chiplet), add a chip-to-chip interface (such as NVIDIA’s NVLink, Intel’s AIB or eSilicon’s High-Bandwidth Interconnect physical layer (HBI™ PHY)) and commercialize them standalone. Customers can then use these “building blocks” alongside their own proprietary ASICs to quickly and cost-effectively assemble their large SoCs. Given that virtually every SoC contains a CPU, if you standardize a couple of widespread configurations and offer them on the open market for a few bucks apiece, you save customers both the complexity and the expense of buying, integrating and hardening the IP themselves. You will sell them by the millions! A true no-brainer.
Even more importantly, chiplets reduce the risk associated with IP porting: if a critical piece of IP, such as the SerDes in networking applications, has proven trustworthy in an older node, you can continue to use it indefinitely instead of having to port it to every new node your core logic moves to. Formidable.
So what is preventing their success? Well, the very meaning of ASIC: application-specific IC. Not generic, but specific. In its 20 years of existence, eSilicon has taped out well over 300 ASICs. To the best of my knowledge, no two of them ever had exactly the same CPU configuration: a different number of cores, larger or smaller data and cache memories, a different selection of peripheral interfaces and support modules, and so on. The cost of developing, productizing and supporting one of these chiplets easily runs into the several millions of dollars, even in mature nodes. How many different CPU configurations would we need to produce to satisfy the market? How big is our financial gamble? What do we do when a new generation of CPUs comes out, a new peripheral becomes popular or a new interface needs to be added? Who truly benefits from the investment? The customer, but not the vendor, who carries all the risk. So if you are the chiplet vendor, why bother?
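To put the economics in perspective, here is a minimal back-of-the-envelope sketch. The NRE, unit price and margin figures are illustrative assumptions chosen only to make the argument concrete; they are not eSilicon data.

```python
# Illustrative break-even estimate for a standalone, open-market CPU chiplet.
# All figures are assumptions for the sake of argument, not vendor data.

nre_cost = 5_000_000        # development, productization and support (USD)
unit_price = 4.0            # "a few bucks apiece" on the open market (USD)
gross_margin = 0.30         # fraction of the unit price the vendor keeps

margin_per_unit = unit_price * gross_margin
break_even_units = nre_cost / margin_per_unit
print(f"Units needed just to recover the NRE: {break_even_units:,.0f}")
# Roughly 4.2 million units -- per CPU configuration, before any profit.
```

Multiply that by however many configurations the market actually demands, and the size of the vendor’s gamble becomes apparent.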
Figure 1: AMD Zen 2 EPYC processor. Source: TechPowerUp, “AMD 7nm EPYC ‘Rome’ CPUs in Upcoming Finnish Supercomputer, 200,000 Cores Total,” December 15, 2018.
However, there is a different scenario where chiplets do make sense: inside large companies with extensive product families, because there the customer and the vendor are one and the same. In a large company such as AMD, families of similar products like the EPYC processor line are common. These processors share a common architecture and a similar set of components, albeit in different numbers or combinations. The specification of the products, present and future, is known and under the control of the customer-vendor. Hence, building these components as interoperable chiplets makes sense and allows for a vastly accelerated and de-risked roadmap. By extension, joint ventures between OEMs and ASIC companies also make sense: the OEM provides the chiplet spec and buys the product, while the ASIC company designs and manufactures it.
Figure 2: AMD Zen 2 and Vega 20. Source: MCPRO, “AMD responds to the challenges of the professional sector,” November 14, 2018.
eSilicon has participated in a number of such initiatives (see Figure 3 as an example), always hand in hand with a named OEM in the networking, high-performance computing (HPC) or artificial intelligence (AI) space. eSilicon has provided the IP, the ASIC design and the manufacturing expertise. We even specified (in agreement with the OEM), designed and suggested the standardization of the chip-to-chip interface. The chiplet model works well in this context.
Figure 3: eSilicon and chiplets: A recent example.
There are other complexities associated with the chiplet model at the technical level. For starters, it dictates a 2.xD packaging approach, which is both costlier and more sophisticated than regular flip-chip packaging. The assembly process itself is more complicated, with significant mechanical challenges (e.g., excessive warping requiring new stiffener-ring materials, the need for dummy dies in free spaces for mechanical stability, and limits on maximum and minimum die-to-die spacing) that significantly constrain the layout and aspect ratio of the system in package (SiP).
Manufacturability is not the only challenge. These complex systems need an enhanced design-for-test (DFT), test and bring-up plan. One critical concern is identifying known good dies before assembly, because the cost of detecting a defect once the system is assembled is extremely high, given the number of dies involved and the costly package (the sketch after this paragraph gives a rough sense of why). Latency between components that would otherwise sit on the same die is another factor that must be taken into account when designing the system architecture. And so on and so forth. For all their benefits in die re-use, IP de-risking and accelerated roadmaps, chiplets also represent a step up in system sophistication.
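As a rough illustration of the known-good-die argument, the sketch below treats the assembled system-in-package yield as the product of the individual die yields. The per-die yield and die count are illustrative assumptions, not data from any real program.

```python
# Illustrative known-good-die (KGD) argument: without screening dies before
# assembly, the yield of the assembled SiP is roughly the product of the
# individual die yields. The figures below are assumptions for illustration.

die_yield = 0.95   # assumed probability that any single unscreened die is good
num_dies = 5       # assumed SiP, e.g., one logic die plus four memory stacks

sip_yield = die_yield ** num_dies
print(f"Assembled SiP yield without KGD screening: {sip_yield:.1%}")
# ~77%: nearly one in four expensive packages would be scrapped, which is why
# identifying known good dies before assembly is so important.
```

The same arithmetic shows how quickly the penalty grows as more dies are added to the package.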
Conversely, eSilicon has also occasionally joined efforts with consortia in the open-market camp, such as the Open Compute Project. OCP founders hoped to create a movement in the hardware space that would bring about the same kind of creativity and collaboration we see in open source software, with the mission to design and enable the delivery of the most efficient server, storage and data center hardware designs available for scalable computing. Chiplets are but one of the avenues by which OCP expects to accomplish this.
In summary, chiplets are the new buzzword, and for good reason. Die re-use, IP de-risking and accelerated roadmaps are all compelling reasons to pay attention. So is the possibility of standardized, Lego-like dies that could put silicon product development within reach of a larger number of companies. The most promising model at the time of writing, however, is the joint collaboration in which OEMs specify and buy the product while ASIC companies design, manufacture and test it.