An open, plug-and-play chiplet ecosystem still faces significant hurdles in interconnect standardization and packaging.
Chiplets are big business, and that business is growing. The total chiplet market today is roughly $40 billion annually.
Chiplets account for roughly 15% of TSMC’s revenue and about 25% of all DRAM.
All of the major AI/HPC semiconductor companies (NVIDIA, AMD, Marvell, Broadcom) and the major hyperscalers (Amazon, Google, etc.) are looking to chiplets to build superior solutions. And, at least so far, chiplets remain a technology for big players.
And while some of those chiplets will be developed by smaller chipmakers for well-defined sockets, the chiplet market is not an open, plug-and-play market. The only open-market chiplet in production volume today is HBM. Logic chiplets are full-custom for each high-volume application, and none are UCIe-compatible.
That hasn’t affected their usefulness, however. All data-center-class AI products are built with chiplets, as are data-center-class processors, FPGAs, and Apple’s Mac processors. Collectively, these applications drive almost all chiplet volume today.
There are several main reasons these products use chiplets:
The volumes of these products are huge, so their chiplets can be optimized for each specific application.
The “big player” chiplets are optimized for silicon interposers, which are essentially metal interconnect built on very large dies in 40nm to 65nm processes. Interposers still cost up to $1,000 each, but the benefits big players get from chiplets and the high value of their products allow them to cover the cost.
Silicon interposers are too expensive for other applications. Advanced packaging is expensive, too, and available capacity is consumed by the big players.
There have been multiple efforts to create an open chiplet ecosystem, but significant challenges remain. Interconnect standards are spotty and confusing; even UCIe comes in multiple flavors. And it’s not just the physical layer: the link and logical layers need to be interoperable as well. Add to that the cost of developing a library of chiplets for a wide variety of applications and use cases.
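To make the layering point concrete, here is a minimal sketch of why two chiplets can share a compatible physical layer yet still fail to plug together if their link or logical layers differ. The descriptor fields and values are hypothetical illustrations, not drawn from UCIe or any other actual specification.

```python
from dataclasses import dataclass

# Hypothetical, simplified descriptor of a die-to-die interface stack.
# Field names and values are illustrative only; they do not come from
# the UCIe specification or any other real standard.

@dataclass(frozen=True)
class D2DInterface:
    phy: str      # physical layer: bump map, signaling, data rate
    link: str     # link layer: flit format, flow control, retry
    logical: str  # logical/protocol layer: e.g., memory vs. streaming semantics

def interoperable(a: D2DInterface, b: D2DInterface) -> bool:
    """Two chiplets only plug together if every layer matches."""
    return a.phy == b.phy and a.link == b.link and a.logical == b.logical

gpu_chiplet = D2DInterface(phy="standard-package PHY", link="flit-64",  logical="streaming")
io_chiplet  = D2DInterface(phy="standard-package PHY", link="flit-256", logical="streaming")

# Same physical layer, different link layer: not plug-and-play.
print(interoperable(gpu_chiplet, io_chiplet))  # False
```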
And given the unlikelihood that silicon interposers will become much cheaper, the mass market will require chiplets that are compatible with standard packaging. That means organic substrates that can handle high data rates and offer better routability. Today, their interconnect density (wires per mm or bumps per mm²) is about an order of magnitude lower than that of silicon interposers, but alternatives are in development.
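As a rough back-of-the-envelope illustration of that gap, the sketch below computes wires per mm and bumps per mm² from line/space and bump-pitch values. The numbers are assumed, round figures chosen only to show the approximate 10x difference; real values vary by foundry, substrate supplier, and package class.

```python
# Illustrative comparison of routing density: silicon interposer vs. organic substrate.
# All pitch values below are assumptions for illustration only.

def wires_per_mm(line_um: float, space_um: float) -> float:
    """Routable wires per mm of die edge, per routing layer."""
    return 1000.0 / (line_um + space_um)

def bumps_per_mm2(pitch_um: float) -> float:
    """Bumps per mm^2 for a square bump array at the given pitch."""
    return (1000.0 / pitch_um) ** 2

# Assumed: ~1/1 um line/space and ~45 um microbump pitch for a silicon
# interposer vs. ~10/10 um line/space and ~130 um bump pitch for an
# advanced organic substrate.
print(f"Si interposer:     {wires_per_mm(1, 1):.0f} wires/mm, {bumps_per_mm2(45):.0f} bumps/mm^2")
print(f"Organic substrate: {wires_per_mm(10, 10):.0f} wires/mm, {bumps_per_mm2(130):.0f} bumps/mm^2")
# Roughly 500 vs. 50 wires/mm and ~490 vs. ~60 bumps/mm^2 -- about the
# order-of-magnitude gap described above.
```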
A minimum set of chiplets designed for standard packaging, with standard interfaces (physical, link, and logical) and rigorous testing to ensure low yield loss in packaging, must be available as well. This won’t happen any time soon. The cost to develop a chiplet is high, and the return on investment is unattractive until there is a volume market. It’s the classic chicken-and-egg problem.
What is most likely is that a few large companies will develop attractive chiplets for their own needs and then partner with other large companies that will integrate them into their own solutions. There are some early indications this will happen. NVIDIA plans to supply GPU chiplets to MediaTek for integration into the latter’s automotive SoCs. Tenstorrent will supply AI and compute chiplets to LG for TV and automotive products. And Alphawave and Credo have high-speed SerDes chiplets available or in development.
When combining chiplets from more than one supplier, there will be interoperability and yield issues that must be resolved. But this starts the path toward plug-and-play interoperability. It will take years to smooth out, but eventually it will happen. The path forward is clear; the timing is a little fuzzy.