Die-to-Die Interconnects for Chip Disaggregation

How to speed up communication between dies in a package.

Data is growing at an unprecedented pace: datasets once measured in petabytes are heading toward zettabytes. That growth translates into a need for considerably more compute power and far more bandwidth to process all that data. In networking, high-speed SerDes PHYs are the linchpin for moving data quickly back and forth across data centers.

In turn, demand is increasing for faster processors and for the benefits of smaller process nodes such as 7 nanometer (nm). But chip manufacturing and design costs rise with each node. At 7 nm, a mask set can cost upwards of $10 million, making even a single re-spin a critical swing factor in the overall cost of a chip design.
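To make that swing factor concrete, here is a toy cost model in Python. The $10 million mask-set figure is the one cited above; the assumption that every re-spin requires a full new mask set is a deliberate simplification for illustration.

```python
# Toy model of mask spend at an advanced node. The $10M mask-set figure is
# from the article; assuming each re-spin needs a full new mask set is an
# illustrative simplification.
MASK_SET_COST_M = 10.0  # $M per full 7 nm mask set

def mask_cost_m(respins: int) -> float:
    """Total mask spend after a given number of re-spins."""
    return MASK_SET_COST_M * (1 + respins)

for r in range(3):
    print(f"{r} re-spin(s) -> ${mask_cost_m(r):.0f}M in masks alone")
```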

A brainchild of DARPA, the idea of chiplets has been around since the 1980s. It is attracting considerable attention now, however, as the industry is forced to look at alternatives beyond traditional monolithic solutions. One approach under investigation is assembling chiplets on a substrate to reduce the cost of complex semiconductor solutions.

Mark LaPedus of Semiconductor Engineering aptly described chiplets: “The basic idea is that you have a menu of modular chips, or chiplets, in a library. Then, you assemble chiplets in a package and connect them using a die-to-die interconnect scheme. In theory, the chiplet approach is a fast and less expensive way to assemble various types of third-party chips, such as I/Os, memory and processor cores, in a package.”

More specifically, as shown in Fig. 1, the chiplet approach partitions a large ASIC into smaller components, each with its own dedicated functionality, such as memory, I/O, or analog functions. The result is a simpler ASIC surrounded by supporting blocks, connected to them through a die-to-die interface. A second example is splitting an SoC into modular dies and moving the SerDes IP onto a separate die, with die-to-die interfaces between the SoC and the SerDes chiplets.


Figure 1: Partitioning a large ASIC into chiplets connected by die-to-die interfaces.

Both use cases bring major advantages. First, breaking a large SoC into smaller dies improves yield, because the probability of a fatal defect grows with die area. Second, it lets companies design modularly and create variants of a known-good product.
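To make the yield argument concrete, here is a minimal sketch using the simple Poisson yield model, Y = exp(-A * D0). The defect density and die sizes are illustrative assumptions, not figures from the article.

```python
import math

D0 = 0.005  # assumed defect density, defects per mm^2 (illustrative)

def poisson_yield(area_mm2: float, defect_density: float = D0) -> float:
    """Expected fraction of defect-free dies: Y = exp(-A * D0)."""
    return math.exp(-area_mm2 * defect_density)

monolithic_mm2 = 600.0  # one large SoC (assumed size)
chiplet_mm2 = 150.0     # the same logic split into four smaller dies

print(f"600 mm^2 monolithic yield: {poisson_yield(monolithic_mm2):.1%}")  # ~5%
print(f"150 mm^2 chiplet yield:    {poisson_yield(chiplet_mm2):.1%}")     # ~47%
```

Because defective chiplets can be screened out before assembly (known-good die testing), the smaller dies’ higher yield translates directly into less wasted silicon.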

Die-to-die interconnect IP is the key to a successful multi-die solution, and several design considerations shape an optimal implementation. One key requirement is low power, since any power spent on the interconnect is overhead. Others include high throughput, to enable high-bandwidth data transfers, and area efficiency, especially along the die’s beachfront (the edge available for I/O). Finally, some applications demand very low latency. A rough sizing sketch follows below.
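The snippet below estimates how much beachfront a die-to-die PHY consumes and how much power it burns at a given efficiency. The per-lane rate and lane pitch are illustrative assumptions, not vendor data; the ~1 pJ/bit target is the one discussed below.

```python
import math

# Rough beachfront and power budget for a die-to-die PHY. Lane rate and
# per-lane edge pitch are assumptions for illustration only.
LANE_RATE_GBPS = 112   # assumed per-lane SerDes rate
LANE_PITCH_MM = 0.25   # assumed die-edge width consumed per lane
PJ_PER_BIT = 1.0       # efficiency target discussed in the article

def beachfront_mm(target_gbps: float) -> float:
    """Die-edge length needed to reach a target aggregate bandwidth."""
    lanes = math.ceil(target_gbps / LANE_RATE_GBPS)
    return lanes * LANE_PITCH_MM

def link_power_w(target_gbps: float) -> float:
    """Link power at the assumed efficiency (Gbps * pJ/bit = mW)."""
    return target_gbps * PJ_PER_BIT / 1000.0

for bw in (400, 800, 1600):
    print(f"{bw} Gbps -> {beachfront_mm(bw):.2f} mm of beachfront, "
          f"{link_power_w(bw):.2f} W")
```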

There are a few choices for ultra-short-reach die-to-die IP implementations, but SerDes interconnects are a popular approach, particularly for applications such as multi-terabit (12 Tbps+) Ethernet switches and photonics integration. To reach the multi-terabit range, you need roughly eight lanes of 100+ Gbps SerDes per interface, at roughly a picojoule per bit of efficiency.
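The arithmetic behind that claim, as a quick check (the 12.8 Tbps switch capacity is one illustrative value within the article’s “12 Tbps+” range):

```python
LANES = 8                 # lanes per die-to-die interface
GBPS_PER_LANE = 112       # "100 Gigabit plus" per lane
PJ_PER_BIT = 1.0          # ~1 pJ/bit efficiency

interface_gbps = LANES * GBPS_PER_LANE                  # 896 Gbps per interface
interface_watts = interface_gbps * PJ_PER_BIT / 1000.0  # ~0.9 W per interface

switch_tbps = 12.8        # illustrative multi-terabit switch capacity
interfaces = switch_tbps * 1000 / interface_gbps        # ~14 interfaces

print(f"Per interface: {interface_gbps} Gbps at {interface_watts:.2f} W")
print(f"A {switch_tbps} Tbps switch needs ~{interfaces:.0f} such interfaces")
```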

To deliver this level of throughput, the Optical Internetworking Forum (OIF) is working on a standard for more than 100 Gbps per lane. The work in progress is known as CEI-112G-XSR (Common Electrical I/O). According to OIF’s website, “This project will develop Implementation Agreements (IA) specifications for die-to-die (D2D) and die-to-OE (D2OE) electrical I/O interfaces which can be used to support Nx112G I/O links with significantly reduced power, complexity, and enhanced throughput density.”


