CXL Signals A New Era Of Data Center Architecture

Moving beyond the classic architecture of the server through disaggregation and composability.


An exponential rise in data volume and traffic across the global internet infrastructure is motivating exploration of new architectures for the data center. Disaggregation and composability would move us beyond the classic architecture of the server as the unit of computing. By separating the functional components of compute, memory, storage and networking into pools, composed on-demand to match the specific requirements of varied workloads, greater performance, higher efficiency and lower total cost of ownership (TCO) could be achieved.

Compute Express Link (CXL), supported by a broad consortium of hyperscalers, equipment OEMs, chip makers and IP suppliers, has emerged as a new enabling technology for interconnecting computing resources. CXL, now at the 2.0 generation, makes possible high-speed, low-latency links with memory cache coherency between processors, accelerators, NICs, memory and storage. It leverages PCI Express 5.0 (PCIe 5.0) for its physical layer, harnessing the standard's tremendous momentum and industry knowledge base.

Rambus has announced the launch of the CXL Memory Interconnect Initiative, spearheading research and development of solutions for a new era of data center architecture. Concurrently, we announced the acquisitions of PLDA and AnalogX to supercharge this initiative (the AnalogX acquisition closed on July 6th; the PLDA acquisition is expected to close in the current calendar quarter). PLDA and AnalogX bring products and engineering talent that expand the Rambus IP portfolio for CXL 2.0 and PCIe 5.0, accelerate our roadmap for next-generation CXL 3.0 and PCIe 6.0 solutions, and provide key building blocks for CXL memory interconnect chips.

PCIe 6.0 doubles the standard's data rate from 32 GT/s to 64 GT/s. In previous generations, data rate doublings were realized through a doubling of the Nyquist frequency. At 32 GT/s, PCIe 5.0 has a Nyquist frequency of 16 GHz with NRZ signaling. Doubling the Nyquist frequency to 32 GHz, however, is impractical: non-linear increases in noise and crosstalk would drive channel losses to 60 dB or more. So PCIe 6.0 moves from NRZ to PAM4 signaling.

With PAM4, the data rate is quadruple the Nyquist frequency, using four signal voltage levels rather than the two of NRZ. That allows PCIe 6.0 signaling to hit 64 GT/s while keeping the Nyquist frequency at 16 GHz. The downside is that the available voltage budget between signal levels falls to only one-third that of NRZ. Timing margin also shrinks with the increased signal transitions of PAM4. With smaller margins, the impact of jitter, crosstalk and all the other contributors to noise is magnified. As a leader in SI/PI, and with nearly two decades of pioneering work in PAM4 signaling, Rambus is ideally positioned to help designers tackle the implementation challenges of PCIe 6.0 and CXL 3.0, which will leverage the PCIe 6.0 physical layer.
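The arithmetic behind these trade-offs can be sketched in a few lines. This is an illustrative calculation only (the function name and structure are our own, not part of any specification): the Nyquist frequency is half the symbol rate, and PAM4 carries two bits per symbol where NRZ carries one.

```python
def nyquist_ghz(data_rate_gts, bits_per_symbol):
    """Nyquist frequency (GHz) for a given data rate (GT/s).

    Symbol rate (GBaud) = data rate / bits per symbol;
    Nyquist frequency = half the symbol rate.
    """
    symbol_rate = data_rate_gts / bits_per_symbol
    return symbol_rate / 2

# PCIe 5.0: 32 GT/s with NRZ (1 bit per symbol)
print(nyquist_ghz(32, 1))  # 16.0 GHz

# PCIe 6.0: 64 GT/s with PAM4 (2 bits per symbol) -- same Nyquist
print(nyquist_ghz(64, 2))  # 16.0 GHz

# A hypothetical 64 GT/s NRZ link would need double the Nyquist
print(nyquist_ghz(64, 1))  # 32.0 GHz

# The cost: PAM4 packs three eyes into the voltage swing that NRZ
# devotes to one, so per-eye voltage margin is roughly one-third.
```

This shows why the move to PAM4 keeps the channel requirements of PCIe 6.0 at PCIe 5.0 levels while doubling throughput, at the cost of voltage and timing margin.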

Two compelling use models enabled by CXL technology are memory expansion and memory pooling. The former offers the flexible addition of memory capacity to a processor beyond that of its main memory channels. Going further, memory pooling enables a many-to-many connection between hosts (processors) and devices (memory nodes), so the capacity available to a processor can be both greatly expanded and finely tailored to its current workload. When no longer needed, this flexible additional memory can be released back to the pool. Promising more performance, higher efficiency and lower TCO, memory pooling steps us toward a fully disaggregated and composable architecture.
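The allocate-then-release lifecycle described above can be illustrated with a toy model. To be clear, this is a conceptual sketch of pooling semantics, not a real CXL fabric manager API; the class and method names are hypothetical.

```python
class MemoryPool:
    """Toy model of CXL-style memory pooling: multiple hosts borrow
    capacity from a shared pool and release it when a workload ends.
    Illustrative only -- real pooling is managed by CXL 2.0 switches
    and fabric management software, not a Python object."""

    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.allocations = {}  # host name -> GB currently borrowed

    def allocate(self, host, gb):
        """Compose extra capacity onto a host, if the pool has it."""
        if gb > self.free_gb:
            raise MemoryError(f"pool has only {self.free_gb} GB free")
        self.free_gb -= gb
        self.allocations[host] = self.allocations.get(host, 0) + gb

    def release(self, host):
        """Return a host's borrowed capacity to the shared pool."""
        self.free_gb += self.allocations.pop(host, 0)

pool = MemoryPool(1024)        # a 1 TB shared memory pool
pool.allocate("host-a", 256)   # expand host-a for a large workload
pool.allocate("host-b", 512)   # tailor host-b independently
print(pool.free_gb)            # 256
pool.release("host-a")         # workload done: capacity flows back
print(pool.free_gb)            # 512
```

The point of the sketch is the elasticity: capacity moves between hosts on demand rather than being stranded on whichever server it was physically installed in.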

The CXL Memory Interconnect Initiative is the latest chapter in Rambus’ 30+ year history of advancing the leading edge of computing performance. It will leverage the company’s expertise in memory and SerDes subsystems, semiconductor and network security, high-volume memory interface chips, and compute system architectures. The addition of PLDA and AnalogX accelerates this endeavor to shape the future of the data center.
