CXL 3.0: From Expansion To Scaling

Key features enable new use models and increased flexibility in data center architectures.

At the Flash Memory Summit in August, the CXL Consortium released the latest, and highly anticipated, version 3.0 of the Compute Express Link (CXL) specification. This new version of the specification builds on previous generations and introduces several compelling new features that promise to increase data center performance and scalability, while reducing the total cost of ownership (TCO).

CXL was first introduced in 2019 and has evolved rapidly since then. The CXL 1.0/1.1 specification introduced three separate protocols (CXL.io, CXL.cache, and CXL.mem) that enabled prototyping of CXL solutions to address the cloud computing, AI, and analytics megatrends. With the advent of CXL 2.0, new capabilities such as memory pooling, link encryption, and switching were defined that will broaden the deployment of production CXL solutions in disaggregated and heterogeneous systems. CXL 3.0 ushers in a new era of scalability with additional capabilities, such as fabrics and novel device types, that will enable composability beyond the rack.

So, what exactly is new in CXL 3.0? On the interconnect side, there is a huge increase in data rate. CXL 1.x and 2.0 use PCI Express (PCIe) 5.0 for their physical layer, which operates at a data rate of up to 32 gigatransfers per second (GT/s) using NRZ signaling. CXL 3.0 takes things up a notch by adopting the PCIe 6.0 specification, released in January 2022. This doubles the CXL 3.0 data rate to 64 GT/s using PAM4 signaling, with no added latency.
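As a back-of-the-envelope illustration of what the doubled data rate means, the raw one-direction bandwidth of a link is the per-lane transfer rate times the lane count, divided by eight to convert bits to bytes (this sketch ignores encoding and FLIT overhead, so real-world throughput is somewhat lower):

```python
def raw_bandwidth_gbytes(gt_per_s: float, lanes: int) -> float:
    """Raw one-direction link bandwidth in GB/s, ignoring protocol overhead.

    One transfer carries roughly one bit per lane, so:
    GT/s * lanes = Gb/s, and dividing by 8 gives GB/s.
    """
    return gt_per_s * lanes / 8

# CXL 1.x/2.0 on a PCIe 5.0 x16 link:
print(raw_bandwidth_gbytes(32, 16))  # 64.0 GB/s
# CXL 3.0 on a PCIe 6.0 x16 link:
print(raw_bandwidth_gbytes(64, 16))  # 128.0 GB/s
```

The same doubling applies at every link width, so narrower x4 or x8 device links benefit proportionally.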

Another big addition that comes with CXL 3.0 is multi-tiered switching and switch-based fabrics. CXL 2.0 allows for a single layer of switching, with CXL 2.0 switches connecting vertically to upstream hosts and downstream devices, but not supporting connections to other switches. This means that the scale is limited to the available ports on a switch. With CXL 3.0, switch fabrics are enabled, where switches can connect to other switches, vastly increasing the scaling possibilities.
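The scaling difference between a single switch layer and a cascaded fabric can be sketched with simple arithmetic: one switch layer reaches at most its downstream port count, while each additional tier multiplies reach. The port counts below are hypothetical, and the model ignores real topology constraints (host ports, bandwidth oversubscription, CXL 3.0's actual fabric limits):

```python
def max_endpoints(downstream_ports: int, tiers: int) -> int:
    """Upper bound on reachable endpoints through `tiers` cascaded
    switch levels, assuming every downstream port of each switch
    fans out to another identical switch (or, at the last tier,
    to a device)."""
    return downstream_ports ** tiers

# Single switch layer (CXL 2.0-style), 16 downstream ports:
print(max_endpoints(16, 1))  # 16 devices
# Three cascaded tiers of the same switch:
print(max_endpoints(16, 3))  # 4096 devices
```

Even this toy model shows why switch-to-switch connectivity is the key enabler for composability at rack scale and beyond.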

Lastly, CXL 3.0 also allows for improved memory sharing and pooling capabilities. CXL 3.0 introduces peer-to-peer direct memory access and enhancements to memory pooling where multiple hosts can coherently share a memory space on a CXL 3.0 device.
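The pooling idea above can be sketched conceptually: a single memory device exposes a capacity that is carved up among multiple hosts on demand, and capacity freed by one host becomes available to another. This is not a real CXL API, just a minimal model of the allocation bookkeeping a fabric manager might perform:

```python
class MemoryPool:
    """Toy model of a pooled memory device shared across hosts."""

    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations: dict[str, int] = {}  # host name -> GB allocated

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, host: str, size_gb: int) -> bool:
        """Grant `size_gb` to `host` if the pool has room."""
        if size_gb > self.free_gb():
            return False
        self.allocations[host] = self.allocations.get(host, 0) + size_gb
        return True

    def release(self, host: str) -> None:
        """Return all of a host's capacity to the pool."""
        self.allocations.pop(host, None)

pool = MemoryPool(capacity_gb=1024)
pool.allocate("hostA", 256)
pool.allocate("hostB", 512)
print(pool.free_gb())  # 256
pool.release("hostA")
print(pool.free_gb())  # 512
```

Coherent sharing in CXL 3.0 goes a step further than this model: rather than each host owning a disjoint slice, multiple hosts can map and cache the same region, with hardware keeping their views consistent.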

These three key features enable new use models and increased flexibility in data center architectures. This facilitates the move to distributed, composable architectures and higher performance levels for AI/ML and other compute-intensive or memory-intensive workloads.

CXL technology continues to enable game-changing innovations for the modern data center. Rambus is proud to be an active contributing member of the CXL Consortium and a developer of CXL System on Chip (SoC) and IP solutions that will shape the data center of the future.
