PCIe 7.0: Speed, Flexibility & Efficiency For The AI Era

Immense data throughput is required for processing the massive training datasets upon which large language models are based.

As the industry came together for PCI-SIG DevCon last month, one thing took center stage: PCI Express 7.0. While the specification is still in the final stages of development, the world is certainly ready for this significant new milestone of PCIe. Let's look at how PCIe 7.0 is poised to address the escalating demands of AI, high-performance computing, and emerging data-intensive applications.

The rapid evolution of the PCIe specification over the past couple of years has been driven by several factors. The surge in AI applications, or more specifically generative AI, calls for faster and more efficient data transfer capabilities across the computing landscape. These applications demand immense data throughput for processing the massive training datasets upon which large language models (LLMs) are based.

Wider industry trends, such as disaggregated computing, have also played a role in influencing the specification’s development. Disaggregation involves separating compute, storage, and other resources across PCIe buses, extending endpoint-to-CPU connectivity across servers or even server racks. This trend enhances flexibility and scalability in data centers but requires robust, high-speed interconnects like PCIe to maintain optimal performance.

So, what exactly is new with PCIe 7.0? The biggest update is, of course, the data rate, which has doubled from 64 GT/s in PCIe 6.0 to 128 GT/s in PCIe 7.0. In addition, while previous generations of PCI Express have relied solely on copper interconnects, PCIe 7.0 will also offer an optical interconnect option, which can advance the path to data center disaggregation. Optical interconnects can carry PCIe signals much farther than copper cables and at lower latency, making it easier to share resources among multiple servers across the data center. The ability to distribute and share storage, acceleration, and memory drives efficiencies, eliminating redundancy and the need to over-provision resources for worst-case compute scenarios, thereby reducing costs.
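To put that doubling in perspective, here is a back-of-envelope sketch of raw link bandwidth for a x16 configuration. The per-lane figures are the nominal rates for each generation; the calculation ignores flit, FEC, and other protocol overheads, so delivered throughput will be somewhat lower.

```python
# Back-of-envelope PCIe raw bandwidth calculator (a sketch, not a spec figure:
# flit encoding, FEC, and protocol overheads are not accounted for).

GT_PER_LANE = {
    "PCIe 5.0": 32,   # NRZ signaling
    "PCIe 6.0": 64,   # PAM4 signaling
    "PCIe 7.0": 128,  # PAM4 signaling
}

def raw_bandwidth_gbytes(generation: str, lanes: int = 16) -> float:
    """Raw one-direction bandwidth in GB/s: (GT/s per lane * lanes) / 8 bits per byte."""
    return GT_PER_LANE[generation] * lanes / 8

for gen in GT_PER_LANE:
    per_dir = raw_bandwidth_gbytes(gen, lanes=16)
    print(f"{gen} x16: ~{per_dir:.0f} GB/s per direction, ~{2 * per_dir:.0f} GB/s bidirectional")
```

For a x16 link, this works out to roughly 256 GB/s per direction, or about 512 GB/s bidirectionally, for PCIe 7.0, double the corresponding PCIe 6.0 figures.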

Despite the enormous boost in data rate, PCIe 7.0 retains many of the architectural features introduced with PCIe 6.0, including PAM4 signaling and the 256-byte flit mode. PAM4 is key to enabling a data rate of 128 GT/s. Pulse Amplitude Modulation (PAM) allows more bits to be transmitted per symbol on a serial channel. In PCIe 7.0, this translates to two bits per unit interval using four amplitude levels (00, 01, 10, 11), whereas PCIe 5.0 and earlier generations used NRZ signaling with one bit per unit interval and two amplitude levels (0, 1).
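A toy sketch of that difference is below, assuming the natural-binary level mapping listed above; a real PHY also applies Gray coding, scrambling, and forward error correction, none of which is modeled here.

```python
# Toy illustration of NRZ vs. PAM4 symbol mapping (not the PCIe 7.0 PHY).

NRZ_LEVELS = {"0": 0, "1": 1}                       # 1 bit per symbol, 2 levels
PAM4_LEVELS = {"00": 0, "01": 1, "10": 2, "11": 3}  # 2 bits per symbol, 4 levels

def to_pam4_symbols(bits: str) -> list:
    """Pack a bit string into PAM4 symbols, two bits per unit interval."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

bits = "1101001011100001"
print(to_pam4_symbols(bits))          # 8 PAM4 symbols carry 16 bits
print([NRZ_LEVELS[b] for b in bits])  # NRZ needs 16 symbols for the same payload
```

The point of the comparison: for the same payload, PAM4 needs half as many symbols as NRZ, which is how the per-lane bit rate doubles without doubling the channel's symbol rate.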

PCIe 7.0 also continues to prioritize data security and integrity, an extremely important consideration given the immense value surrounding AI data. Features such as Integrity and Data Encryption (IDE) and the Trusted Execution Environment (TEE) Device Interface Security Protocol (TDISP) ensure that data transmitted across the entire PCIe network remains secure from the server host to endpoint devices, whether it passes through switches, retimers, or other components. TDISP, in conjunction with IDE, defines how selective IDE streams and their respective encryption keys are managed to secure the end-to-end interconnect between endpoint and host systems. This is essential for secure PCIe connectivity in a disaggregated data center.
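IDE builds on AES-GCM authenticated encryption to protect traffic on the link. The snippet below is a minimal sketch of that underlying idea using Python's cryptography package; it is not the IDE packet format or the TDISP key-management flow, and the payload and header values are purely illustrative.

```python
# Minimal sketch of authenticated encryption, the primitive behind IDE.
# NOT the PCIe IDE wire format or TDISP key exchange; illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in IDE, keys are provisioned per stream
aead = AESGCM(key)

nonce = os.urandom(12)                     # unique per message
payload = b"example TLP payload bytes"     # hypothetical payload to encrypt
header = b"example header, authenticated but sent in the clear"

ciphertext = aead.encrypt(nonce, payload, header)   # encrypt payload + MAC over header
plaintext = aead.decrypt(nonce, ciphertext, header) # fails if anything was tampered with
assert plaintext == payload
```

The useful property for a disaggregated fabric is that both confidentiality and integrity are verified end to end: any modification of the ciphertext or the authenticated header in transit causes decryption to fail.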

PCIe 7.0 features will be key to supporting the next generation of computing advancements, driving forward the industry’s ability to deliver the data connectivity and processing capabilities needed for the AI era. To help customers transition to the latest generation of PCIe, Rambus has recently announced a new portfolio of PCIe 7.0 IP solutions. This portfolio includes a PCIe 7.0 Controller (host or endpoint), a PCIe 7.0 Switch IP, and a PCIe 7.0 Retimer IP. These products are designed to support a wide range of applications, from high-performance computing clusters to edge computing devices.


