PCIe 5.0: A Key Interface Solution For The Evolving Data Center

Rapidly rising data traffic and ever-greater bandwidth requirements drive the need for new interfaces in the data center.


A great many developments are shaping the evolution of the data center. Enterprise workloads are increasingly shifting to the cloud, whether in hosted or colocation implementations. The nature of workload traffic is changing such that data centers are architected to manage greater east-west (within the data center) communication. New workloads, with AI/ML (artificial intelligence/machine learning) first and foremost among them, are shifting the focus of virtualization from one server running many processes to many processors harnessed to tackle single, massive workloads. Across all these developments, the consistent trend is rapidly rising data traffic and the need for ever greater bandwidth.

PCI Express 5.0 (PCIe 5.0), as the latest generation of the PCIe standard, is one of the key interfaces that will enable the continued advancement of high-speed computing and processing in the data center. Critically, its bandwidth performance provides the necessary speed of connection between the network interfaces of servers and switches. It is also the key interface connection between CPUs and AI accelerators. Further, more storage is moving away from SAS/SATA and towards non-volatile memory express (NVMe) implemented over PCIe.

PCIe 5.0’s predecessor, PCIe 4.0, was first announced in November of 2011, with the final 4.0 specification released in June of 2017. Offering a per-lane data rate of 16 Gigabits per second (Gbps), in a x16 implementation PCIe 4.0 can deliver full duplex aggregate bandwidth of 64 Gigabytes per second (GB/s). But in a world of exponentially rising data traffic, that’s performance that is already falling behind the power curve. With server network interfaces transitioning from 100 Gigabit Ethernet (GbE) to 400 GbE in the not-so-distant future, 64 GB/s just isn’t enough.

PCIe 5.0 doubles the per-lane data rate to 32 Gbps and the resultant full duplex bandwidth of a x16 interface to 128 GB/s, sufficient for 400 GbE links. A 400 GbE link operating at full duplex requires 800 Gbps of bandwidth. Converting to bytes, that’s 100 GB/s of aggregate bandwidth needed, which a x16 PCIe 5.0 interface can support within its performance envelope. But, of course, the demand for bandwidth is insatiable, and the 800 GbE specification announced earlier this year will require another speed upgrade. The PCI-SIG is committed to a 2-year cadence of new generations to advance the performance of the standard in support of that need.
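
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python of the figures above. It uses the headline per-lane rates and ignores 128b/130b encoding and protocol overhead, so the results match the idealized numbers quoted in this article rather than delivered throughput.

```python
# Back-of-the-envelope PCIe bandwidth math (headline figures; 128b/130b
# encoding and protocol overhead are ignored for simplicity).

GBPS_PER_LANE = {"PCIe 4.0": 16, "PCIe 5.0": 32}  # per-lane signaling rate, Gbps
LANES = 16

for gen, gbps in GBPS_PER_LANE.items():
    per_direction_gbs = gbps * LANES / 8      # Gbps -> GB/s, one direction
    full_duplex_gbs = per_direction_gbs * 2   # aggregate of both directions
    print(f"{gen} x{LANES}: {per_direction_gbs:.0f} GB/s per direction, "
          f"{full_duplex_gbs:.0f} GB/s full duplex")

# A 400 GbE port running full duplex moves 2 x 400 Gbps = 800 Gbps = 100 GB/s,
# which fits within the 128 GB/s of a PCIe 5.0 x16 link but not the 64 GB/s
# of a PCIe 4.0 x16 link.
ethernet_400g_full_duplex_gbs = 2 * 400 / 8
print(f"400 GbE full duplex: {ethernet_400g_full_duplex_gbs:.0f} GB/s")
```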

Network bandwidth isn’t the only catalyst driving the adoption of PCIe 5.0. The rapid shift in processing workloads, with AI/ML leading the charge, is having a profound impact. Advanced AI/ML workloads operate on enormous datasets and demand massively parallel, heterogeneous computing, which is why they are being offloaded from the main CPU to a co-processor (AI accelerator), whether a GPU, FPGA, or even a purpose-built ASIC. In turn, heterogeneous computing puts a critical demand for bandwidth on the link between CPUs and AI accelerators – a PCIe 5.0 link in the next generation of AI/ML hardware.
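
For a rough sense of what the doubled link rate means for CPU-to-accelerator traffic, the sketch below estimates how long an idealized x16 link takes to move a payload from host to accelerator. The 10 GB payload size is an arbitrary illustrative assumption, and real transfers incur additional encoding, protocol, and software overhead.

```python
# Rough, idealized estimate of host-to-accelerator transfer time over a
# x16 PCIe link. The 10 GB payload is an arbitrary illustrative figure;
# encoding, protocol, and software overheads are ignored.

PAYLOAD_GB = 10.0
PER_DIRECTION_BANDWIDTH_GBS = {"PCIe 4.0 x16": 32.0, "PCIe 5.0 x16": 64.0}

for link, bandwidth in PER_DIRECTION_BANDWIDTH_GBS.items():
    seconds = PAYLOAD_GB / bandwidth
    print(f"{link}: {PAYLOAD_GB:.0f} GB payload in ~{seconds * 1000:.0f} ms")
```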

If doubling the speed of the PCIe link only entailed a doubling in the complexity of implementation, that would be a pretty good deal. Unfortunately, complexity rises non-linearly with speed, driven in no small part by the growing number of signal and power integrity issues that emerge at higher data rates. Working with an interface IP vendor such as Rambus, designers have access to a high-performance PCIe 5.0 solution that benefits from over 30 years of high-speed signaling expertise and over a decade of implementing PCIe solutions.

Another major area of design complexity is bridging the transition from the mixed-signal to the digital domain. An integrated interface solution of PCIe 5 PHY and digital controller greatly simplifies this challenge for chip designers. Such integrated solutions include Rambus’ PCIe 5 interface, with validated PHY and controller, provided with a complete reference design and test bench for ease of use. It is specification compliant, backward compatible with PCIe 4/3/2/1, supports root port, endpoint, and dual-mode implementations, and offers optional scatter-gather DMA support.
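
For readers less familiar with the term, scatter-gather DMA lets a controller move one logical buffer that is scattered across non-contiguous physical memory by walking a list of descriptors. The sketch below is purely conceptual; the descriptor fields and names are illustrative assumptions and do not represent the programming interface of any particular controller, including Rambus’.

```python
from dataclasses import dataclass

# Conceptual illustration of a scatter-gather list: one logical transfer is
# described as a chain of (address, length) chunks so the DMA engine can
# move a buffer that is not physically contiguous. Field names are
# illustrative only, not any vendor's actual descriptor format.

@dataclass
class SgDescriptor:
    phys_addr: int   # physical address of this chunk
    length: int      # bytes in this chunk

def total_bytes(sg_list):
    """Total payload described by a scatter-gather list."""
    return sum(d.length for d in sg_list)

# A 16 KB logical buffer split across three non-contiguous physical regions.
sg_list = [
    SgDescriptor(phys_addr=0x1000_0000, length=4096),
    SgDescriptor(phys_addr=0x1F00_0000, length=8192),
    SgDescriptor(phys_addr=0x2200_0000, length=4096),
]
print(f"Scatter-gather list moves {total_bytes(sg_list)} bytes "
      f"in {len(sg_list)} chunks")
```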

PCIe 5 is among the key interface technologies needed for the continued advancement of computing and networking performance in the next generation of data centers. With PCIe 5.0 interface solutions, designers can depend on a robust, high-performance platform for implementing their new PCIe Gen 5 ASICs.

Additional Resources:
Video: PCIe 5 Drill-Down with Rambus: Part 1
Video: PCIe 5 Drill-Down with Rambus: Part 2
Video: PCIe 5 Drill-Down with Rambus: Part 3
Web: Rambus PCIe 5.0 SerDes PHY
Web: Rambus PCIe 5.0 Digital Controller
Product Brief: Rambus PCIe 5.0 SerDes PHY
Product Brief: Rambus PCIe 5.0 Digital Controller
Solution Brief: Rambus PCIe 5.0 SerDes Interface Solution


