While PCIe 4.0 took a long time to get here, there are big benefits ahead.
PCI Express (Peripheral Component Interconnect Express), also known as PCIe, is a high-speed serial computer expansion bus standard designed to replace older PCI, PCI-X and AGP bus standards. Officially launched in 2003, PCIe was rapidly adopted by chip, system and software designers and emerged as the dominant interface standard for connecting peripherals to the CPU.
Modern CPUs rely on the following primary interconnect types: memory interconnects, primarily supported by DDR4 today; high-speed chip-to-chip cache-coherent interconnects, typically supported by proprietary standards; and low-speed links such as USB and SATA for low-level management and configuration. For almost everything else, there is PCIe. Easily scalable via multi-lane links, PCIe has always been backwards compatible and is extensively supported by all modern operating systems, software and drivers. Not surprisingly, PCIe has been widely deployed throughout the data center, enterprise and client PC markets.
The advent of cloud computing has demanded that data centers continuously increase compute power by adding faster CPUs with ever more cores. Data centers have also adopted newer processing techniques, deploying GPUs and accelerators to service emerging machine learning, artificial intelligence and deep learning workloads. These kinds of use-cases require higher-performance processing mated with higher-performance storage, all with minimal latency – a paradigm that demands interconnects capable of optimally feeding processing capabilities.
At launch, PCIe 1.0 supported a transfer rate of 2.5 Gbps per lane. Subsequent upgrades, released approximately every 3-4 years, doubled bandwidth each time (PCIe 2.0 at 5 Gbps in 2006, and PCIe 3.0 at 8 Gbps in 2010). However, this cadence was abandoned following the rollout of PCIe 3.0. In fact, there was a 7-year gap before speeds reached 16 Gbps, with the PCIe 4.0 specification publicly released in 2017, arguably 4 years late. Multiple CPU, storage, accelerator and network adaptor vendors have already developed PCIe 4.0-compatible products in anticipation of broad deployment. This ramp is imminent with the public release of the specification and extensive ecosystem support.
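The doubling cadence above can be checked with a back-of-the-envelope calculation. A minimal sketch in Python: it converts each generation's raw per-lane signaling rate into usable bandwidth by applying the line-encoding overhead defined in the specifications (8b/10b for generations 1 and 2, the more efficient 128b/130b for generations 3 and 4), then scales to a x16 link.

```python
# Effective per-lane and x16 throughput for each PCIe generation.
# Raw rates (GT/s) and encoding efficiencies per the published specs:
# gens 1-2 use 8b/10b encoding, gens 3-4 use 128b/130b.
GENS = {
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
}

def lane_throughput_gbps(raw_gt_s: float, efficiency: float) -> float:
    """Usable bandwidth of one lane in Gbit/s after encoding overhead."""
    return raw_gt_s * efficiency

for gen, (rate, eff) in GENS.items():
    per_lane = lane_throughput_gbps(rate, eff)
    x16_gb_s = per_lane * 16 / 8  # 16 lanes, bits -> bytes
    print(f"PCIe {gen}: {per_lane:.2f} Gbps/lane, ~{x16_gb_s:.1f} GB/s x16")
```

Note how the switch to 128b/130b encoding at PCIe 3.0 means the effective bandwidth roughly doubled even though the raw rate rose only from 5 to 8 GT/s; PCIe 4.0 at 16 GT/s lands at roughly 31.5 GB/s for a x16 slot.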
There are two immediate technologies that will benefit from the proliferation of PCIe 4.0: the adoption of next-generation NVMe storage technologies and the deployment of GPU/FPGA accelerators in the data center.
NVMe is a non-volatile memory interface standard that connects SSDs to the host over PCIe. Emerging NVMe storage devices are saturating existing PCIe 3.0 interfaces and will only achieve optimal performance with the increased bandwidth offered by PCIe 4.0.
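To see why a fast SSD can saturate its link, consider a typical x4 NVMe connection. A quick sketch of the arithmetic (both PCIe 3.0 and 4.0 use 128b/130b encoding, per the specifications):

```python
# Usable bandwidth of a x4 link in GB/s, given the raw per-lane rate.
# 128b/130b encoding applies to both PCIe 3.0 (8 GT/s) and 4.0 (16 GT/s).
def x4_bandwidth_gb_s(raw_gt_s: float) -> float:
    """4 lanes, encoding overhead applied, bits converted to bytes."""
    return raw_gt_s * (128 / 130) * 4 / 8

print(f"PCIe 3.0 x4: ~{x4_bandwidth_gb_s(8.0):.2f} GB/s")
print(f"PCIe 4.0 x4: ~{x4_bandwidth_gb_s(16.0):.2f} GB/s")
```

A PCIe 3.0 x4 link tops out near 3.9 GB/s, a ceiling that fast flash controllers already approach; moving the same x4 link to PCIe 4.0 roughly doubles the headroom to about 7.9 GB/s.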
Meanwhile, accelerator hardware such as GPUs and FPGAs, which is already resorting to proprietary interconnects to address bandwidth bottlenecks, will be able to leverage a high-performance industry-standard interface, resulting in seamless compatibility across platforms. Moreover, complementary standards initiatives such as CCIX aim to add cache coherence, a capability required for efficient accelerator offload, to the ubiquitous PCIe transport layer.
PCI-SIG, the standards body that governs PCIe, is accelerating the development and release of the PCIe 5.0 specification, likely to address unabated market demand and the delay in ratifying PCIe 4.0. PCI-SIG has also developed a cabled technology called OCuLink to connect PCIe devices, thereby enabling new out-of-the-box compute and storage use-cases.
Innovations such as these guarantee PCIe’s relevance to compute and storage infrastructure and cement its continuing role in defining the data center of the future.
The PCIe expansion bus standard shows no sign of standing still, and I remain impressed by the evolution of computing.