
Data Center Scaling Requires New Interface Architectures

Moving optics closer to silicon to overcome the growing obstacle of power.


You can pick your favorite data points, but the bottom line is that global data traffic is growing at an exponential rate, driven by a confluence of megatrends. 5G networks are making possible billions of AI-powered IoT devices untethered from wired networks. Machine learning's voracious appetite for enormous data sets keeps expanding. Data-intensive video streaming for both entertainment and business applications continues to accelerate, and ADAS and autonomous vehicles add another torrent of data. Nowhere is the impact of all this growth felt more intensely than in the data centers at the heart of the global network.

Ethernet networks interconnect the switch, server and storage devices of the data center, and they are scaling rapidly to meet this tidal wave of data. Ethernet's evolution kicked into high gear when the 25 Gigabit Ethernet Consortium introduced 25G and 50G standards in July 2014. Fast forward to 2020, and the re-branded Ethernet Technology Consortium announced the 800G Ethernet standard in April of this year. That's a 16X increase over the 50G milestone in six years, or more than a 2.5X increase every two years. At that pace, 1.6T Ethernet should make its appearance in late 2022 or early 2023.
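The growth figures above can be sanity-checked with a few lines of arithmetic (50G in 2014 to 800G in 2020):

```python
# Verify the Ethernet scaling figures quoted above: 50G (2014) -> 800G (2020).
start_rate, end_rate = 50, 800      # Gbps
years = 2020 - 2014

total_growth = end_rate / start_rate             # 16x over six years
growth_per_two_years = total_growth ** (2 / years)

print(f"{total_growth:.0f}x over {years} years")         # 16x over 6 years
print(f"{growth_per_two_years:.2f}x every two years")    # ~2.52x every two years
```

The compound rate works out to roughly 2.5X every two years, matching the article's figure.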

At the silicon level, scaling to smaller process nodes has enabled switch ASICs to advance from 12.8 Terabits per second (Tbps), to 25.6T, to the upcoming 51.2T generation. But a critical issue arises with the end of Dennard scaling: power. Under Dennard scaling, each die shrink doubled bandwidth and transistor density while halving per-transistor power, so power per area remained constant. Post-Dennard, power per area rises even as process nodes shrink, making power consumption a dominant factor in system architectural design.
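A toy model makes the Dennard argument concrete. Assuming a classic ~0.7X linear shrink per node and dynamic power proportional to C·V²·f, power density stays flat only as long as supply voltage scales with feature size; the specific numbers below are illustrative, not a process model:

```python
# Illustrative sketch (not a real process model): why power density held flat
# under Dennard scaling but rises once supply voltage stops shrinking.
s = 0.7  # assumed linear feature-size shrink per node

def power_density_change(c_scale, v_scale, f_scale, area_scale):
    # Dynamic power per transistor ~ C * V^2 * f; density grows as 1/area_scale.
    per_transistor = c_scale * v_scale**2 * f_scale
    return per_transistor / area_scale

dennard = power_density_change(c_scale=s, v_scale=s, f_scale=1/s, area_scale=s**2)
post_dennard = power_density_change(c_scale=s, v_scale=1.0, f_scale=1/s, area_scale=s**2)

print(f"Dennard-era power density per node:  {dennard:.2f}x")       # 1.00x (flat)
print(f"Post-Dennard power density per node: {post_dennard:.2f}x")  # ~2.04x (rising)
```

With voltage frozen, every shrink roughly doubles power per area, which is exactly why power dominates post-Dennard architecture decisions.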

The 12.8T switch ASICs used 25G and 50G medium reach (MR) and long reach (LR) SerDes for 25G, 50G and 100G Ethernet links. As switch ASICs migrate to 25.6T and 400G ports, 512 SerDes running at 50G are needed to move data on and off chip. That many MR and LR SerDes burn too much power to be practical. Architecturally, this motivates moving from the electrical domain to the optical domain to keep the power budget in check.

In response, the SerDes for 25.6T ASICs transition to 50G very short reach (VSR) interfaces that link the silicon with on-board or pluggable optical modules. Eight 50G lanes are aggregated for each 400G Ethernet connection. The power savings afforded by the simpler VSR architecture, which eliminates much of the DFE circuitry required for an MR/LR-class SerDes, are substantial. But with the move to 51.2T ASICs and 800G Ethernet, they most likely won't be enough.

In 51.2T ASICs, the SerDes need to scale to 100G so that they all still fit on chip: the lane count holds at 512 while each lane doubles in rate to deliver the doubled bandwidth. These lanes can be aggregated into 64 links of 800G Ethernet. Because signal losses rise with data rate, the 100G VSR circuitry will be more complex and burn more power than the 50G variety. Thus, the architecture will need to evolve to one that moves the optics even closer to the silicon.
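The lane math for the two generations described above can be checked directly; note that the lane count stays fixed at 512 while the per-lane rate doubles:

```python
# Back-of-the-envelope SerDes lane math from the article: each generation keeps
# 512 lanes and doubles the per-lane rate rather than adding lanes.
generations = [
    # (ASIC bandwidth Tbps, SerDes lane rate Gbps, Ethernet port rate Gbps)
    (25.6, 50, 400),
    (51.2, 100, 800),
]

for asic_tbps, lane_gbps, port_gbps in generations:
    lanes = asic_tbps * 1000 / lane_gbps       # lanes needed for full bandwidth
    lanes_per_port = port_gbps / lane_gbps     # lanes aggregated per Ethernet port
    ports = lanes / lanes_per_port
    print(f"{asic_tbps}T: {lanes:.0f} x {lane_gbps}G lanes "
          f"-> {ports:.0f} x {port_gbps}G ports")
    # 25.6T: 512 x 50G lanes -> 64 x 400G ports
    # 51.2T: 512 x 100G lanes -> 64 x 800G ports
```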

Here’s where co-packaged optics (CPO) enters the picture. By integrating 800G optical engines in the same package as the switch ASIC, we can drop the size, complexity and power of the 100G links to extra short reach (XSR) requirements. Using 100G XSR links running at less than 1 picojoule per bit, we can reduce I/O power by more than 80% and switch ASIC thermal design power (TDP) by more than 25% compared to an MR/LR SerDes running at the same speed. This gives us the means to scale to the 51.2T ASICs needed for the next generation of network switches.
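To see what the XSR energy figure means in watts, a rough calculation helps. The 1 pJ/bit XSR number comes from the article; the 5 pJ/bit MR/LR baseline below is an assumption chosen for illustration (it is consistent with an 80% reduction), not a vendor figure:

```python
# Rough I/O power estimate for a 51.2 Tbps switch ASIC.
asic_tbps = 51.2
xsr_pj_per_bit = 1.0       # XSR link energy from the article (upper bound)
mr_lr_pj_per_bit = 5.0     # ASSUMED MR/LR-class energy, for illustration only

# Tbps (1e12 bit/s) * pJ/bit (1e-12 J/bit) -> watts
xsr_watts = asic_tbps * 1e12 * xsr_pj_per_bit * 1e-12
mr_lr_watts = asic_tbps * 1e12 * mr_lr_pj_per_bit * 1e-12
savings = 1 - xsr_watts / mr_lr_watts

print(f"XSR I/O power:   {xsr_watts:.1f} W")     # 51.2 W
print(f"MR/LR I/O power: {mr_lr_watts:.1f} W")   # 256.0 W
print(f"I/O power reduction: {savings:.0%}")     # 80% at the assumed baseline
```

Even at the 1 pJ/bit upper bound, the entire 51.2 Tbps of I/O stays near 50 W, which is what makes the power budget workable at this scale.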

To sum up, with the end of Dennard scaling, architectural innovation is the key to advancing the performance of network devices. Moving optics closer to the silicon, first with optical modules and then with CPO, enables us to overcome the growing obstacle of power. 112G XSR SerDes and CPO make possible network devices with 800G and then 1.6T Ethernet links to handle the growing flood of data traffic.

Additional Information
Website: Data Center and Networking
Website: 5G/Edge
Website: Rambus 112G XSR SerDes PHY
Website: Rambus 800G MACsec


