Meeting 112G SerDes-Based System Design Challenges

Space limitations, co-packaged optics, and signal integrity in Ethernet switches for hyperscale data centers.


The need for higher-bandwidth networking equipment and connectivity in cloud and hyperscale data centers is driving the switch technology transition from 25Tb/s (terabits per second) to 51Tb/s, and soon to 100Tb/s. The industry has chosen Ethernet to drive the switch market, using 112G SerDes or PHY technology today and 224G SerDes in the future. This article describes how designers can overcome design challenges such as power, area, packaging, signal integrity, power integrity, and 800G Ethernet implementation for HPC systems using 112G Ethernet PHY IP.

Design challenges

Area and power

Reducing power and area while transitioning to more advanced process technologies, from 7nm to 5nm to 3nm, becomes a key focus as the use of lower-power modulation, such as PAM-4, and high-speed SerDes technology, such as the 112G Ethernet PHY, increases. In addition, yield issues limit die size. Denser integration of components in the Ethernet switch SoC is required to maintain the same footprint, since server boxes and compute boxes must fit in the same chassis and rack units, as seen in figure 1.

Fig. 1: Space limitations in data center server racks and ToR switch SoCs.

However, such dense integration of SoC components increases power and requires expensive cooling systems. All of this makes area, power, and latency key metrics and challenges for high-density switches. These factors also affect performance: switch SoCs incorporate hundreds of lanes, making system-level performance more important than the performance of any single SerDes.
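
To see why lane count dominates these system-level tradeoffs, a back-of-envelope sketch helps. The snippet below is illustrative only: it assumes roughly 100Gb/s of usable throughput per 112G SerDes lane, PAM-4 signaling at 2 bits per symbol, and the nominal 25.6/51.2/102.4 Tb/s bandwidths behind the rounded switch capacities cited above.

```python
# Back-of-envelope lane-count and symbol-rate arithmetic (illustrative only).
# Assumes ~100 Gb/s of usable throughput per 112G SerDes lane and PAM-4
# signaling (2 bits per symbol).

def serdes_lanes(switch_tbps: float, lane_gbps: float = 100.0) -> int:
    """Number of SerDes lanes needed to carry a given switch bandwidth."""
    return round(switch_tbps * 1000 / lane_gbps)

def pam4_symbol_rate_gbaud(line_rate_gbps: float) -> float:
    """PAM-4 carries 2 bits per symbol, halving the symbol rate versus NRZ."""
    return line_rate_gbps / 2

for tbps in (25.6, 51.2, 102.4):
    print(f"{tbps:6.1f} Tb/s switch -> {serdes_lanes(tbps)} lanes")

print(f"112 Gb/s PAM-4 symbol rate ~= {pam4_symbol_rate_gbaud(112):.0f} GBd")
```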

Transition to co-packaged optics

Data center optics are also evolving to support higher bandwidth networking demands. Both the optics and the ASICs have to address area, power, and latency challenges within the switch-optics interconnect and minimize the switch-optics electrical I/O power consumption. Figure 2 shows the evolution of power in pluggable optics, which is the technology of choice today.

Fig. 2: Optics power per bit is decreasing significantly. Source: Market Focus: The Road to 800G and Beyond – Arista Networks

Various SerDes architectures – very short reach (VSR) and direct drive (eliminating the DSP) – are addressing the power challenge in switches and optical modules. In next-generation data centers, ultra-high-speed pluggable optics with VSR PHYs on the host side will consume less power than medium- or long-reach PHYs. For that reason, the idea of co-packaged optics (CPO) with VSR PHYs (consuming 2.5-3 pJ/b) placed close to the switch SoC is evolving. Proofs of concept for CPO at 12Tb/s or 25Tb/s are available today, with 51Tb/s in the pilot phase and 100Tb/s expected in volume deployment soon. Long-reach PHYs on the switch interface – either co-packaged or directly driving the optical components – can also reduce power by eliminating retimers. An emerging technology for optics connectivity is 2.5D/3D silicon photonics, which is enabling a range of optical modules from high-density pluggables (OSFP-XD) to CPOs. SerDes IP providers are engaged with the ecosystem to continue addressing the power challenge.
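
To put the pJ/b figures in perspective, a minimal estimate of the switch-side electrical I/O power is simply throughput multiplied by energy per bit. The sketch below uses the 2.5-3 pJ/b VSR range quoted above and a 51.2Tb/s switch; it is an illustration, not a measured figure.

```python
# Rough electrical I/O power estimate (illustrative only): power = throughput x energy/bit.
# Conveniently, Tb/s multiplied by pJ/b works out directly to watts.

def io_power_watts(throughput_tbps: float, energy_pj_per_bit: float) -> float:
    """Aggregate SerDes electrical I/O power for a given throughput and energy per bit."""
    return throughput_tbps * energy_pj_per_bit  # (1e12 b/s) * (1e-12 J/b) = W

for pj_per_bit in (2.5, 3.0):  # VSR range quoted above
    print(f"51.2 Tb/s at {pj_per_bit} pJ/b -> {io_power_watts(51.2, pj_per_bit):.0f} W")
```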

Signal integrity

Minimizing risk factors that impact time-to-market is a critical goal for SoC designers, and overcoming system signal integrity challenges is among them. High-speed signals at 100Gbps must have the smallest possible crosstalk (xtalk) impact on one another while escaping the die edges. Adding package layers is one solution, but it raises cost. To meet the high-speed SerDes xtalk specification while minimizing the number of escape layers and the beachfront size, designers must optimize the high-speed signal path routing through the package. Package designers and signal integrity experts must collaborate with SerDes designers to create the SerDes bump map, perform routing studies, and run high-frequency simulations to validate the xtalk specification.

51Tb/s switches and AI accelerators will need 112G SerDes or PHYs placed on all die edges, and in multiple stacks, due to die size limitations. Package escape studies are required for both north/south (N/S) and east/west (E/W) orientations, since the signals escape in different directions. Moreover, designers need to consider double stacking of macros, and nearby power and ground planes and their impedance must be taken into account.
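
One common way to roll many aggressor contributions up against a crosstalk budget is a power sum. The sketch below illustrates the idea with hypothetical coupling values; it is not taken from any specific package study, and the actual specification check belongs in the high-frequency simulations described above.

```python
import math

def power_sum_xtalk_db(couplings_db):
    """Power-sum of individual aggressor couplings (dB) into a single victim lane."""
    return 10 * math.log10(sum(10 ** (c / 10) for c in couplings_db))

# Hypothetical NEXT/FEXT couplings (dB) from neighboring lanes at the frequency of interest
aggressors_db = [-45.0, -48.0, -50.0, -52.0]
print(f"Power-sum crosstalk: {power_sum_xtalk_db(aggressors_db):.1f} dB")
# Compare the result against the SerDes crosstalk specification for the package escape.
```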

Designers must also:

  • Create a power distribution network (PDN) for multiple SerDes lanes (512 lanes for a 51Tb/s switch) with different supplies (digital and analog)
  • Perform power integrity simulations assuming all PHYs are switching simultaneously in mission mode
  • Validate the supply AC ripple and max./min. DC specification limits for the SerDes using AC PDN analysis and transient simulation
  • Perform PDN design what-if analysis with PDN-sharing RL model
  • Keep the shared (common-path) DC resistance on the package and PCB as low as possible
  • Perform IR drop analysis with the package and PCB
  • Keep the PCB low-pass filter (LPF) DC resistance, along with the PDN DC resistance, as low as possible

Stacking multiple macros with limited metal layers might require spacing, or channels, between the macros, and digital logic can be placed in those channels. SoC implementers need to provide a robust power structure and deliver adequate power across the channels to minimize IR drop issues. Full-chip IR drop analysis early in the design phase will expose any weak power grids in the channels. Changes in the power structure and digital logic placement made to fix IR drop can affect design partitioning and may even change the chip floorplan. Hence, early analysis is important to reduce any schedule impact.
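
As a first-order complement to full-chip analysis, a lumped DC budget check is often useful: the current drawn by the lanes sharing a rail, multiplied by the common package/PCB path resistance, compared against the minimum supply specification at the bumps. The sketch below uses entirely hypothetical values.

```python
# First-order DC IR-drop budget check for a shared SerDes supply (all values hypothetical).

def supply_after_drop(n_lanes: int, amps_per_lane: float, shared_path_mohm: float,
                      nominal_v: float) -> float:
    """Supply voltage at the bumps after the drop across the shared package/PCB path."""
    return nominal_v - n_lanes * amps_per_lane * shared_path_mohm * 1e-3

# Example: 32 lanes sharing one rail, 0.1 A per lane, 1.5 mOhm shared path, 0.75 V nominal
v_bump = supply_after_drop(n_lanes=32, amps_per_lane=0.1, shared_path_mohm=1.5, nominal_v=0.75)
print(f"Voltage at bumps: {v_bump:.3f} V, meets 0.72 V min spec: {v_bump >= 0.72}")
```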

Ethernet MAC, PCS, PHY implementation

400G and 800G Ethernet implementations will need multiple PCSs, MACs, and PHYs. The SoC designer can implement dies with or without macro stacking, taking die edge and core area limitations into account. These die tiles can be N/S and E/W orientation-dependent or orientation-agnostic; a single tile for both orientations is possible with efficient block partitioning. What-if analysis of block partitioning and optimized individual block sizes provides the flexibility to reuse the blocks around all edges of the die. If timing issues are found early in the design phase, improvements such as pipelining between blocks that are far apart can be made without compromising latency. Figure 3 illustrates a single 800G Ethernet die tile implementation.

Fig. 3: Gaps between the top X4 macro and bottom flipped X4 macro appear if PCS and MAC are placed in between them for timing closure flexibility.

The above implementation might not be feasible for high-speed signal escapes on the north and south die edges. Various floorplan tryouts that take months of trial and error, such as placing the individual blocks in the required channels and minimizing the core die area, contribute to schedule delays. A top-down approach with a specified bounding box is becoming essential because designs now have hundreds of lanes and limited die area and beachfront. A tile-like implementation can ensure reusability and seamless integration on all die edges.
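
For a tile such as the one in figure 3, the lane and macro counts follow directly from the port rate. The sketch below assumes roughly 100Gb/s per 112G lane and four lanes per X4 PHY macro, consistent with the figure; it is illustrative rather than a description of any particular product.

```python
import math

def tile_composition(port_gbps: float, lane_gbps: float = 100.0, lanes_per_macro: int = 4):
    """Lanes and X4 PHY macros needed for one Ethernet port tile (illustrative)."""
    lanes = math.ceil(port_gbps / lane_gbps)
    macros = math.ceil(lanes / lanes_per_macro)
    return lanes, macros

for port_gbps in (400, 800):
    lanes, macros = tile_composition(port_gbps)
    print(f"{port_gbps}G tile -> {lanes} lanes, {macros} X4 PHY macros")
```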

The path forward

112G SerDes or PHY technology is driving the next generation of compute, storage, and networking innovations in cloud data centers for high-performance computing and AI/ML. Ethernet switch SoC designers implementing 112G SerDes or PHY technology must consider a slew of critical metrics and challenges, such as power, area, latency, die stacking, signal integrity, power integrity, and implementation, all of which add to designers’ already tight design schedules.

With the silicon-proven, PAM-4 112G Ethernet PHY in advanced FinFET nodes, along with PCS, MAC, and AI/ML-driven EDA tools, Synopsys is enabling SoC designers to achieve the best power, performance, area and latency, while addressing system reliability, power integrity and signal integrity.

Synopsys has performed all the required work, such as package escape studies; PHY, SRAM, PCS, and MAC placement optimization, including partitioning and floorplanning; pin placement; place and route; timing closure; and signoff electromigration/IR drop analysis, helping users successfully tape out large-scale SoCs with hundreds of lanes of 112G SerDes instantiations. Synopsys can deliver such a comprehensive solution by leveraging its logic libraries, memory compilers, EDA tools, system solutions such as 3DIC, integrated third-party tools like Apache/Redhawk, and close collaboration with PHY, MAC, and PCS designers, as well as implementation and system experts. Synopsys provides integration-friendly deliverables for the 112G Ethernet PHY, PCS, and MAC with expert-level support, which makes customers’ lives easier by reducing design cycles and helping bring products to market faster.


