The SerDes – Terabit Ethernet Connection

Ethernet is moving faster than ever, presenting a distinct set of challenges for SerDes designers.


400 Gigabit Ethernet (400GbE) and 200 Gigabit Ethernet (200GbE) are currently slated for official release by the IEEE P802.3bs Task Force in December 2017. Although there is not yet an official IEEE roadmap detailing what lies beyond 400GbE, doubling to 800GbE will likely become a reality when single-lane 112Gbps links hit the market. This technology will allow for larger lane bundles, providing 1 TbE or 1.6 TbE links with 10 or 16 lanes, respectively.
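The lane arithmetic behind these bundles is straightforward. A minimal sketch, assuming each 112Gbps electrical lane nets roughly 100Gbps of usable payload after FEC and encoding overhead (a simplifying assumption, not a figure from any specific standard):

```python
# Lane-aggregation arithmetic for hypothetical beyond-400GbE links.
# Assumes each 112 Gbps SerDes lane carries ~100 Gbps of payload
# after FEC/encoding overhead -- an illustrative simplification.

PAYLOAD_PER_LANE_GBPS = 100  # assumed net rate per 112 Gbps lane

def link_rate_gbps(lanes: int) -> int:
    """Aggregate payload rate for a bundle of SerDes lanes."""
    return lanes * PAYLOAD_PER_LANE_GBPS

for lanes in (4, 8, 10, 16):
    print(f"{lanes:2d} lanes -> {link_rate_gbps(lanes)} GbE")
# 8 lanes give 800GbE; 10 and 16 lanes give the 1 TbE and
# 1.6 TbE bundles mentioned above.
```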

As Andreas Bechtolsheim, chairman of Arista Networks, noted earlier this year, network bandwidth has long been the bottleneck for companies such as Amazon, Facebook and Google. According to Bechtolsheim, the three companies are moving to 100GbE connections in 2017 and will start buying 400GbE systems by 2019. It is therefore important for the current generation of 56Gbps SerDes PHYs to meet the long-reach backplane requirements for the industry transition to 400GbE applications. This allows the SerDes PHY to scale to speeds as fast as 112Gbps, which are required in the networking and enterprise segments, such as enterprise server racks that are moving from 100GbE to 400GbE and beyond.

Ethernet is clearly moving faster than ever, due to the rise of Big Data, the Internet of Things (IoT) and other trends that place increasingly high demands on communication channels. In fact, there is already an industry forum for 112Gbps SerDes, which will be critical in driving an 800GbE standard forward. Perhaps not surprisingly, supporting faster Ethernet speeds presents a distinct set of challenges for SerDes designers. In addition to maintaining signal integrity, engineers must contend with a strict set of design requirements, such as architecting next-generation silicon within the same power envelope – without the support of Dennard Scaling.

Although smaller process nodes facilitated by Moore’s Law are obviously beneficial, there are numerous design tradeoffs, specifically around area and power, that should be carefully considered. To be sure, one of the biggest costs involved in running a data center is electricity consumption, so there has always been a significant emphasis on keeping the power low for such links – even as they are targeted for higher speeds.

The ideal paradigm for a high-speed, next-generation SerDes that supports Terabit Ethernet is one that maximizes performance with the lowest power draw and smallest area. However, a more realistic scenario would see compromises on performance, power and area, with trade-offs based on specific product requirements. For example, engineers may want to consider designing a shorter-reach SerDes that is cheaper in power and area and relies on additional on-board components, such as re-timers, to stay within the same budget. In addition, the channel can be made smooth enough to relax some SerDes equalization, improving power and area efficiency. Moreover, extra PLLs/VCOs can be eliminated, while the merits and extent of backwards compatibility should be assessed.
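A first-order way to frame the long-reach-versus-retimer trade-off is to compare total link power under each option. The sketch below is purely illustrative: the per-end SerDes and retimer power figures are hypothetical placeholders, not measured silicon data, and real budgets would also account for area, cost and latency.

```python
# Hypothetical first-order power comparison for two link architectures.
# All milliwatt figures are illustrative placeholders, not real data.

def link_power_mw(serdes_mw_per_end: float, retimers: int = 0,
                  retimer_mw: float = 0.0) -> float:
    """Total link power: two SerDes endpoints plus any on-board retimers."""
    return 2 * serdes_mw_per_end + retimers * retimer_mw

# Option A: long-reach SerDes with heavy equalization, no retimer.
option_a = link_power_mw(serdes_mw_per_end=600)

# Option B: shorter-reach SerDes with lighter equalization, one retimer.
option_b = link_power_mw(serdes_mw_per_end=350, retimers=1, retimer_mw=400)

print(f"long-reach SerDes:       {option_a:.0f} mW")
print(f"short-reach + retimer:   {option_b:.0f} mW")
```

Under these assumed numbers the retimer option comes out ahead, but flipping the placeholders flips the conclusion – which is exactly why such trade-offs must be driven by the specific product requirements.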

In conclusion, Ethernet speeds are fast approaching 800GbE and beyond. Since SerDes PHYs will be playing an important part in the development of terabit Ethernet, system architects should carefully consider the pros and cons of design tradeoffs around area and power. Deciding on a specific set of tradeoffs for high-speed serial links may very well be one of the most stressful parts of the entire design process, although it is critical for maximizing operational efficiency at 400GbE and beyond.