100G Ethernet IP For Edge Computing

Finding the speed sweet spot to keep the price point of edge deployments manageable.

The presence of Ethernet in our lives has paved the way for the emergence of the Internet of Things (IoT). Ethernet connects everything around us and beyond, from smart homes and businesses to industries, schools, and governments. The technology is even found in our vehicles, facilitating communication among on-board devices. Ethernet has enabled high-performance computing data centers, accelerated industrial processes and commerce, and can be found in households worldwide. Yet despite the advances in Ethernet technology, with the rise of 800G Ethernet and the standardization of 1.6T Ethernet, speeds above 100G remain a rarity in edge computing. This article explores how 100G Ethernet enables edge computing and describes applications and design challenges for IP designers.

Speed requirements for edge computing

“The Edge” refers to any source of data that ultimately feeds a data center or cloud processing paradigm. Examples include cameras and sensors, mobile devices, many types of vehicles, routers and switches, and even smart appliances with processing and data collection/sharing capabilities. Although it may seem counterintuitive, the edge is both fuzzy and dynamic – if data aggregation or processing is occurring on that perimeter, we are talking about edge computing. Edge devices have proliferated at a meteoric pace, driven by machines, sensors and meters, mobile and wearable devices, and the continuing adoption of AI in transportation, home, and metropolitan technologies. According to Vantage Market Research, “Global Edge Computing Market is valued at USD 7.1 Billion in 2021 and is projected to reach a value of USD 49.6 Billion by 2028 at a CAGR (Compound Annual Growth Rate) of 38.2% over the forecast period 2022-2028.” The devices involved come in many form factors and architectures, but let’s look at an individual server as representative of them.


Fig. 1: Paths to the cloud from the edge.

Servers typically use a shared PCIe bus to attach network interface cards (NICs), and computers using PCIe 3.0 were the first generation with a bus fast enough, at 8 GT/s per lane, to support 100G Ethernet adapters over an x16 link (roughly 16 GB/s, or about 128 Gb/s, per direction). With PCIe 4.0, an 8-lane slot supports a 100G adapter at full speed. That is a sweet spot for today’s machines because x8 slots are usually available on a PCIe bus. Even with the upcoming generation of PCIe 5.0/CXL 1.1 or 2.0 systems, a 100G data rate is a comfortable fit on a shared PCIe bus, unless designers are trying to accelerate parallel computation with maximum bandwidth and minimal latency for inter-process communication (IPC), as they do in HPC clusters.


Table 1: PCIe speeds as a function of version and lane count (Total BW shown is bidirectional)
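
As a rough sanity check of the bandwidth arithmetic above, the short sketch below estimates usable unidirectional PCIe bandwidth per generation and lane count, accounting only for 128b/130b encoding (link-layer and protocol overheads are ignored), and flags whether each hypothetical slot can feed a 100G NIC.

```python
# Rough, illustrative PCIe bandwidth calculator; numbers are approximate and
# ignore TLP/DLLP protocol overhead, so treat results as ballpark figures only.

RAW_GT_PER_LANE = {"3.0": 8.0, "4.0": 16.0, "5.0": 32.0}  # GT/s per lane
ENCODING_EFFICIENCY = 128 / 130                            # 128b/130b for Gen3 and later

def unidirectional_gbps(gen: str, lanes: int) -> float:
    """Approximate usable unidirectional bandwidth in Gb/s for a PCIe link."""
    return RAW_GT_PER_LANE[gen] * ENCODING_EFFICIENCY * lanes

for gen, lanes in [("3.0", 16), ("4.0", 8), ("5.0", 4)]:
    bw = unidirectional_gbps(gen, lanes)
    verdict = "supports" if bw >= 100 else "falls short of"
    print(f"PCIe {gen} x{lanes}: ~{bw:.0f} Gb/s unidirectional -> {verdict} a 100G NIC")
```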

Edge devices are generally designed to pre-process, compress, and reduce the amount of data that needs to be transferred upstream. Even if you had enough post-processed data to fully utilize a 100G connection at an individual server, it all still needs to be aggregated for data center-facing traffic across a concentrating set of routers and switches. Those architectures cannot service many simultaneous connections at full bandwidth unless their uplinks run at a significant multiple of the individual port speeds. For example, a 32-port 100G Ethernet switch needs to send all of that traffic upstream. Link Aggregation Control Protocol (LACP) can be used to bond multiple ports into a single logical uplink, but even that protocol is limited to eight ports per bond. Using LACP on a fixed-radix switch quickly drives up the cost of infrastructure and cabling by rapidly reducing the number of downstream connections the device can provide. Wi-Fi connections are individually well below 1 Gb/s, and even cellular 5G theoretically peaks at 20 Gbps, so 100G at the aggregation layer serves those markets well.
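
The trade-off is easy to quantify. The sketch below works through a hypothetical fixed-radix switch in which some front-panel ports are given up to form an LACP uplink bond; the port counts and speeds are illustrative examples, not a product configuration.

```python
# Illustrative oversubscription math for an edge aggregation switch.
# All port counts and speeds are hypothetical examples.

def oversubscription(down_ports: int, down_gbps: int, up_ports: int, up_gbps: int) -> float:
    """Ratio of downstream to upstream capacity; 1.0 would be non-blocking."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

TOTAL_PORTS = 32                              # fixed-radix switch with 32 x 100G ports
UPLINK_PORTS = 8                              # LACP allows at most eight ports per bond
DOWNLINK_PORTS = TOTAL_PORTS - UPLINK_PORTS   # ports left over for edge devices

ratio = oversubscription(DOWNLINK_PORTS, 100, UPLINK_PORTS, 100)
print(f"{DOWNLINK_PORTS} downstream ports remain; oversubscription is {ratio:.1f}:1")
```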

Automotive applications rarely need more than 10 to 25G Ethernet within the vehicle, but they do require many of the optional quality-of-service (QoS) and time-sensitive networking (TSN) features not yet found in higher-speed Ethernet specifications. If a network is shared between vehicle control systems, such as brakes, and an entertainment system, it is important to prioritize vehicle control even when your kids are watching an engaging video. Time-sensitive networking features, soon to be supported at 100G, enable aggregation on industrial floors, audio-visual applications, security, healthcare, and even high-end automotive applications at the edge!
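
On Linux-based endpoints, one way this prioritization surfaces to application code is the socket priority, which can be mapped to the 802.1Q PCP field that TSN-capable switches use to schedule traffic classes. The sketch below is a minimal illustration only; the priority values and the assumption of a suitably configured VLAN/TSN network are hypothetical.

```python
# Minimal sketch of marking traffic classes from an application (Linux-specific).
# Assumes the egress interface maps socket priority to 802.1Q PCP (e.g., via a VLAN
# egress-qos-map) and that the switch honors those priorities; values are examples.

import socket

CONTROL_PRIORITY = 6        # e.g., vehicle-control or safety-critical traffic
INFOTAINMENT_PRIORITY = 1   # e.g., streaming video for the back seat

def make_prioritized_udp_socket(priority: int) -> socket.socket:
    """Create a UDP socket whose outgoing frames carry the given priority."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, priority)
    return s

control_sock = make_prioritized_udp_socket(CONTROL_PRIORITY)
video_sock = make_prioritized_udp_socket(INFOTAINMENT_PRIORITY)
```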

Another advantage that 100G Ethernet provides, as opposed to its higher-speed counterparts, is support for all the required and many of the optional features specified by IEEE standards, such as:

  • All required features of the base IEEE 802.3/802.3ba standard
  • IEEE 802.3 standards for 10/25/40/50/100G Ethernet systems
  • IEEE 802.3br parameters for Interspersing Express Traffic
  • IEEE 802.1 TSN features
  • IEEE 1588 Precision Clock Synchronization Protocol
  • IEEE 802.1Qav for Audio Video (AV) traffic
  • Energy Efficient Ethernet (EEE) as specified in IEEE 802.3az

100G Ethernet is currently the fastest Ethernet speed that can be sustained over a single lane. The third generation of 100G Ethernet, using a single 100 Gb/s lane, was published in December 2022 as IEEE 802.3ck, along with 200G and 400G Ethernet using two and four of those lanes respectively, and is supported as 100GBASE-CR for twinax up to 2 m and 100GBASE-KR for electrical backplanes. At the other end of the reach spectrum, the 100GBASE-ZR standard can carry 100G Ethernet more than 80 km over a dense wavelength-division multiplexing (DWDM) system using a single wavelength! For more cost-effective options, multi-lane architectures such as a four-lane configuration of 25G NRZ SerDes provide a reliable transport medium.
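
A compact way to see the trade space is that every 100G variant is simply lanes × per-lane rate = 100 Gb/s. The sketch below lists a few commonly cited configurations; the PMD names and line codes are given as examples and should be confirmed against the relevant IEEE 802.3 clauses.

```python
# Illustrative (not exhaustive) 100G Ethernet lane configurations.
# Names and line codes are commonly cited examples; verify details against IEEE 802.3.

LANE_CONFIGS = [
    # (PMD family,        lanes, Gb/s per lane, line code)
    ("100GBASE-CR1/KR1",   1,    100,           "PAM4"),  # single-lane variants per 802.3ck
    ("100GBASE-CR2/KR2",   2,    50,            "PAM4"),
    ("100GBASE-CR4/KR4",   4,    25,            "NRZ"),   # cost-effective 4 x 25G option
]

for name, lanes, per_lane, code in LANE_CONFIGS:
    assert lanes * per_lane == 100, "each configuration must total 100 Gb/s"
    print(f"{name}: {lanes} x {per_lane}G {code}")
```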

Security is important for all network environments, but it is particularly critical at the edge, where 100G Ethernet fully supports MACsec – aka IEEE 802.1AE. MACsec is a link-layer encryption mechanism implemented in hardware that protects data in flight, helping prevent data theft and supporting compliance with privacy laws. MACsec can also prevent rogue devices from connecting to a network, a critical protection for an edge environment that may be both unmanaged and unmonitored. Each connection on an Ethernet network (host to host, host to switch, or switch to switch) will carry both encrypted and unencrypted traffic if control over encryption is imposed at higher layers, but once MACsec is enabled for a link, all traffic on that connection is secured from prying eyes.
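
Conceptually, MACsec wraps each frame's payload in an authenticated-encryption envelope (GCM-AES), with the Ethernet header authenticated but left in the clear so switches can still forward the frame. The Python sketch below illustrates that idea only; it is a simplified stand-in, not a conformant 802.1AE implementation, and the key handling, nonce construction, and frame fields are placeholders.

```python
# Simplified illustration of MACsec-style protection using AES-GCM.
# NOT a conformant IEEE 802.1AE implementation: SecTAG layout, SCI/packet-number
# handling, and key management are all reduced to placeholders here.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=128)   # in real MACsec, keys come from MKA (802.1X)
aead = AESGCM(key)

eth_header = bytes.fromhex("ffffffffffff" "001122334455") + b"\x88\xe5"  # dst, src, MACsec EtherType
payload = b"telemetry frame from an edge sensor"

nonce = os.urandom(12)                                  # stand-in for SCI + packet number
ciphertext = aead.encrypt(nonce, payload, eth_header)   # header authenticated, payload encrypted
assert aead.decrypt(nonce, ciphertext, eth_header) == payload
```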

Lastly, the cost per port goes up dramatically at the bleeding edge of high-speed Ethernet technology, and the cabling required for ultra-high-speed Ethernet makes edge devices that much more expensive. These factors conspire to make 100G the ideal top-end match for all but the most cutting-edge computing applications, which in turn has created a huge market, at both the consumer and professional levels, for 100G Ethernet products – switches and routers, NICs and cables – and the resulting competition has helped keep the price point manageable for edge deployments.

If you are developing products like NICs, switches and/or routers for the edge market, Synopsys offers a complete solution for 100G Ethernet IP: MAC, PCS, and a full range of PHY options along with verification IP, software development and IP prototyping kits. Beyond the edge, Synopsys also offers high-speed Ethernet IP up to 800G today and we are working with the various standards groups to enable 1.6T going forward.


