
The Evolution Of Ethernet To 800G And MACsec Encryption

Keeping the benefits of Ethernet’s flexibility while protecting data in motion.


Ethernet is a frame-based data communication technology that employs variable-sized frames to carry a data payload. This contrasts with long-haul Optical Transport Networks (OTN), which use fixed-sized frames. Standard Ethernet frames range from a minimum of 64 bytes up to 1518 bytes (carrying a 1500-byte payload), and in the case of jumbo frames, up to roughly 9K bytes. A frame is a Layer 2 data container carrying the physical (MAC) addresses of the ports; a packet is a Layer 3 encapsulation of upper-layer protocol data with the routing information needed to send the data to any IP address. Packet size is determined by the application as well as the TCP/IP stack configuration, and may vary significantly depending on the target deployment (streaming, storage, telecom, etc.).
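As a rough illustration, the standard frame-size bounds follow from the header, payload, and checksum fields (the jumbo-frame limit is a vendor convention rather than a standard, with ~9000 bytes being common):

```python
# Illustrative Ethernet frame-size arithmetic (untagged frames).
HEADER = 14        # destination MAC (6) + source MAC (6) + EtherType (2)
FCS = 4            # frame check sequence (CRC-32)
MIN_PAYLOAD = 46   # payloads are padded up to this minimum
MAX_PAYLOAD = 1500 # standard MTU

min_frame = HEADER + MIN_PAYLOAD + FCS   # 64 bytes
max_frame = HEADER + MAX_PAYLOAD + FCS   # 1518 bytes

print(min_frame, max_frame)  # 64 1518
```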

Variability in frame size comes with extra complexity. Ethernet frame transfer requires additional overhead for frame start/end synchronization, besides data error checking (CRC). Given the additional complexity, why use variable frames? In a word, flexibility. Flexibility has allowed Ethernet to take on innumerable network configurations, sizes and use cases. Since its invention by Robert Metcalfe in 1973, Ethernet has progressed from its original 2.94 Mbps data rate to its latest incarnation of 800 Gbps, a 272,000X speed increase. Today, Ethernet is the ubiquitous networking technology spanning the desktop to the network core.

Ethernet’s journey to modernity

In its early days, Ethernet was a shared network architecture with devices on the network listening before they talked (carrier sense) and a back-off mechanism if multiple devices talked at the same time (collision detect). Ethernet’s Carrier Sense Multiple Access with Collision Detect (CSMA/CD) protocol was a foundational innovation, but the shared structure meant that Ethernet networks couldn’t deliver full-rate bandwidth between devices. Modern Ethernet equipment, on the other hand, is designed to be non-blocking, or as close to it as possible. Non-blocking means that the internal bandwidth in a switch or router can handle the aggregate bandwidth of all its ports, so nearly full line rate can be achieved when connecting any two devices. This is an essential property that drives the modern data center, enterprise and telecom performance that we experience daily.
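The non-blocking property reduces to a simple inequality: the switch fabric’s internal bandwidth must be at least the sum of all port bandwidths. A minimal sketch (the port count and fabric bandwidth below are hypothetical):

```python
def is_non_blocking(port_speeds_gbps, fabric_bw_gbps):
    """True if the fabric can carry all ports at full rate simultaneously."""
    return fabric_bw_gbps >= sum(port_speeds_gbps)

# Hypothetical 32-port 400G switch with a 12.8 Tbps fabric
ports = [400] * 32
print(is_non_blocking(ports, 12_800))  # True
```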

In the past decade, the industry started rolling out 40G, then quickly transitioned to 100G, and is now ramping up the deployment of 400G equipment. The move to 400G Ethernet was a complex one, given the goals set by the key hyperscale data center operators. 400G is required not only to deliver its advertised throughput, but also to achieve a new level of efficiency so that it can be realized within small pluggable optical modules. As such, the 400G evolution has a number of phases driven by the availability of advanced silicon and photonics technologies. It started with eight 56G SerDes aggregated to achieve a 400G link, and is currently moving to four 112G SerDes. The advent of 112G SerDes allows building efficient 800G ports (with eight 112G SerDes lanes) as envisioned in the recently announced 800G Ethernet specification by the Ethernet Technology Consortium.
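The lane arithmetic above can be sketched as follows. Note the nominal SerDes rates (56G, 112G) include encoding and FEC overhead; the usable payload rate per lane is roughly 50G and 100G respectively, which is what the port aggregation works out to:

```python
# Port rate from SerDes lane aggregation, using the usable payload rate
# per lane (~50G for a 56G lane, ~100G for a 112G lane after overhead).
def port_rate_gbps(lanes, payload_per_lane_gbps):
    return lanes * payload_per_lane_gbps

print(port_rate_gbps(8, 50))   # 400  (eight 56G SerDes -> 400GbE)
print(port_rate_gbps(4, 100))  # 400  (four 112G SerDes -> 400GbE)
print(port_rate_gbps(8, 100))  # 800  (eight 112G SerDes -> 800GbE)
```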

Protecting data in motion with MACsec

MACsec is a data protection protocol that is widely deployed to protect high-speed communication over Ethernet networks. To achieve full line rate for modern applications (ranging from 10 to 800 Gbps), MACsec must be fully implemented in hardware, typically close to the Ethernet port of the device. That can be inside the Ethernet network interface card (NIC), in the Ethernet PHY ASIC, or integrated into the switch ASIC. In all situations, whether back-to-back minimum-size packets or a bulk stream of large packets, data in motion must be protected with MACsec at full rate without any compromise. For example, 800G Ethernet can transfer up to 1.19 billion minimum-size packets per second.
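The 1.19 billion packets-per-second figure follows from simple wire arithmetic: each minimum-size (64-byte) frame occupies 84 bytes on the wire once the 8-byte preamble/SFD and the 12-byte minimum inter-frame gap are counted:

```python
# Worst-case packet rate for 800G Ethernet with minimum-size frames.
LINE_RATE = 800e9          # bits per second
WIRE_BYTES = 64 + 8 + 12   # frame + preamble/SFD + min inter-frame gap

pps = LINE_RATE / (WIRE_BYTES * 8)
print(f"{pps / 1e9:.2f} billion packets/s")  # 1.19 billion packets/s
```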

For data protection, MACsec employs AES-GCM, a high-speed and scalable encryption/decryption cipher with integrity protection. Despite being scalable, it is built on a block cipher (AES), like other NIST-approved data protection algorithms. The data to be protected is split into 16-byte blocks, which are then fed into the cipher for encryption/decryption and integrity protection; incomplete blocks are zero-padded before cryptographic processing. Because Ethernet frames vary in size, a certain number of parallel cryptographic modules is required to ensure that a frame of any size can be protected in all situations.
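The number of 16-byte cipher blocks per frame follows directly. This is only a sketch of the block arithmetic; exactly which bytes of the frame receive encryption versus integrity-only protection depends on the MACsec configuration:

```python
import math

BLOCK = 16  # AES block size in bytes

def aes_blocks(data_bytes):
    """16-byte blocks fed to the cipher; a final partial block is zero-padded."""
    return math.ceil(data_bytes / BLOCK)

print(aes_blocks(64))    # 4 blocks for a minimum-size frame's worth of data
print(aes_blocks(1500))  # 94 blocks for a full-MTU payload
```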

Having said that, the raw throughput of such an implementation can exceed a terabit per second (Tbps). The actual measure of throughput is simple: it must match the Ethernet port parameters:

  • Maximum packet rate of small packets
  • Full line rate across all frame sizes
  • Ability to accept packets back-to-back in all situations, regardless of software interventions to manage the secure connection and collect telemetry
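These criteria can be checked against simple line-rate arithmetic. The sketch below computes the frame rate an 800G port must sustain across a few frame sizes, assuming the usual 20 bytes of per-frame wire overhead (preamble/SFD plus minimum inter-frame gap):

```python
LINE_RATE = 800e9  # bits per second
OVERHEAD = 8 + 12  # preamble/SFD + minimum inter-frame gap, in bytes

for frame in (64, 512, 1518, 9000):
    fps = LINE_RATE / ((frame + OVERHEAD) * 8)
    print(f"{frame:5d}-byte frames: {fps / 1e6:8.1f} Mframes/s")
```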

The recently announced Rambus 800G MACsec core delivers full line-rate throughput in all situations. In certain configurations and process technologies, it can reach a blazing 1.8 Tbps of raw cryptographic throughput. Such a stretch of capabilities has limited practical application today, but it speaks to the scalability of the Rambus solution. More important is that the MACsec implementation is optimized in silicon and, as noted above, delivers line-rate capabilities. With the Rambus 800G MACsec solution, network chip designers can implement area- and power-efficient Layer 2 security while keeping all the performance and flexibility benefits of 800G Ethernet.

Additional Resources:
Blog: MACsec Explained: From A to Z
White Paper: MACsec Fundamentals
Website: Rambus 800G Multi-Channel MACsec Engine
Product Brief: Rambus 800G MACsec


