Why Wait To Double Your Network Bandwidth?

Why 25 Gigabit Ethernet is the next logical complement to 10GbE interconnect speed in next-generation data center servers.


10 Gigabit Ethernet (10GbE) has been the economical, high-performance workhorse interconnect for data center server networks for years. 40GbE and 100GbE are, of course, derivatives of 10GbE, built from 4 x 10GbE or 10 x 10GbE channels, so 10GbE has been the building block for server racks everywhere. But with the massive explosion of data from the cloud, the Internet of Things (IoT) and online video streaming, plus the increased throughput that servers and storage solutions must support, companies are continually evaluating higher Ethernet speeds, not only to keep up with existing bandwidth demands but also to future-proof their networks. The question companies face is whether to adopt 25GbE, 40GbE or even 100GbE as they replace or add to their current 1GbE or 10GbE installations. So let’s explore which option is the optimal Ethernet speed for companies looking to balance the cost-performance tradeoffs that typically accompany a transition to higher speeds.

Let’s first examine how the transition to higher speeds usually occurred prior to 25GbE. With 10GbE as the standard server and Top-of-Rack (ToR) switch speed, data centers wanting higher link speeds typically aggregated multiple single-lane 10GbE physical layers into 40GbE or 100GbE links. For example, a company could bundle four 10GbE physical lanes to achieve 40GbE, or ten lanes to reach 100GbE. These were the natural approaches because, after 10GbE, the next speeds standardized by IEEE were 40GbE and 100GbE. High-speed signaling on a single pair of conductors then evolved from 10Gbps to 25Gbps, which allowed a 100Gbps link to be implemented by bundling just four 25Gbps lanes. The industry subsequently looked at unbundling that 100GbE technology into four independent 25GbE channels, paving the road for IEEE to approve a 25GbE standard in June 2016.
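The lane arithmetic behind these speeds is simple enough to sketch. The short Python snippet below is purely illustrative (the function name is ours, not from any standard or product); it just shows how each standardized speed falls out of lane count times per-lane signaling rate.

```python
# Illustrative only: Ethernet link speed as (lane count x per-lane signaling rate).
# Lane counts reflect the aggregation schemes described in the paragraph above.

def link_speed_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Aggregate link speed from parallel physical lanes."""
    return lanes * lane_rate_gbps

# First-generation 40GbE and 100GbE bundled 10Gbps lanes:
print(link_speed_gbps(4, 10))    # 40GbE  = 4 x 10Gbps lanes
print(link_speed_gbps(10, 10))   # 100GbE = 10 x 10Gbps lanes

# Once single-lane signaling reached 25Gbps, the same bundling gives:
print(link_speed_gbps(4, 25))    # 100GbE = 4 x 25Gbps lanes
print(link_speed_gbps(1, 25))    # 25GbE  = a single unbundled 25Gbps lane
```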

The introduction of 25GbE provided a solution with the benefits of enhanced compute and storage efficiency, delivering 2.5 times more data than 10GbE at a similar long-term cost structure. Who doesn’t want 2.5 times more bandwidth? While 40GbE and 100GbE provide further increases in bandwidth, each comes with a tradeoff: both are more costly and consume more power than 25GbE. Relative to a 10GbE solution, meanwhile, 25GbE provides a faster connection with more bandwidth while balancing the capital and operational expenditures associated with moving to next-generation networks. For example, data centers can obtain a performance boost while still leveraging currently deployed optical fiber. So to keep up with consumer demand and next-generation applications, data centers are turning to higher-bandwidth alternatives as needed, to coexist with their more pervasive and economical 1GbE and 10GbE platforms. (See “Ethernet Darwinism: The survival of the fittest – or the fastest” for more on this trend.)

If the servers in a data center cannot communicate with each other or with end users at high capacity, that data center is likely not maximizing utilization or giving customers the best possible user experience. Migrating to 25GbE not only gives companies a large jump in capacity, but also makes it easy to upgrade quickly and cost-effectively to even greater speeds as needed. With 25GbE, companies can run two 25GbE channels to achieve 50GbE, or four channels to attain 100GbE, making the migration to 25GbE future-proof. Moreover, 25GbE is backward compatible with 10GbE, so network servers can still operate with legacy installations as new equipment is added.
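A minimal sketch of that backward-compatibility idea is shown below. It is not a real auto-negotiation implementation; the port capability sets are assumptions chosen only to illustrate two link partners settling on the highest speed both support.

```python
# Toy model: two link partners settle on the highest speed they both support,
# which is conceptually what Ethernet auto-negotiation achieves.
# Capability sets are illustrative, not drawn from any specific product.

LEGACY_10G_PORT = {10}            # existing 10GbE server NIC
NEW_25G_PORT = {10, 25}           # 25GbE NIC that also supports 10GbE
BONDED_25G_UPLINK = {25, 50, 100} # two or four bonded 25Gbps lanes

def negotiated_speed_gbps(a: set, b: set) -> int:
    """Return the highest speed (in Gbps) common to both link partners."""
    common = a & b
    if not common:
        raise ValueError("no common speed; the link will not come up")
    return max(common)

print(negotiated_speed_gbps(NEW_25G_PORT, LEGACY_10G_PORT))    # 10 -> coexists with legacy gear
print(negotiated_speed_gbps(NEW_25G_PORT, BONDED_25G_UPLINK))  # 25 -> full 25GbE when both sides allow
```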

Additionally, 25GbE takes advantage of how traffic moves at the switch level. Data no longer just flows from the servers to the core; it also flows between servers, creating East-West-centric traffic. Leveraging Clos networks (a fully connected mesh of leaf, or access, switches tied to the servers and the spine switches above them), 25GbE is well suited to this East-West traffic. By taking individual 25GbE lanes and spreading them across spine switches, networks can achieve far more connectivity than they could with the same ports run as 100GbE uplinks.
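The rough fan-out comparison below illustrates that last point. The uplink port count is a hypothetical assumption, used only to show how breaking 100GbE ports into 4 x 25GbE lanes multiplies the number of spine switches a leaf can reach with the same total uplink bandwidth.

```python
# Rough fan-out comparison for a leaf (ToR) switch in a leaf-spine (Clos) fabric.
# The port count is an assumed example, not a specific switch model.

UPLINK_PORTS = 8            # leaf ports reserved for spine uplinks (assumption)
LANES_PER_100G_PORT = 4     # each 100GbE port can break out into 4 x 25GbE lanes

# Option A: run each uplink port as a single 100GbE link to one spine switch.
spines_at_100g = UPLINK_PORTS
# Option B: break each port into four 25GbE lanes, one lane per spine switch.
spines_at_25g = UPLINK_PORTS * LANES_PER_100G_PORT

print(spines_at_100g)   # 8 spine switches reachable
print(spines_at_25g)    # 32 spine switches reachable with the same ports
```

Either way the total uplink capacity is the same (8 x 100Gbps equals 32 x 25Gbps), but the 25GbE breakout spreads traffic across many more spine paths, which is exactly what East-West-heavy workloads benefit from.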

With all these advantages driving the next-generation data center transition to 25GbE, operators are looking to semiconductor companies for solutions optimized for 25GbE. One example is Marvell’s recently announced switches and Ethernet transceivers: the Prestera 98CX84xx family of 25GbE switches and the Alaska C 88X5123 and 88X5113 Ethernet transceivers can help data centers transition cost-effectively to 25GbE speeds.

New solutions like these make 25GbE an ideal choice for organizations seeking faster, smarter next-generation connections to help keep up, at least for now, with the insatiable need for more bandwidth using existing fiber infrastructure. So there’s no need to wait to double your bandwidth. You can add 25GbE today to keep up with the latest bandwidth-hungry applications, without abandoning your economical older-generation Ethernet base.



1 Comment

ForOne Light says:

I am a firm believer that the union of the switch and the server will happen and that the ToR implementation will no longer be viable due to bus contention. This will allow point-to-point traffic in every dimension, with reconfigurability without changing cabling. This, of course, will require a new cabling paradigm. In such a configuration there may be a few ports on each server with low-cost fiber cables carrying 12-64 25G fibers. Not all fibers will be lit, but on demand each server/switch will be able to light fibers in any of the dimensions. This will allow smart meshed networking that is self-aware and self-tuning. Smart AI networks will become the future as servers shrink.

In the next 3-5 years, ReRAM will be placed on server processor ICs, which will run at SoC core speeds with independent multi-bus architectures on the SoC. Each core will form an individual compute node. No longer will the von Neumann bus be the bane of the compute industry. This will allow the 1U server to shrink down to half the size of a credit card, an effective cabinet shrink of 100:1. This, of course, is why I have predicted the new networking paradigm above.
