Why an old technology is still very much in demand.
An ever-growing engagement with the Internet, where most of humanity and the ‘things’ we use are almost constantly connected and constantly storing, processing and retrieving data over a network, is increasing the pressure to develop new standards much more quickly.
Witness the timeline of Ethernet, which had its humble beginnings as a protocol for moving data at 2.94 megabits per second inside a local-area network.
“It took Ethernet about 25 years to come up with six standard speeds. We did that from around 1985 to 2010,” says John D’Ambrosia, chairman of the Ethernet Alliance and senior principal engineer at FutureWei Technology, the U.S.-based subsidiary of China’s Huawei. “Now, from 2015 to 2018, we will introduce six more.”
The 400 Gigabit Ethernet standard is set to be ratified this year. D’Ambrosia has been a participant in, and then a leader of, IEEE 802.3xx study groups and committees for wired Ethernet for the past couple of decades. In the last five years, he has witnessed a growing debate over standards activity as cloud giants like Facebook and Google came to the hardware communities with enormous challenges and huge budgets.
“When you are trying to deploy 100,000 servers at once, things add up in a big way in a hurry,” D’Ambrosia says. “There was about a two-year period there where every single meeting would have something totally new. We’d say, ‘What’s going to come at us next?’ It was bedlam.”
Big data has diverged along two distinct paths since then.
“One part is moving from 25 to 50, 100 and 400 [gigabits per second],” says Venu Balasubramonian, marketing director at Marvell. “In mid-2016 they were at 10 Gbps. They’re now moving to 40 Gbps, with 100 Gbps on the aggregation layer. They will be running 100 Gbps to the spine, or 50 Gbps in a single lane. That’s one piece of the market. The other piece is the enterprise, which right now is served almost entirely by copper. They’re running 10 Gbps.”
Large cloud operations, meanwhile, have added photonics to move data back and forth between server racks and external storage, but the difference in cost is significant. The enterprise includes companies such as midsize banks, where the speed of moving data is not as critical. In addition, the amount of data that needs to be moved is smaller.
That’s still only a portion of the market for Ethernet, which ironically was considered a dying technology when wireless networking first began rolling out across corporate enterprises. The initial argument was that it was easier to add wireless routers than to pull wire through the ceilings or floors of buildings. But wireless has its own set of issues, including security and interference.
So rather than fading away, the market for Ethernet is growing again, even in new areas such as 5G access points, driver-assisted and autonomous vehicles, and connected industrial operations.
“With automotive there are more electronics, and bandwidth needs are growing significantly,” says Balasubramonian. “There are new standards being developed and a whole range of bandwidth options. You may see 100 Mbps. There is even talk about 2.5G in cars for video transmission from cameras.”
In fact, Ethernet is almost tailor-made for automotive. It has been well-tested over years of mission-critical use and abuse, and its cabling is much lighter than a traditional automotive wiring harness.
“The harness itself is a pain point from the standpoint of weight and cost,” says Jeff Hutton, senior director of the automotive business unit at Synopsys. “With Ethernet, you can get away from the CAN (controller area network) bus and LIN (local interconnect network) bus, which makes moving data more efficient. It also adds reliability and security, because these older networks are where a lot of hacks are coming in.”
Fig. 1: Ethernet in cars. Source: Avnet/Marvell
The SerDes connection
For many applications, Ethernet is viewed as an extension of serializer/deserializer (SerDes) technology, which sits at the heart of the chips in servers, switches and routers that move large quantities of data.
SerDes devices consist of two functional blocks that convert data from parallel to serial and back again. SerDes is a way around the pin limitations of a chip, because there are only so many pins available, and it is an essential component in Gigabit Ethernet systems. As a result, SerDes has been a key enabler in the debate over Ethernet standards.
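As a rough illustration of that parallel-to-serial trade-off, the Python sketch below pushes a wide data word out over a single “lane” one bit at a time and reassembles it on the far side. The function names and word width are illustrative assumptions; a real SerDes also handles clock recovery, line coding and equalization, none of which is modeled here.

```python
# Toy model of the serialize/deserialize idea: trade many parallel pins
# for one fast serial lane. Real SerDes hardware also performs clock
# recovery, line coding and equalization, which this sketch omits.

def serialize(word: int, width: int = 32):
    """Shift a parallel word out one bit at a time, LSB first."""
    return [(word >> i) & 1 for i in range(width)]

def deserialize(bits):
    """Reassemble the serial bit stream back into a parallel word."""
    word = 0
    for i, bit in enumerate(bits):
        word |= bit << i
    return word

if __name__ == "__main__":
    tx_word = 0xDEADBEEF
    line = serialize(tx_word)      # 32 pins' worth of data on one lane
    rx_word = deserialize(line)
    assert rx_word == tx_word
    print(hex(rx_word))            # 0xdeadbeef
```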
“In November 2006, the standards group decided to do 100 Gig,” says D’Ambrosia. “Next meeting, the server community got up in arms saying, ‘We don’t want 100 gig!’ They wanted to do 40 gig. That aligned with what they thought would be next. This is around early 2007. Cisco decides at the very first meeting they won’t come along. They wanted to get to 100 Gig by doing four lanes of 25 Gig, so we would need a 25 Gig standard.”
Not everyone wanted to go that way, however.
“There ended up being two camps,” says Greg McSorley, technical business development manager for Amphenol, who has been involved in Ethernet and storage standards efforts for the last 20 years. “The telco and long-range guys wanted 100 Gig, but the data center and server guys, who had to worry about the NICs on the back of every server and all the top-of-rack switches, wanted 40. You should have seen some of the e-mails.”
Fast forward a few years, and two tracks emerged, so that electrical interfaces could be built on 10 Gbps and multiple lanes of it, while optical interfaces would work with 25 Gbps and its multiples. “Now what we say is, ‘follow the SerDes,’” D’Ambrosia says. “To put it into rule-of-thumb perspective, for 10 gig and 25 gig you are using NRZ [non-return-to-zero] signaling. For 50 gig and above, it’s PAM4 [four-level pulse-amplitude modulation].”
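To put that rule of thumb into something concrete, here is a minimal Python sketch of the difference between the two signaling schemes. NRZ carries one bit per symbol across two signal levels, while PAM4 carries two bits per symbol across four levels, so a lane can roughly double its data rate without doubling its symbol rate. The level values, Gray-coded mapping and lane arithmetic in the comments are illustrative assumptions, not figures taken from any specific IEEE 802.3 specification.

```python
# Illustrative comparison of NRZ and PAM4 signaling.
# NRZ:  1 bit per symbol, 2 amplitude levels.
# PAM4: 2 bits per symbol, 4 amplitude levels (Gray-coded here),
# so the same symbol rate carries twice the data.

NRZ_LEVELS = {0: -1, 1: +1}
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def nrz_encode(bits):
    return [NRZ_LEVELS[b] for b in bits]          # one symbol per bit

def pam4_encode(bits):
    pairs = zip(bits[0::2], bits[1::2])           # two bits per symbol
    return [PAM4_LEVELS[p] for p in pairs]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(len(nrz_encode(bits)))     # 8 symbols for 8 bits
print(len(pam4_encode(bits)))    # 4 symbols for the same 8 bits

# Rough lane math under these assumptions: a ~25 Gbaud NRZ lane moves
# ~25 Gbps, the same ~25 Gbaud lane with PAM4 moves ~50 Gbps, and
# eight such PAM4 lanes aggregate to ~400 Gbps.
```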
SerDes and Ethernet also rely on copper, which has been the material of choice in semiconductors. It is relatively inexpensive, well-tested, and a better and more stable conductor than other materials such as aluminum, and the connections on a chip can be manufactured using standard semiconductor processes.
“If copper can do it, you use a copper link,” says Marvell’s Balasubramonian. “This is why we’re seeing such high growth for high-speed Ethernet. The demand for compute is insatiable now. But the enterprise has been using 1 Gbps for the past 10 years. The economics of 10 Gbps are very attractive. There also are a lot of access features feeding into access points, so the number of devices is increasing. Ten years ago it was a laptop or a desktop. Now you have a laptop, a phone and a tablet.”
The importance of standards
Extending Ethernet puts a heavy emphasis on standards, because without them there is no interoperability or backward compatibility. But the time it takes to reach consensus can drive some companies and teams to collaborate outside the standards group first, or simply to strike multi-source agreements (MSAs) among a set of vendors that lay out which engineering specifications should be agreed upon.
Rick Kutcipal, who works in product planning and architecture for Broadcom in Fort Collins, CO, has seen that approach work for the Serial Attached SCSI (SAS) and Serial AT Attachment (SATA) storage interface standards.
“People will work together offline, so that when we go to the standards bodies it’s not hitting everyone flat-footed,” says Kutcipal.
Working together offline and reaching consensus sooner compresses the standards-making timeline. Market consolidation helps speed up that process as well. “Twenty years ago, you had 20 disk drive guys,” says Kutcipal. “Now you have three.”
What comes after the standards review, publication, debate and ultimate ratification is equally important: ensuring interoperability. At the center of interoperability sits the University of New Hampshire InterOperability Laboratory (UNH-IOL). The UNH-IOL holds “plugfests,” which let all manner of alliances and standards groups come together and see whether the equipment they have built to a given standard will work together.
A plugfest can take as long as a week, because vendors have to bring their equipment, light up their oscilloscopes, and have enough time and room to get the gear working, read the test data that comes out of it, and debug what they find.
Conclusion
Despite the commercial rollout of a number of communications technologies, including wireless and fiber optics, Ethernet’s footprint is growing. It may not be the fastest or most convenient technology on the market, but it solves enough problems well enough that it is finding a home in new markets as well as existing ones. Power over Ethernet delivers power through the Ethernet cable rather than requiring a separate power cable. And 400 Gbps will provide enough headroom in communication bandwidth even for some of the larger data centers.
But the real key is just how many markets this technology can continue to serve, which is why there is such a flurry of activity around standards. The long-predicted phase-out of Ethernet never happened. So now the question is what else it can be used for, and how many new markets it will find a home in. The answers may surprise everyone.
—Ed Sperling contributed to this report.