Wrestling With High-Speed SerDes

Higher performance helps smooth the gap between analog and digital, but it adds a number of new twists.


SerDes has emerged as the primary solution for chips that need to move data quickly with limited I/O, but the technology is becoming significantly more challenging to work with as speeds continue to rise to keep pace with the massive increase in data.

A Serializer/Deserializer is used to convert parallel data into serial data, allowing designers to speed up data communication without having to increase the number of pins. But as the volume of data increases, and as more devices are connected to the Internet and ultimately the cloud, there is a growing need to move more data much faster. This, in turn, has made SerDes design increasingly complicated.
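As a minimal sketch of the idea (hypothetical Python, not actual SerDes hardware logic), a serializer shifts a parallel word out one bit at a time, and the deserializer reassembles it:

```python
def serialize(word: int, width: int = 8):
    """Shift a parallel word out as a serial bit stream, LSB first."""
    for i in range(width):
        yield (word >> i) & 1

def deserialize(bits, width: int = 8) -> int:
    """Reassemble a serial bit stream back into a parallel word."""
    word = 0
    for i, bit in enumerate(bits):
        word |= bit << i
    return word

# One 8-bit word crosses a single "lane" as eight serial bits.
assert deserialize(serialize(0b10110010)) == 0b10110010
```

The real win is pin count: eight parallel pins collapse into a single lane, at the cost of running that lane eight times faster.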

“SerDes is the perfect storm of analog precision and digital circuitry,” said Mick Tegethoff, director of product management at Mentor, a Siemens Business. “We have worked with customers over different technology generations, and the challenges have become tougher.”

Much of the demand for high-speed SerDes comes from large data centers, where the current state-of-the-art throughput is 100 Gbps.

“There is a more recent push to upgrade from 100 to 400 Gbps, and people already are talking about moving to 800 Gbps support,” said Rita Horner, senior staff technical marketing manager for high-speed SerDes at Synopsys. “That translates to the throughput for the interconnect within the data centers, and also from data center to data center, to enable aggregated, larger mega data centers (aka hyperscale data centers). At the same time, in order to achieve the bandwidth, there are a lot of blocks within data centers being built to include accelerators. Also, machine learning is coming, either in the form of accelerators or dedicated processing units that would be able to process certain different pieces of the whole networking process, or to be able to do cache coherency.”

Machine learning and artificial intelligence applications also are driving a significant amount of storage, where a lot of the processing is at higher speeds, and there is more parallel processing being done. Given the amount of parallel processing, it’s not uncommon for data centers to run out of physical space or CapEx in these intensive types of environments, she observed.

As a result, standards from IEEE and the Optical Internetworking Forum are defining higher and higher data rates on a single lane, which allow data to be aggregated into much larger systems. Then, to move SerDes technology to the next level of performance, one of the major advancements is the adoption of PAM4 signaling above 28 Gbps.

“With serial data rates hitting 100-plus Gbps per channel, signal impairments caused by increased bandwidth have prompted the adoption of PAM4, or 4-level pulse amplitude modulation,” said Sunil Bhardwaj, senior director of business operations at Rambus. “Compared to NRZ (non-return to zero), PAM4 cuts the bandwidth for a given data rate in half by transmitting two bits in each symbol. This allows a doubling of the bit rate in the channel without doubling the required bandwidth. As an example, using PAM4 signaling, a 56-Gbps bit rate is transmitted at 28 Gbaud and has a Nyquist frequency of 14 GHz. With NRZ signaling, the 56-Gbps bit rate is transmitted at 56 Gbaud and has a Nyquist frequency of 28 GHz.”
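The numbers in that example can be checked with simple arithmetic; the helper name below is illustrative:

```python
def nyquist_ghz(bit_rate_gbps: float, bits_per_symbol: int) -> float:
    """Nyquist frequency is half the symbol (baud) rate."""
    baud_rate_gbaud = bit_rate_gbps / bits_per_symbol
    return baud_rate_gbaud / 2

# NRZ carries 1 bit per symbol; PAM4 carries 2 (log2 of its 4 levels).
assert nyquist_ghz(56, bits_per_symbol=1) == 28.0  # NRZ: 56 Gbaud
assert nyquist_ghz(56, bits_per_symbol=2) == 14.0  # PAM4: 28 Gbaud
```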

But there is a tradeoff. “Multiple symbol levels make PAM4 more sensitive to amplitude noise than NRZ,” he explained. “PAM4 introduces 9.6 dB of loss compared to NRZ, operating at the same Nyquist frequency. Nevertheless, at these high frequencies, the ability to operate at half the NRZ Nyquist frequency makes PAM4 the lower-loss alternative. As with NRZ, PAM4 signals are affected by jitter, channel loss and inter-symbol interference. In addition, measurements for the three eyes are further complicated by new receiver behavior, such as three slicer thresholds, individual slicer timing skew, equalization, and clock and data recovery. Unsurprisingly, PAM4 signal analysis borrows a great deal from the techniques developed to analyze jitter and noise for NRZ. In addition, a number of NRZ techniques are applicable to PAM4, including differential signaling, clock recovery and equalization for both the transmitter and receiver.”
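The amplitude penalty follows from geometry: each of PAM4's three eyes has one-third the vertical opening of the single NRZ eye, and 20·log10(3) works out to roughly 9.5 dB, consistent with the ~9.6 dB figure in the quote above:

```python
import math

# Each PAM4 eye opening is 1/3 of the NRZ eye, so the per-eye SNR
# penalty in dB is 20 * log10(3).
penalty_db = 20 * math.log10(3)
assert abs(penalty_db - 9.54) < 0.01  # ~9.5 dB, commonly cited as ~9.6 dB
```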

Another complication is that high-speed designs are increasingly susceptible to electromagnetic crosstalk issues, noted Annapoorna Krishnaswamy, product marketing manager for the semiconductor business unit at ANSYS.

Key factors behind the growing significance of electromagnetic cross-coupling issues include:

  • Frequency escalation, where on-chip signal frequencies are exceeding 2 GHz and going well above the 6 GHz range into the mmWave band for 5G applications.
  • Rapidly increasing data rates and the use of high-speed interfaces with multiple closely spaced lanes, which raises the risk of crosstalk.
  • Higher integration and layout density in SoCs, where high-performance digital cores sit alongside sensitive analog and RF building blocks, and multiple radios are integrated into a single modern SoC.
  • Small form factor of packaging with extensive use of re-distribution layers (RDL).
  • 2.5D/3D packaging technologies.

With increasing clock speeds, advanced packaging styles and the constant pressure to reduce area, traditional approaches to designing and verifying high-speed IC designs are no longer sufficient.

“All aspects of the design — the high-speed signal/clock lines, the detailed power and ground routing, the passive devices (caps and spirals), and even the package layers need to be modeled and verified in detail,” said Krishnaswamy. “Accurate modeling of on-chip parasitics, including self- and mutual-inductance (RLCk) is required to fully capture the electrical behavior from DC up to mmWave frequencies (for 5G applications). This is critical for analyzing the unwanted electromagnetic interference of one signal affecting multiple neighbors, both near and far, due to coupling via the power/ground, substrate or package layers.”

She noted this is why accurately capturing the electromagnetic (EM) phenomena, including current distributions, skin and proximity effects, are essential for mitigating the risks of EM crosstalk induced performance degradation and failure in high-speed and low-power system-on-chip designs. EM-aware design flow helps in reducing overdesign, area and cost while ensuring superior performance, quality and reliability of the design.
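Skin effect illustrates why frequency-dependent EM modeling matters: at a 14 GHz PAM4 Nyquist frequency, current in a copper conductor crowds into well under a micrometer of its surface. A quick estimate using the standard skin-depth formula (copper resistivity assumed):

```python
import math

def skin_depth_um(freq_hz: float,
                  resistivity: float = 1.68e-8,  # copper, ohm-meters
                  mu: float = 4 * math.pi * 1e-7) -> float:
    """Skin depth delta = sqrt(rho / (pi * f * mu)), in micrometers."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu)) * 1e6

# At 14 GHz, copper's skin depth is only about 0.55 um, so thin on-chip
# wires see sharply higher effective resistance than at DC.
depth = skin_depth_um(14e9)
```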

Fig. 1: Typical high-speed I/O architecture. Source: Mentor, a Siemens Business

Design challenges
With high-speed SerDes, the challenges are usually around power consumption, clock distribution (analog clock tree), the type of package being used, and the parasitics, noted Martin Hujer, staff engineer at Adesto. “Then, there’s routing on the PCB, support of test modes and test patterns, and fast digital logic. Also, there’s a need for a digital controller complying with the higher level of the serial protocol. All of these challenges must be considered when integrating into a custom chip. Depending on the application and customer requirement, there may be potential for alternative solutions, where you can trade off between one or several high-speed serial lanes and a slower, but still fast, parallel bus.”

At the same time, losses of every kind—jitter, ISI, ringing, cross talk, ground bounce, power supply noise—get more severe as the frequency increases, Rambus’ Bhardwaj said. “As a result, signal integrity (SI) is now an undeniably critical aspect of system architecture. Specialized SI engineers routinely interact with system architects, circuit designers and system engineers throughout the design cycle. In order to meet the needs of today’s high-performance systems, SI models of the entire link must be made, including the transmitter, receiver, clock and channel. In turn, comprehensive link analysis influences a range of design architecture, including equalization, clock, timing calibration, as well as coding and/or error correction.”

Further, package design must also be carefully implemented to address high frequencies and tight electrical performance requirements, he pointed out. “Special attention must be placed on both the high-speed I/O and analog supplies in the package, with the package substrate designed using an electromagnetic (EM) simulator to verify that the package design meets various requirements, including all crosstalk isolation, impedance and S-parameter, as well as supply inductance.”

At the same time, engineering groups want their IP to have known failure rates and lifetimes for reliable service in automotive and other safety critical applications. “With CMOS and finFET technologies moving steadily to smaller feature sizes, random component variations and systematic offsets in layout become more important,” said Andrew Cole, vice president of business development at Silicon Creations. “We must respond to this with careful examination of circuit reliability failure modes and Monte-Carlo simulations of netlists, made larger by including finer-grain layout parasitic effects. As a result, even with intelligent netlist reduction we’ve seen design verification CPU times increase by more than two orders of magnitude from 28nm to 5nm.”
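A toy Monte Carlo run (with hypothetical mismatch and spec numbers) shows the flavor of this analysis: estimating how often random device mismatch pushes a comparator offset out of spec.

```python
import random

random.seed(0)
SIGMA_VT_MV = 5.0   # assumed per-device threshold-voltage sigma, in mV
SPEC_MV = 15.0      # assumed maximum tolerable input offset, in mV
N = 100_000         # Monte Carlo samples

# The offset of a differential pair is the difference between two
# independently mismatched devices.
fails = sum(
    1 for _ in range(N)
    if abs(random.gauss(0, SIGMA_VT_MV) - random.gauss(0, SIGMA_VT_MV)) > SPEC_MV
)
fail_rate = fails / N  # a few percent with these assumed numbers
```

Real flows run exactly this kind of sampling on full extracted netlists, which is why verification CPU times balloon at advanced nodes.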

In other words, the design challenges of high-performance SerDes are similar to the design challenges of moving to finFET, particularly when it comes to yield, reliability, large netlists and long simulation times. “In addition to the finFET challenges we’ve discussed before, SerDes also have ESD requirements for the high-speed signals,” Cole said. “FinFET devices are less robust than planar transistors for ESD. They don’t dissipate heat as well because they are surrounded on three sides by oxide. This poses another difficult design challenge for implementing very high data rates in advanced nodes.”

Today’s finFET processes don’t offer a significant jump in transistor performance. The real improvements are in the available real estate on a chip. But continued feature shrinks do make it harder to design these devices, and new circuit architectures have to be created to cope with these challenges, noted Mahesh Tirupattur, executive vice president at Analog Bits.

“Another big challenge is designing for EM,” Tirupattur said. “The wires are more resistive. As a result, the current-carrying capability of wires is greatly diminished. Supplying power to high-performance designs continues to be a challenge in layout topology.”

SerDes in automotive apps
With so much focus moving to automotive applications, environmental stresses also play a role in SerDes functionality. Different operating conditions may impact devices differently.

“The technology nodes that people are going to be using in a data center, 7nm to 5nm, is not going to be the same technology node they may be using for an automotive application,” Mentor’s Tegethoff said. “In the automotive application, depending on if it’s a safety item, then the developer will do all kinds of additional things to it. But if one is just doing IP, they must make sure to account for all of it because they don’t necessarily control where it’s going. They want to be able to market that to different places.”

With so many additional automotive requirements for reliability and robustness of the design, different design groups will have different protocols they follow as required by either a safety standard or by their customers, he noted. “For example, [circuit] aging will be used on anything that typically goes into a functional safety type application for automotive. Companies like STMicroelectronics or ON Semiconductor would want to make sure that if things degrade, that they degrade gracefully and don’t cause a problem. That said, there isn’t a simple answer. Depending on the application, it’s going to be different.”

SerDes at leading-edge nodes
When it comes to moving SerDes down the technology nodes, much has changed.

“In the case of a SerDes in 7nm, one of the things that hits design engineers is process complexity. The number of layers has increased 5X since 180nm,” Tegethoff said. “Design rules also become a challenge. In terms of analog designs, does the analog circuitry scale as well as the digital circuitry? Can you count on the rules of thumb that you have for digital for analog? Not really. If you look at digital scaling from 180nm to 7nm, it’s about 660 to 1. The analog scaling on the same technology comparison is about 10 to 1, so it doesn’t scale nearly as much.”
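The digital figure in that comparison is simply the square of the linear shrink, which is easy to verify:

```python
# Area scales roughly with the square of the feature-size ratio.
digital_scaling = (180 / 7) ** 2   # 180nm -> 7nm
assert round(digital_scaling) == 661  # ~660:1, matching the quoted figure
```

Analog circuits, dominated by matching, noise and passive-component requirements rather than lithography, capture only a small fraction of that shrink, hence the roughly 10-to-1 figure.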

Also plaguing SerDes designers at advanced nodes is the interconnect, which impacts signal transmission in the circuits and simulation time. “You have to have the full extracted simulation with all of the capacitors, and resistances,” he said. “And then you must simulate device noise, run multiple corners, make sure it operates across all the corners, run some Monte Carlo, and run this and run that. Designers end up running a lot of simulation, and it needs to be accurate, and that takes time.”

Not all of this can be done by the tools. As with many engineering challenges, optimizing high-performance SerDes comes down to combining the expertise of the designer with automation tools.

“A lot rests on the designer, from selecting what to run in the tool and how much to run,” Tegethoff said. “That’s on the designer. We see people that won’t run as many simulations or won’t do certain analyses that we believe are required, and it comes back to bite them when it doesn’t work in silicon. The designer knowing how to design the circuit is one skill. But how do they know it’s going to operate correctly across all of the different process variations, temperature, interconnect, variability, and so on? In a sense, simulation and verification are like insurance. How much insurance do you buy? Designers are under tremendous pressure to get something out for their customer, but at the same time they have to make decisions on how much to simulate, how much to verify. The best approach is for tool providers to work very, very, very closely with application-specific designers, understand their requirements, and respond to their requirements with the tools they need to get the job done.”


