Reducing SerDes latency variation and jitter is necessary for long-reach networking applications.
5G is the fifth-generation wireless system standard that, through high speeds and increased accessibility, promises to change the way we stream, communicate, work, and travel. Boasting peak data rates of 20Gbps and network densities of 1 million connected devices per square kilometer, 5G is the required technology for the implementation of highly anticipated applications like autonomous vehicles, smart cities, and more. All major US wireless carriers (Verizon, T-Mobile, and AT&T) have announced their intention to begin 5G rollout by the end of this year. Although this may appear to be a “simple” incremental generational advancement, 5G actually requires significant investment and potential changes in infrastructure compared to previous generations. Like any other infrastructure that depends on the high-speed transfer, processing, and redistribution of data, 5G relies heavily on ultra-high-speed, low-latency serial data communication.
5G will utilize current and new architectures
Current 3G and 4G network fronthaul infrastructure consists of two main parts: the remote radio head (RRH) at the top of the tower and the baseband unit (BBU) at the bottom of the tower. The RRH is connected to the BBU using fiber optic cables, and the two communicate via the common public radio interface (CPRI) protocol. This standard was first introduced to the wireless space during 3G deployment to increase speed beyond the traditional coaxial cables previously used for this connection.
Figure 1: Typical wireless fronthaul infrastructure – Source: Electronic Specifier
Although the above infrastructure will continue to exist, 5G will also see an increased deployment of cloud or centralized radio access network (C-RAN) based architecture. C-RAN is a centralized, cloud-computing based architecture for radio access networks that will not only support 5G, but will also likely provide backwards compatibility with 2G, 3G, and 4G. The idea is to aggregate radios and base stations into centralized networks to provide coverage over a continuous area. Communication between the grouped RRHs and BBUs will still utilize fiber optic channels in the fronthaul to enable 100Gbps to 400Gbps Ethernet protocols. This C-RAN infrastructure will enable higher density, greater throughput, and increased bandwidth for 5G.
The RRH in both fronthaul network designs contains the RF transceiver, data converter circuitry, and digital signal processors. Implementation of the RRH involves the conversion of analog signals into digital and vice versa, and the serial data communication interface between data converter units is often governed by the JESD204B/C standard.
In 5G systems, this style of tower will continue to exist, but the internal workings require much higher speed capabilities than existing 4G implementations. To support these data rates, upgrades to both the CPRI and JESD communications interfaces are being proposed. The proposed CPRI revision increases the line rate from 12.1Gbps to 24.33Gbps, and the new revision of the JESD204 standard increases the speed from 12.5Gbps (JESD204B) to 32.5Gbps (JESD204C).
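To put the CPRI rate step in perspective, the short sketch below estimates the payload bandwidth behind the two line rates quoted above, assuming 64b/66b line coding for both (used by the higher CPRI line-rate options); the figures are illustrative rather than taken from the CPRI specification tables.

```python
# Rough payload-bandwidth estimate for the two CPRI line rates mentioned above.
# Assumes 64b/66b line coding for both (an assumption, not quoted from the spec).

line_rates_gbps = {
    "CPRI today": 12.1,
    "CPRI proposed": 24.33,
}

coding_efficiency = 64 / 66  # 64b/66b line-coding efficiency

for name, line_rate in line_rates_gbps.items():
    payload = line_rate * coding_efficiency
    print(f"{name}: {line_rate:.2f} Gbps on the wire -> ~{payload:.2f} Gbps of payload")
```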
Figure 2: C-RAN infrastructure example – Source: Electronic Specifier
CPRI interface considerations
The latest CPRI electrical interface requires support for a maximum data rate of 24.33Gbps. Supporting a higher data rate inevitably means handling more insertion loss, but higher insertion loss is not the only challenge. In a CPRI fronthaul, for example, data streams are nominally formed by I and Q (in-phase and quadrature) streams. Any change in the relative timing relationship between these two streams of data will distort the recovered (or transmitted) analog signals. The timing relationship can be impacted by anything from jitter in the SerDes clocking (transmit as well as recovered) to variation in the signal latency of the I and Q data streams. A high-speed SerDes design for CPRI applications should therefore have very low TX clock jitter, very low recovered clock jitter, low latency, and ultra-low latency variation. Special considerations and techniques are required to reduce latency variation in the SerDes in order to create an ideal interface for long-reach networking applications.
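To make the I/Q timing sensitivity concrete, the sketch below delays the Q stream slightly relative to I and measures the resulting error vector magnitude on a single baseband tone. All values (sample rate, tone frequency, skew) are invented for illustration, and a real CPRI link carries framed I/Q words rather than raw samples.

```python
import numpy as np

# Illustrative only: show how timing skew between the I and Q streams
# distorts the reconstructed complex baseband signal. Parameters are arbitrary.
fs = 122.88e6          # sample rate (Hz), assumed value
f_tone = 5e6           # baseband test tone (Hz), assumed value
skew = 100e-12         # 100ps of extra delay on the Q path, assumed value

t = np.arange(4096) / fs
ideal = np.exp(2j * np.pi * f_tone * t)           # reference I/Q tone

i_path = np.cos(2 * np.pi * f_tone * t)           # I stream, no skew
q_path = np.sin(2 * np.pi * f_tone * (t - skew))  # Q stream arrives 'skew' late
recovered = i_path + 1j * q_path

# Error vector magnitude of the skewed signal relative to the ideal tone
evm = np.sqrt(np.mean(np.abs(recovered - ideal) ** 2) / np.mean(np.abs(ideal) ** 2))
print(f"EVM from {skew * 1e12:.0f}ps of I/Q skew at {f_tone / 1e6:.0f}MHz: {evm * 100:.3f}%")
```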
JESD204 interface considerations
The JESD204 electrical interface has also been updated, from JESD204B to JESD204C, to support a maximum data rate of 32.5Gbps. In addition to maintaining low jitter and latency values, one of the key challenges of data converter interfaces is that the upstream and downstream data rates can be asymmetric. In other words, the transmit serial data rate can differ from the receive data rate. In many SerDes architecture implementations, supporting asymmetric RX and TX data rates results in significant overhead in power and area consumption. An advanced clocking mechanism, along with careful optimization of power and area, has to be deployed to reduce the overhead of asymmetric operation. These considerations, combined with a significant increase in channel loss when moving from lower to higher data rates, require a SerDes with advanced equalization and adaptation schemes to cover such a broad specification.
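As an illustration of how the asymmetry arises, the sketch below applies the commonly quoted JESD204 lane-rate relationship (converter count M, bits per sample N', sample rate, lane count L, plus line-coding overhead) to a hypothetical RRH whose transmit and receive converters run at different sample rates. The converter parameters are invented; the exact formula and limits come from the device data sheet and the JESD204B/C specification.

```python
# Lane-rate estimate for an asymmetric JESD204 link (TX to DACs vs RX from ADCs).
# Commonly quoted relationship: lane_rate = M * N' * fs * coding_overhead / L.
# All converter parameters below are hypothetical, chosen only to show asymmetry.

def jesd204_lane_rate_gbps(m, n_prime, fs_msps, lanes, encoding="204C"):
    """Per-lane rate in Gbps; 'encoding' selects the line-coding overhead."""
    overhead = 66 / 64 if encoding == "204C" else 10 / 8  # 204B uses 8b/10b
    return m * n_prime * fs_msps * 1e6 * overhead / lanes / 1e9

# Hypothetical TX path: 2 DACs, 16-bit samples, 983.04 MSPS, 2 lanes
tx = jesd204_lane_rate_gbps(m=2, n_prime=16, fs_msps=983.04, lanes=2)

# Hypothetical RX path: 2 ADCs, 16-bit samples, 491.52 MSPS, 2 lanes
rx = jesd204_lane_rate_gbps(m=2, n_prime=16, fs_msps=491.52, lanes=2)

print(f"TX lane rate ~{tx:.2f} Gbps, RX lane rate ~{rx:.2f} Gbps (asymmetric link)")
```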
Conclusion
As the push to 5G brings more complexity to fronthaul network infrastructure, with increased bandwidth and data processing, ASIC interfaces must keep up. In addition to the CPRI and JESD204 standards, the microchips in 5G infrastructure often communicate with a variety of other devices using Ethernet, PCIe, and other protocols, requiring serial data communication interfaces that support those standards. While there is a set of common requirements, each of the above interfaces poses specific challenges of its own, and high-speed SerDes design will require many design considerations and possible architecture changes to address the more complex channels and higher data rates.