Accelerating 5G Baseband With Adaptive SoCs: Part II

Implementation of the fronthaul and L1 Hi-PHY for 5G base stations.


In my previous blog, we discussed 5G split architectures with a focus on the widely adopted option 7-2 split. In this article, we will cover the implementation of the fronthaul and L1 Hi-PHY for 5G base stations. The 5G distributed unit (DU) can be implemented to process fronthaul data with O-RAN processing and a partial offload of Hi-PHY processing, which includes the LDPC encoder, LDPC decoder and the wrapper functions for the encoder and decoder logic.

Fronthaul processing: The example architecture below assumes two network interfaces connected to 5G radio units (RUs), as shown in figure 1. The 5G DU must be capable of full-capacity network data transfer between the 5G RUs and the base station. The network interface blocks include Ethernet MAC interfaces connected to industry-standard optical modules to transmit and receive Enhanced Common Public Radio Interface (eCPRI), Radio-over-Ethernet (RoE) or Time-Sensitive Networking (TSN) Ethernet data from the 5G RUs. The host interface is usually PCIe, with high-speed data transfer handled by direct memory access (DMA).

Fronthaul processing can be divided into the following major sub-blocks; we'll dive further into each block below.


Fig. 1: Fronthaul processing on 5G base station node.

1. Precision Time Protocol (PTP) Functionality: This block synchronizes the local clock (acting as the slave node clock) with the system grandmaster clock, using traffic time-stamping with sub-nanosecond granularity. The DU receives 1588v2 PTP packets as part of the traffic and identifies them as synchronization-plane packets. It then sends them to the S-plane application running on the x86 host after replacing the time-stamp field with one generated from the reference clock. Other functions of this block may include processing of delay requests, updating the master clock time-of-day value from software, and generation of a 1PPS (pulse per second) signal in master mode.
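
The clock correction itself follows the standard IEEE 1588 two-step exchange. The sketch below shows the usual offset and path-delay arithmetic derived from the four exchange timestamps; the type and function names are illustrative only, not the actual implementation of this block.

#include <stdint.h>

/* Illustrative IEEE 1588 timestamps, in nanoseconds:
 * t1: Sync sent by master, t2: Sync received by slave,
 * t3: Delay_Req sent by slave, t4: Delay_Req received by master. */
typedef struct {
    int64_t t1, t2, t3, t4;
} ptp_exchange_t;

/* Mean path delay, assuming a symmetric link. */
static int64_t ptp_mean_path_delay(const ptp_exchange_t *e)
{
    return ((e->t2 - e->t1) + (e->t4 - e->t3)) / 2;
}

/* Offset of the slave clock from the master; the slave steers its
 * local clock by this amount to converge on the grandmaster. */
static int64_t ptp_offset_from_master(const ptp_exchange_t *e)
{
    return ((e->t2 - e->t1) - (e->t4 - e->t3)) / 2;
}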

2. Traffic Classifier/Aggregator: This block routes the control, user, synchronization and management (C, U, S and M-plane) messages. The traffic classifier can implement traffic rules that are used to drop or process the fronthaul traffic arriving on the network ports. The block can receive eCPRI packets (C and U-plane) and Ethernet packets (S and M-plane) in both the uplink and downlink directions.

For uplink processing, eCPRI packets are identified by the eCPRI message type field in the packet header. Rule processing includes checking the source MAC address, destination MAC address and Virtual Local Area Network (VLAN) ID against the configured rules, and dropping the packet if no rule matches. For S and M-plane Ethernet packets in the uplink direction, a simple arbiter can schedule and transmit them to the host interface queues.

For downlink, this block configures the priority of the different eCPRI messages based on the message type field in the eCPRI header. It can also add a VLAN tag based on the C and U-plane configuration; the priority field in the VLAN tag can be used to assign the priority for C/U-plane messages. S and M-plane traffic can also be VLAN tagged and assigned a priority. The block can additionally implement a priority scheduler to send packets to one of the connected fronthaul ports based on the assigned priority.
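
As a rough illustration of the rule matching and priority assignment described above, the sketch below checks a parsed frame's MAC addresses and VLAN ID against one configured rule and maps the VLAN priority bits to a queue. All structure and function names are hypothetical; a real classifier would live in programmable logic rather than host C code.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical parsed view of an incoming fronthaul frame. */
typedef struct {
    uint8_t  dst_mac[6];
    uint8_t  src_mac[6];
    uint16_t vlan_id;        /* 12-bit VLAN identifier */
    uint8_t  vlan_pcp;       /* 3-bit priority code point */
    uint8_t  ecpri_msg_type; /* eCPRI message type (0 = IQ data, 2 = real-time control, ...) */
} fh_frame_t;

/* One configured classification rule. */
typedef struct {
    uint8_t  dst_mac[6];
    uint8_t  src_mac[6];
    uint16_t vlan_id;
} fh_rule_t;

/* Drop the frame unless MAC addresses and VLAN ID match the rule. */
static bool fh_rule_match(const fh_frame_t *f, const fh_rule_t *r)
{
    return memcmp(f->dst_mac, r->dst_mac, 6) == 0 &&
           memcmp(f->src_mac, r->src_mac, 6) == 0 &&
           f->vlan_id == r->vlan_id;
}

/* Map the VLAN priority bits to one of the available queues/ports. */
static unsigned fh_select_queue(const fh_frame_t *f, unsigned num_queues)
{
    return f->vlan_pcp % num_queues;
}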

3. eCPRI Framer and De-Framer: The eCPRI framer/de-framer is responsible for eCPRI protocol processing of uplink and downlink C/U-plane messages. The eCPRI processing needs to include separate uplink and downlink data paths. Since the eCPRI processing has to support multiple antenna-carrier (AxC) configurations in a base station, the adaptability of this block allows it to scale up and down based on deployment scenarios. The packet format for eCPRI-over-Ethernet messages is shown in figure 2. Zero padding is added to short messages so that the frame meets the 64B minimum Ethernet frame size.


Fig. 2: eCPRI over Ethernet message in an Ethernet packet.
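
To make the header fields of figure 2 concrete, the sketch below lays out the 4-byte eCPRI common header and a simple parse routine. The field widths follow the eCPRI specification; the struct and function names themselves are illustrative, not part of the reference design.

#include <stdint.h>

/* eCPRI common header (4 bytes), carried after the Ethernet/VLAN headers.
 * For message type 0 (IQ data), the first four payload bytes that follow
 * carry PC_ID (antenna-carrier/eAxC identifier) and SEQ_ID (sequence number). */
typedef struct {
    uint8_t  revision;     /* protocol revision (upper 4 bits of byte 0) */
    uint8_t  concat;       /* C bit: another eCPRI message follows in this frame */
    uint8_t  msg_type;     /* 0 = IQ data, 2 = real-time control, 5 = delay measurement, ... */
    uint16_t payload_size; /* payload bytes following this header */
} ecpri_hdr_t;

/* Parse the common header from a byte pointer (fields in network byte order). */
static ecpri_hdr_t ecpri_parse_hdr(const uint8_t *p)
{
    ecpri_hdr_t h;
    h.revision     = p[0] >> 4;
    h.concat       = p[0] & 0x1;
    h.msg_type     = p[1];
    h.payload_size = (uint16_t)((p[2] << 8) | p[3]);
    return h;
}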

The eCPRI framer processes both uplink and downlink C-plane messages as well as downlink U-plane messages, since C-plane messages for both directions are generated at the 5G DU. Multiple streams/layers of eCPRI messages can share a single eCPRI framer data path by using a hierarchical scheduler and multiplexing scheme. The eCPRI framer generates the different fields of the eCPRI message and adds the padding to create eCPRI-over-Ethernet packets for transmission over the fronthaul interface.

The eCPRI de-framer block has the following functionality (a simplified parsing sketch follows the list):

  • Processing and removal of Ethernet header
  • Parsing and removal of eCPRI header
  • Extraction of the stream identification and sequence numbers from the eCPRI header fields
  • Removal of zero padding in eCPRI data (for short messages)
  • Checking of length and other protocol errors
  • Statistics for each eCPRI stream
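
The sketch below walks through the de-framing steps listed above for a single, non-concatenated IQ-data message whose Ethernet/VLAN headers have already been stripped: parse the eCPRI header, pull out the stream identifier and sequence number, check the length against the payload-size field (which also excludes any zero padding), and return the payload. It is a minimal illustration with hypothetical names, not the RTL data path.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical result of de-framing one eCPRI IQ-data message. */
typedef struct {
    uint8_t        msg_type;
    uint16_t       pc_id;       /* stream (eAxC) identifier */
    uint16_t       seq_id;      /* sequence number */
    const uint8_t *payload;     /* IQ payload after PC_ID/SEQ_ID */
    uint16_t       payload_len;
} ecpri_deframed_t;

/* 'p' points at the eCPRI common header; 'len' is the remaining frame
 * length, including any zero padding added to reach the 64B minimum frame. */
static bool ecpri_deframe(const uint8_t *p, size_t len, ecpri_deframed_t *out)
{
    if (len < 8)                        /* common header + PC_ID + SEQ_ID */
        return false;                   /* protocol error: truncated message */

    uint8_t  msg_type  = p[1];
    uint16_t body_size = (uint16_t)((p[2] << 8) | p[3]);

    /* Padding may make 'len' larger than the message, never smaller. */
    if (body_size < 4 || (size_t)body_size + 4 > len)
        return false;                   /* length error */

    out->msg_type    = msg_type;
    out->pc_id       = (uint16_t)((p[4] << 8) | p[5]);
    out->seq_id      = (uint16_t)((p[6] << 8) | p[7]);
    out->payload     = p + 8;
    out->payload_len = (uint16_t)(body_size - 4); /* excludes PC_ID/SEQ_ID and padding */
    return true;
}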

4. O-RAN Processor: The O-RAN block works in conjunction with the eCPRI block and usually interfaces with the host to provide the following functions:

  • Receive uplink U-plane messages from the eCPRI de-framer, extract the IQ data and deliver it to the host
  • Extract the IQ packing information from C-plane messages and apply it to the corresponding uplink U-plane messages
  • Manage delay and forward C-plane messages to the eCPRI block
  • Frame the downlink U-plane IQ data from the host into O-RAN messages and deliver them to the eCPRI framer

The O-RAN module interfaces are shown in figure 3.


Fig. 3: O-RAN block interfaces for uplink and downlink data.

Both the O-RAN uplink and downlink modules are designed to interface with four independent AxC interfaces. In the uplink direction, the O-RAN block classifies U-plane messages into Physical Random Access Channel (PRACH) or Physical Uplink Shared Channel (PUSCH) traffic based on a parameter in the O-RAN header. These messages are then de-framed to extract the corresponding IQ samples (the in-phase/quadrature data format used for radio signals). In the downlink block, the C-plane messages are parsed to extract the information needed for U-plane framing.
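
A rough sketch of that uplink classification step is shown below. It assumes the O-RAN fronthaul U-plane application header layout, in which the filterIndex field (alongside the data direction bit) separates PRACH messages from regular channel traffic; the struct and function names are illustrative, and the parse is simplified to the first bytes of the header.

#include <stdint.h>
#include <stdbool.h>

/* Simplified view of the first bytes of an O-RAN U-plane application
 * header (following the eCPRI transport header). Names are illustrative. */
typedef struct {
    uint8_t data_direction; /* 0 = uplink (RU to DU), 1 = downlink */
    uint8_t filter_index;   /* 0 = standard channel, non-zero = PRACH / special filters */
    uint8_t frame_id;
    uint8_t subframe_id;
    uint8_t slot_id;
    uint8_t symbol_id;
} oran_uplane_hdr_t;

static oran_uplane_hdr_t oran_parse_uplane_hdr(const uint8_t *p)
{
    oran_uplane_hdr_t h;
    h.data_direction = p[0] >> 7;
    h.filter_index   = p[0] & 0x0F;
    h.frame_id       = p[1];
    h.subframe_id    = p[2] >> 4;
    h.slot_id        = (uint8_t)(((p[2] & 0x0F) << 2) | (p[3] >> 6));
    h.symbol_id      = p[3] & 0x3F;
    return h;
}

/* Route an uplink U-plane message to the PRACH or PUSCH de-framing path. */
static bool oran_is_prach(const oran_uplane_hdr_t *h)
{
    return h->filter_index != 0;
}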

5. IQ Data Host Interface: The host interface block sends and receives IQ data samples to and from the CPU, handling the delay management for U-plane and C-plane messages. For buffering of IQ samples, external memory can be used to ensure lossless packet transmission to the fronthaul interface. The host interface block reads the data stored in memory in step with timing ticks generated on the adaptive system-on-chip (SoC) to ensure slot synchronization between the adaptive SoC and the host CPU.
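
As a loose illustration of that slot-synchronized handoff, the sketch below models the external-memory buffer as a ring of per-slot regions that the host fills ahead of time and the data path drains on each timing tick. Everything here is hypothetical scaffolding under those assumptions, not the actual RTL or driver interface.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical ring of per-slot IQ buffers staged in external memory. */
#define NUM_SLOT_BUFFERS 8    /* depth chosen to absorb host-side jitter */

typedef struct {
    uint8_t *iq_data;         /* packed IQ samples for one slot */
    size_t   length;
    uint32_t slot_number;     /* absolute slot index this buffer belongs to */
    int      ready;           /* set on host DMA completion, cleared when consumed */
} slot_buffer_t;

typedef struct {
    slot_buffer_t slots[NUM_SLOT_BUFFERS];
} iq_ring_t;

/* Called on every slot timing tick generated by the adaptive SoC.
 * Returns the buffer to stream toward the eCPRI framer for this slot,
 * or NULL if the host has fallen behind (an underrun to be counted). */
static slot_buffer_t *on_slot_tick(iq_ring_t *ring, uint32_t current_slot)
{
    slot_buffer_t *b = &ring->slots[current_slot % NUM_SLOT_BUFFERS];
    if (!b->ready || b->slot_number != current_slot)
        return NULL;          /* underrun: host data not staged in time */
    b->ready = 0;             /* hand the buffer back to the host after use */
    return b;
}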

As described above, the fronthaul processing and L1 Hi-PHY acceleration need to be adaptable to handle the various massive multiple-input, multiple-output (mMIMO) antenna configurations for fronthaul connectivity and throughput. The data path should provide a line-rate interface with eCPRI and O-RAN processing while meeting the latency and synchronization requirements of the 5G specifications.

Xilinx has implemented this fronthaul reference design on its T1 Telco Accelerator Card to handle a total throughput of 50Gbps, which is roughly equivalent to eight layers of 4T4R 100MHz carriers in an active-standby configuration. The card uses adaptive MPSoC and RFSoC devices to keep the functionality flexible. In most DU implementations, the x86 software implements the complete wireless L1 stack; offloading the O-RAN processing to adaptive devices can provide significant throughput and latency advantages.

I look forward to sharing more in my next article, which will focus on the partial offload of L1 Hi-PHY functionality and the advantages of using programmable devices for flexibility, throughput and latency.



