Revising 5G RF Calibration Procedures For RF IC Production Testing

The non-ideal nature of items in the paths between the instruments and the device under test can degrade measurement accuracy.


Modern radio frequency (RF) components introduce many challenges for outsourced semiconductor assembly and test (OSAT) suppliers, whose objective is to ensure products are assembled and tested to meet the product test specifications. The continuing advancement of and demand for RF products for cellphones, navigation instruments, global positioning systems, Wi-Fi and receiver/transmitter (Rx/Tx) components keep driving the demand for more advanced 5G cellphone and Wi-Fi components.

In any RF test system, the ability to achieve instrument-port accuracy at the device under test (DUT) enhances measurement accuracy and repeatability. Unfortunately, the non-ideal nature of the cables, components, traces, switches and other items in the paths between the instruments and the DUT can degrade measurement accuracy.

While current calibration methodologies may have worked in the past, the move to mmWave frequencies in RF technologies demands that calibration procedures be revised for the extended specifications. It is important to consider the key aspects of signal path calibration, namely system calibration, cable calibration, load board trace de-embedding and golden unit calibration, as well as how to use these aspects as a unique advantage in developing calibration standards.

mmWave RF test

Fig. 1: Test system configuration for mmWave RF testing. (Source: NI mmWave Transceiver Block Diagram)

Proper selection of test equipment, connectors, adapters and system-level calibration enables accurate measurements to evaluate the true performance of 5G components or devices (see figure 1). At mmWave frequencies, signals are more susceptible to impairments, requiring extra consideration in the selection of test solutions, cables and connectors. System-level calibration is also essential to achieve accurate measurements.

RF calibration

Calibration ensures the measurement system produces accurate results. All non-idealities in the paths between the test system instruments and the device under test can degrade the measurement accuracy or result in flatness errors. One must extend the measurement accuracy from the test system signal source’s output or its measure input to the DUT’s test port as shown in figure 2. Measurement of the frequency response of the test fixture, cables and connectors may be required to obtain an accurate, calibrated measurement.

5G promises substantial improvements in wireless communications, including higher throughput, lower latency, improved reliability and better efficiency. Achieving these goals requires a variety of new technologies and techniques, like higher frequencies, wider bandwidths, new modulation schemes, massive multiple input, multiple output (MIMO), phased-array antennas and more.

These technologies bring new challenges in the validation of device performance. One of the key measurements is error vector magnitude (EVM), which is a system-level specification for modulated signals as they are delivered to and received from the DUT. In many cases, the EVM value must remain below a specific threshold—and getting an accurate measurement requires that the test system itself have a low noise floor (i.e., a low EVM).
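As a rough numerical illustration (a minimal sketch, not a production measurement routine), RMS EVM can be computed by comparing received symbols against the ideal reference constellation; the QPSK symbols and offset below are hypothetical:

```python
import numpy as np

def evm_percent(measured, reference):
    """RMS error vector magnitude as a percentage of the reference
    constellation's RMS magnitude (one common normalization)."""
    measured = np.asarray(measured, dtype=complex)
    reference = np.asarray(reference, dtype=complex)
    err_power = np.mean(np.abs(measured - reference) ** 2)
    ref_power = np.mean(np.abs(reference) ** 2)
    return 100.0 * np.sqrt(err_power / ref_power)

# Ideal QPSK symbols vs. symbols with a small fixed offset error
ref = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
rx = ref + 0.01 * (1 + 1j)
print(f"EVM = {evm_percent(rx, ref):.2f}%")  # EVM = 1.41%
```

A low test-system noise floor matters because the measured EVM combines the DUT's errors with the system's own; the system contribution must be well below the device threshold being verified.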

Fig. 2: RF calibration setup.

The two types of calibration methods used to account for and correct these measurement errors are vector and scalar calibration.

Vector calibration

The vector calibration method requires measurements of both the magnitude and phase characteristics of the RF path. This can be done by either performing a network analyzer calibration at the DUT’s input and output ports or by using a calibrated network analyzer to measure the scattering (S) parameters [3] of the RF path. The latter method provides a complete, complex-valued characterization of the signal path.
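A minimal sketch of applying such a complex-valued path characterization, assuming the path's S21 has already been measured at each frequency point with a calibrated network analyzer (the path response and DUT values below are hypothetical):

```python
import numpy as np

def de_embed(measured, s21_path):
    """Remove the path's complex frequency response from a measurement.
    Both arguments are arrays of complex values, one per frequency point."""
    return np.asarray(measured, dtype=complex) / np.asarray(s21_path, dtype=complex)

# Hypothetical path: 3 dB loss and 30 degrees of phase shift at every point
path = 10 ** (-3 / 20) * np.exp(1j * np.deg2rad(30)) * np.ones(4)
dut_true = np.array([0.5, 0.6, 0.7, 0.8], dtype=complex)
measured = dut_true * path            # what the instrument actually sees
recovered = de_embed(measured, path)  # path magnitude and phase removed
print(np.allclose(recovered, dut_true))  # True
```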

Scalar calibration

The scalar calibration approach characterizes only the magnitude characteristic of the RF path, which is equivalent to measuring only the magnitude portion of the S21 transmission coefficient in a vector calibration. A common technique involves driving one end of the path with a signal generator and measuring the signal at the other end with a power meter. The magnitude portion of the path response (loss) is determined by subtracting the measured power level (in dBm) from the source power level (also in dBm). This is repeated at multiple frequencies across the band of interest to determine the overall magnitude characteristic.
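This dBm arithmetic can be sketched as follows; the frequencies and power-meter readings are hypothetical:

```python
def path_loss_db(source_dbm, measured_dbm):
    """Scalar path loss at one frequency: source power minus measured power,
    both in dBm, giving loss in dB."""
    return source_dbm - measured_dbm

# Hypothetical sweep: source held at 0 dBm, one power-meter reading per frequency
freqs_ghz = [26.5, 27.0, 27.5, 28.0]
readings_dbm = [-2.1, -2.4, -2.8, -3.3]
loss_table = {f: path_loss_db(0.0, p) for f, p in zip(freqs_ghz, readings_dbm)}
print(loss_table)  # {26.5: 2.1, 27.0: 2.4, 27.5: 2.8, 28.0: 3.3}
```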

Scalar calibrations can achieve acceptable results when high-quality components, adapters and cables are used in the system. This helps minimize measurement uncertainty and increase measurement repeatability. However, when compared to a full vector calibration, scalar calibration is less likely to detect changes in impedance match along a signal path.

Figure 3 shows the procedure for performing a scalar RF calibration. The calibration test program is executed, designating the DUT board and tester to perform the path calibration. Calibration data is then collected using external equipment (power meter, signal generator, spectrum analyzer). The collected data is compared with the most recent calibration data and, if the deviation is within the specified error range, the new calibration data is applied. If the deviation is out of range, inspection and repair are performed and the calibration procedure is repeated.
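The compare-and-apply decision in this procedure might be sketched as follows; the tolerance and per-frequency loss values are hypothetical:

```python
def apply_if_within_tolerance(new_cal, previous_cal, max_dev_db):
    """Accept new per-frequency loss data only if every point deviates from
    the previous calibration by no more than max_dev_db; otherwise flag
    the path for inspection and repair."""
    for freq, loss in new_cal.items():
        if abs(loss - previous_cal.get(freq, loss)) > max_dev_db:
            return False, f"deviation out of range at {freq} GHz"
    return True, "calibration applied"

prev = {26.5: 2.0, 28.0: 3.2}   # most recent calibration data (dB)
new = {26.5: 2.1, 28.0: 3.3}    # freshly collected data (dB)
ok, msg = apply_if_within_tolerance(new, prev, max_dev_db=0.5)
print(ok, msg)  # True calibration applied
```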

Fig. 3: Scalar RF calibration procedure.

DUT socket calibration

RF calibration is performed in two steps: calibration of the DUT Rx signal path and calibration of the DUT Tx signal path. To support this, the design and manufacture of a calibration kit is proposed.

In Step 1, the loss factor measurement environment is established by calibrating the DUT Rx signal path. The calibration kit connected to the pogo RF signal pin should be well matched to the signal trace. To measure the power level of the input signal accurately, the power sensor is first zeroed using the power meter; the signal generator and power sensor are then used to measure the loss factor of the input signal trace to the DUT. These loss measurements are frequency dependent and must be made at each production test frequency. All calibration loss factors are stored in a file to be applied during production testing.

In Step 2, the measurement environment is configured to calibrate the DUT Tx signal path. The loss factor for all input and output pogo RF signal pins can be measured using a signal generator and a spectrum analyzer, so that the RF signal trace between the input and output pogo RF signal pins can be accurately calibrated. The loss factor for the output pogo RF signal pin alone can then be calculated by subtracting the loss factor of the input pogo RF signal pin measured in Step 1. Through this process, the loss can be calibrated out for the round-trip RF signal path on the load board at each production test frequency.
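The Step 2 derivation is simple dB arithmetic; a minimal sketch with hypothetical values:

```python
def output_path_loss_db(round_trip_loss_db, input_path_loss_db):
    """Derive the output-side loss (Step 2) by subtracting the input-side
    loss measured in Step 1 from the measured round-trip loss (all in dB)."""
    return round_trip_loss_db - input_path_loss_db

# Hypothetical values at one production test frequency
round_trip = 5.5   # signal generator -> input pogo -> trace -> output pogo -> analyzer
input_side = 2.5   # input-side loss measured in Step 1 with the power sensor
print(output_path_loss_db(round_trip, input_side))  # 3.0
```

Because both loss factors are frequency dependent, this subtraction is repeated at every production test frequency and the results are stored alongside the Step 1 data.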

To fully calibrate the entire trace up to the DUT, the loss factor for the pogo pins connecting the printed circuit board (PCB) and the DUT must be considered. At relatively low frequencies, the loss factor for a pogo pin may be negligible and may be excluded from the RF calibration. However, because 5G New Radio (NR) uses the mmWave frequency band, a considerable amount of RF signal loss may occur even in the same type of pogo pin. The pogo pin’s loss contribution is therefore included as a loss element to calibrate. Since it is difficult to accurately measure pogo pin losses in the existing socket structure, a new calibration and test socket feature was developed to measure these losses accurately.

There are two basic techniques used to correct for the systematic error terms. These are short, open, load, through (SOLT) and through, reflect, line (TRL). The differences in the calibrations are related to the types of calibration standards used and how the standards are defined. They each have their advantages, depending on frequency range and application.
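As an illustration of how known standards define the systematic error terms, the following sketch solves the classic one-port error model (directivity e00, source match e11, and delta = e00*e11 - e10*e01) from ideal short, open and load standards, then corrects a raw measurement. The error-term values are hypothetical, and a real SOLT calibration also uses through standards and non-ideal standard definitions:

```python
import numpy as np

def solve_error_terms(gamma_std, gamma_meas):
    """One-port error model: gamma_m = e00 + e10*e01*gamma_a / (1 - e11*gamma_a).
    Rearranged: gamma_m = e00 + gamma_a*gamma_m*e11 - gamma_a*delta, which is
    linear in (e00, e11, delta). Three known standards give a 3x3 system."""
    A = np.array([[1.0, gs * gm, -gs] for gs, gm in zip(gamma_std, gamma_meas)],
                 dtype=complex)
    return np.linalg.solve(A, np.asarray(gamma_meas, dtype=complex))

def correct(gamma_meas, e00, e11, delta):
    """Recover the actual reflection coefficient from a raw measurement."""
    return (gamma_meas - e00) / (gamma_meas * e11 - delta)

# Hypothetical error terms (not from any real instrument)
e00, e11, e10e01 = 0.05 + 0.02j, 0.10 - 0.03j, 0.9 + 0.0j
delta = e00 * e11 - e10e01

def measure(gamma_a):
    """Simulate what the raw (uncorrected) instrument would report."""
    return e00 + e10e01 * gamma_a / (1 - e11 * gamma_a)

standards = [-1.0, 1.0, 0.0]   # ideal short, open, load
terms = solve_error_terms(standards, [measure(g) for g in standards])
dut_raw = measure(0.3)         # DUT with true reflection coefficient 0.3
print(round(abs(correct(dut_raw, *terms) - 0.3), 6))  # 0.0
```

TRL replaces the fully defined standards with a through, a reflect of unknown but repeatable value, and a line of known impedance, which is why it is often preferred at mmWave frequencies where precise open/short/load definitions are hard to realize.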

A case study [12]

At 5G carrier frequencies and bandwidths, the test fixture can impose a significant channel frequency response on the test system and adversely affect EVM results. The measurement includes the characteristics of the test fixture and the DUT – making it difficult, if not impossible, to determine the true performance of the DUT. Calibration can move the test plane from the test system connector to the input connector of the DUT (see figure 4).

This uncalibrated test system has unknown signal quality at the input to the DUT (S1’). A common mistake is to simply apply equalization on the measure side (M1’), but this occurs after the DUT and also removes some of the imperfect device performance that is to be characterized.

In this calibrated test system, the system and fixturing responses have been removed, enabling a known-quality signal to be incident on the DUT (S1). The measurement errors can also be removed (M1).

Fig. 4: Test plane movement to the DUT through calibration.

Figure 5 shows the results of analyzing the effect of calibration on the RF modulation test for orthogonal frequency-division multiplexing (OFDM). Frequency response characteristics were compared for a 900 MHz bandwidth (BW) signal at 28 GHz. The upper trace shows the amplitude response with a significant roll off at the upper end of the bandwidth. The lower trace shows the phase response, which also has considerable variation over the BW.

When calibration was not applied, large deviations occurred in the magnitude (7 dB) and phase (45 degrees) of the frequency response. When calibration was applied, the variations were reduced to 0.2 dB in magnitude and 2 degrees in phase.

Before system calibration: these OFDM frequency response corrections for an uncalibrated system show variations of nearly 7 dB in raw amplitude and 45 degrees of phase shift across a 900 MHz bandwidth at 28 GHz.

After system calibration: the same OFDM response for a calibrated system, showing variations of only 0.2 dB and 2 degrees.

Fig. 5: OFDM modulation frequency response before and after calibration.

Figure 6 shows the demodulation results after calibration for a single-carrier 16 quadrature amplitude modulation (16QAM) signal. The upper-left trace shows a very clean constellation diagram. This implies that the equalizer response in both magnitude and phase is flat and within specification, indicating that the equalizer is not compensating for any residual channel response in the test fixture. The lower-left trace shows the spectrum with a bandwidth that is nearly 1 GHz wide. The lower-middle trace shows the error summary: EVM is approximately 0.7 percent, which provides acceptable margin compared with the device specifications. This system would be ideal for determining a device’s characteristics.

Fig. 6: 16QAM demodulation results after calibration. Calibration enabled the generation of a 1 GHz wide signal with an EVM of less than 0.7 percent at 28 GHz. This EVM occurs at the input plane of the DUT.

Summary

RF calibration is a required process for production testing of semiconductor RFICs that translates into acceptable throughput, margin and yield. Reliable and repeatable calibration ensures consistent results that make it easier to pinpoint product or design problems and thereby minimize delays in development and manufacturing. RF calibration at Amkor is a vital setup for successful production testing of customer parts.

References

  1. “Using Calibration to Optimize Performance in Crucial Measurements,” (Keysight, 5992-0891).
  2. “Accelerate 5G Testing: 5G Manufacturing Test Considerations,” (Keysight, 5992-3659).
  3. “S-Parameters,” Microwaves101.com
  4. “4 Hints for Better Millimeter-wave Signal Analysis,” White Paper, (Keysight, 5992-2970).
  5. “A Novel BiST and Calibration Technique for CMOS Down-Converters,” 2008 4th IEEE International Conference on Circuits and Systems for Communications (ICCSC 2008).
  6. “Self-calibration of input-match in RF front-end circuitry,” IEEE Transactions, Volume: 52, Issue: 12, Dec. 2005.
  7. “Calibration techniques of active BiCMOS mixers,” IEEE Journal of Solid-State Circuits, Volume: 37, Issue: 6, June 2002.
  8. “Digital calibration of gain and linearity in a CMOS RF mixer,” 2008 IEEE International Symposium on Circuits and Systems (ISCAS).
  9. “Verification of wafer-level calibration accuracy at high temperatures,” 2008 71st ARFTG Microwave Measurement Conference.
  10. “A multiline method of network analyzer calibration,” IEEE Transactions on Microwave Theory and Techniques, Volume: 39, Issue: 7, Jul 1991.
  11. “On-wafer calibration techniques for giga-hertz CMOS measurements,” Proceedings of 1999 International Conference on Microelectronic Test Structures.
  12. “Comparison of the ‘pad-open-short’ and ‘open-short-load’ deembedding techniques for accurate on-wafer RF characterization of high-quality passives,” IEEE Transactions, Volume: 53, Issue: 2, Feb. 2005.
  13. “Should I be worried about 5G calibration?” Keysight Community blog.

Additional authors

HyeongSik Youn, Director, Test Engineering; JeongYon Kim, Director, Test Development Department Manager; SangHo Byun, Senior Director, Master Test Engineering; MinHo Chang, Vice President, Test Development Division Manager  l  Amkor Technology Korea

Venancio Kalaw, Radio Frequency & MEMS, Test Development Engineering; Mon Lopez, Director, Development Department Manager  l  Amkor Technology Philippines


