The 5G mmWave Commercialization Effort Is Underway

Three things will need to happen to make mmWave both profitable and viable.

By David Vondran and Rodrigo Carrillo-Ramirez

5G broadband cellular technology entered its first major rollout phase in 2019. In recent years, 5G adoption has been highly visible across the consumer electronics industry, with 5G capabilities now a key selling point for mid-tier to high-end mobile devices.

Behind the scenes, however, there have been a number of developments designed to lay the groundwork for even more ambitious capabilities. Most recently, in March 2020, the 37, 39 and 47 GHz bands were auctioned off in the US to allow carriers to tap into 5G’s millimeter wave (mmWave) spectrum to achieve additional capacity and higher throughput. These bands fall within Frequency Range 2 (FR2), which supplements 5G’s more conventional cellular spectrum (FR1). This auction became a major driver of subsequent commercialization efforts.

When this and all the previous industry activity are taken into account, the stage is set for the following 5G mmWave band assignments (N257–N262):

  • N257: 26.50–29.50 GHz
  • N258: 24.25–27.50 GHz
  • N259: 39.50–43.50 GHz
  • N260: 37.00–40.00 GHz
  • N261: 27.50–28.35 GHz
  • N262: 47.20–48.20 GHz

Although many of the most important regulatory pieces are now in place to realize the potential of 5G, the commercialization of mmWave technology is anything but easy and will take time before it expands beyond urban centers. For the US market, which is dominated by three carriers (Verizon, AT&T and T-Mobile), the mmWave frequencies from 26–47 GHz pose a particular challenge to the semiconductor ecosystem—a challenge that, as we’ll see below, is likewise related to frequency coverage. And the lesson this holds for the US market will also apply globally.

The short story is that three things will need to happen to make mmWave both profitable and viable.

  • First, the global supply chain needs to expand to support both the new infrastructure and the consumer-facing equipment. Only then will the industry see the initial return on investment (ROI) from 5G.
  • Next, the R&D building blocks have to be commercialized in order to be considered truly production-ready from a mass-manufacturing standpoint.
  • Finally, the industry as a whole must be able to identify and substantiate the macroeconomic effects—namely, sustainable profits—that motivate the entire commercialization process.

That sounds like a tall order. One key catalyst for this three-part chain of events is the knowledge that FR2 can be integrated and tested in a cost-effective way. This knowledge will make the technology more viable for device manufacturers who want to maintain their existing market lead or make their products more competitive in the near future. Those who do so stand to put themselves at the forefront of 5G mmWave adoption and reap the pioneer advantage.

The production hurdles facing mmWave devices

Before we delve into solutions, let’s first look at some of the top production roadblocks for mmWave devices. These roadblocks help explain why some solutions are better suited to FR2 frequencies than others.

Anticipation versus reality: When technology pioneers imagine the consumer and industrial applications that 5G FR2 will empower, there’s naturally a lot of excitement over mmWave. Its faster throughput lends itself to everything from data-hungry video streaming in the airport lounge to game-changing augmented reality experiences at the sports stadium.

Then the engineers and accountants start to weigh in on the technical hurdles and the costs involved, and the excitement starts to fade. This can lead some manufacturers to rule out the mmWave market before they’ve even started.

Sticker shock: On the subject of profitability, it’s common knowledge that higher frequencies translate to higher costs: if sub-6 GHz RF hardware is already expensive relative to digital hardware, then mmWave hardware can be expected to cost even more. In fact, manufacturers expend a great deal of effort just to minimize the number of external RF instruments on production lines because of the dramatic effect they have on overall cost.

As a general rule of thumb, approximately 2–4% of a chip’s average selling price is typically budgeted for packaging and test. Device manufacturers therefore worry that this budget will be insufficient, because test equipment for the mmWave spectrum is often far more expensive than its sub-6 GHz RF counterpart. For example, RF benchtop instruments for testing radio links usually retail at more than $10k per 10 GHz of coverage.

Device complexity: The higher frequencies of FR2 bring greater device complexity. More expertise is needed to design and develop the hardware, and new instrumentation is required to test the advanced-packaging devices (e.g., antenna-in-package, or AiP) that employ beamforming and the other intricate operations that enhance the 5G mmWave standard. Some of these aspects are described in more detail in our previous post, “The Future of Wireless Test Is Over the Air.”

Additionally, the same testing regimen cannot be applied to all devices. Some devices have additional frequency conversion steps that require complex calibration. The new over-the-air (OTA) test methods noted above also highlight the increasing complexity of this process and the need to ensure the devices are fully functional before shipment.

Streamlining production: The supply chain is accustomed to having separate insertions and separate instruments for different bands; one example might be a discrete insertion optimized for cellular and another optimized for connectivity. By consolidating, standardizing and streamlining these fundamental production steps, manufacturers can achieve powerful economies of scale.

But given the challenges associated with mmWave bands, it’s not always clear how these same economies of scale can be realized. For instance, does each FR2 band need its own insertion, or is it possible to find a universal approach that can handle the entire spectrum? This will largely boil down to the test strategy that manufacturers choose.

Signal delivery: As if those issues weren’t enough, mmWave presents one further headache. The standard RF cable—and by extension the standard signal delivery method—simply isn’t designed for these mmWave frequencies. The limitation has to do with something incredibly basic: the connector interface.

The common RF connector found in production environments only supports frequencies up to 18 GHz. Because coaxial connectors are much easier to work with than waveguides, enhancements to coaxial signal delivery are necessary for optimal 5G FR2 coverage. Without such enhancements, manufacturers risk sacrificing performance, repeatability and affordability when testing devices at scale and at speed.

While this isn’t an exhaustive list, it does highlight some of the most pressing concerns. And getting all these high-level variables under control is necessary for device manufacturers to achieve technical and economic success as they capitalize on the growing number of mmWave use cases. Our post “The Great Migration to 5G Is Underway” has more information on this industry trend and its implications for manufacturers who want to leverage its potential.

Now let’s look at how to do just that.

Developing an mmWave test strategy is an iterative process

From a historical perspective, developing a new mmWave test strategy has some similarities with RF commercialization efforts. As with RF, an mmWave test strategy involves a multi-pronged, multi-discipline approach, albeit one that demands a much more agile mindset given the newness of mmWave.

First off, as we saw above in the section entitled ‘Sticker shock,’ the instrumentation cost for FR2 testing tends to climb steeply as frequency increases. The initial test strategy therefore focuses on reducing the amount of mmWave testing needed. This is achieved using two main techniques:

Built-in self-test (BIST). As its name suggests, BIST consists of incorporating components into the integrated circuit (IC) that will allow it to internally test sections of itself for basic functionality and performance. Those sections then become part of broader loopback mechanisms.

BIST might be less expensive, but the results are not obtained from a representative use case; the data are therefore not tightly distributed and often lack sufficient accuracy. The chief advantage of BIST is that it reduces the duration of the test cycle, as well as the complexity of the test–probe setup, thanks to its integrated nature.

External loopback. This technique is common in digital circuitry and sees the unprocessed signal looped back to the circuit itself. It’s attractive for RF ICs that integrate the transmitter (TX) and receiver (RX) sections in the same design. In this technique, the TX generates a signal that is then externally looped back to the RX, where characterization can occur using available resources. This low-cost approach bypasses the need for external RF instrumentation. It also offers a way to overcome internal BIST constraints because external loopback is somewhat more deterministic.

However, the results of external loopback lack any independent mechanism to verify absolute performance (e.g., output power) or to isolate the individual performance of the circuit blocks. This lack of parametric insight into absolute performance means that external loopback can only provide a relative view of device-to-device repeatability. The metrics call for independent confirmation, especially for any application intended for a licensed band, given that nonconformance can lead to more than just a poor customer experience; it can also result in steep fines from regulatory agencies.
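The limitation can be illustrated with a toy sketch. The gain values below are invented for illustration; the point is structural: a loopback measurement only ever observes the combined TX-plus-RX path, so two devices with very different block-level performance can look identical.

```python
# Toy illustration (assumed numbers) of why external loopback yields
# only relative, not absolute, insight: the measurement combines the
# TX and RX paths, so individual blocks cannot be isolated.

def loopback_measurement(tx_gain_db: float, rx_gain_db: float) -> float:
    """What a TX->RX external loopback sees: only the combined path gain."""
    return tx_gain_db + rx_gain_db

# Two hypothetical devices with very different TX/RX gain splits...
dev_a = loopback_measurement(tx_gain_db=20.0, rx_gain_db=10.0)
dev_b = loopback_measurement(tx_gain_db=25.0, rx_gain_db=5.0)

# ...are indistinguishable in loopback (both measure 30.0 dB), even
# though their absolute TX output power, a licensed-band conformance
# parameter, differs by 5 dB.
print(dev_a, dev_b)  # 30.0 30.0
```

This is exactly why an independent, externally calibrated check remains necessary before shipment for licensed-band parts.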

Should these initial techniques fall short of production goals, it could create a need to fall back on conventional best practices that utilize RF instrumentation—even if it’s done as a last resort and the associated costs are relatively high. Here the challenge is for the test engineers to fully utilize the lowest-cost instruments possible and apply proven strategies for success. Meanwhile, the organization can revisit the budget to find the right balance under the circumstances.

Although it’s not ideal, this combination of cost-optimized techniques and best practices helps to ensure a path to production. The other advantage is that it creates a space for a hybrid testing approach to emerge.

Yet, with so many unknowns, what is the justification for early 5G mmWave commercial adoption? The underlying rationale is twofold.

First, the chief advantage lies in having cultivated a mature mmWave large-scale production and testing process by the time other players straggle into the market.

Second, an optimized test strategy is what emerges through the continued data-gathering during each phase of that strategy’s development. That is to say, the best test strategy is an iterative process whereby thoughtful evolution—guided, of course, by ongoing data analytics—forges a path to successful commercialization. Importantly, success is defined here in terms of both technical and economic factors.

Generally speaking, if we apply this iterative approach to the end goal of mmWave commercialization, the ideal solution is to have a single testing instrument that covers the entire 5G FR2 radio with support for bands N257–N262 (i.e., 24.25–48.20 GHz). Discrete instruments for different bands would create inefficiencies throughout the ecosystem.
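The coverage question reduces to a simple interval check. The band edges below are the standard 3GPP FR2 assignments; the instrument ranges passed in are illustrative assumptions, not the spec of any particular product.

```python
# Sketch: does a single instrument's frequency range cover every FR2
# band, or are discrete per-band instruments needed?
FR2_BANDS_GHZ = {
    "n257": (26.50, 29.50),
    "n258": (24.25, 27.50),
    "n259": (39.50, 43.50),
    "n260": (37.00, 40.00),
    "n261": (27.50, 28.35),
    "n262": (47.20, 48.20),
}

def covers_all(lo_ghz: float, hi_ghz: float) -> bool:
    """True if one instrument spanning [lo_ghz, hi_ghz] covers every band."""
    return all(lo_ghz <= b_lo and b_hi <= hi_ghz
               for b_lo, b_hi in FR2_BANDS_GHZ.values())

print(covers_all(24.25, 48.20))  # True: one insertion for the whole FR2 radio
print(covers_all(24.25, 40.00))  # False: n259 and n262 would need another setup
```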

This integrated solution would also need to support both high-performance continuous wave (CW) measurements (e.g., gain) and modulated measurements (e.g., error vector magnitude). And, in keeping with the dual emphasis on technology and economics, the same solution would have to allow for massively parallel testing using high site density, such as quad or octal sites, to improve the economics.

A final consideration for this consolidated test instrument would be to support the semiconductor content standard workflow: wafer sort (WS), functional test (FT) and over-the-air (OTA) test. This would allow for the capture—and eventual optimization—of data analytics throughout each of the test insertions while still realizing the benefits from economy of scale.

Solution for mmWave testing

If all this sounds very aspirational and theoretical, it isn’t. Teradyne’s new UltraWaveMX53 test instrument is able to address the 5G FR2 frequencies with support for the complete licensed range from 23.8 to 52.6 GHz.

Not only does the UltraWaveMX53 provide performance over this entire spectrum, but it also features fully integrated temperature stabilization and calibration. Its per-channel integrated synthesizer architecture provides best-in-class phase noise performance, which in turn enables sourcing and measurement of modulated waveforms with outstanding error vector magnitude (EVM) performance.

Furthermore, the UltraWaveMX53 is versatile enough to accommodate almost any production environment. It’s available in several configurations to support all common semiconductor insertions (e.g., WS, FT, OTA). At the same time, every configuration provides 16 ports with two fully independent channels that cover frequencies up to 52.6 GHz.

Of course, this addresses the pressing technical issues cited above. But what about economics?

Our UltraFLEX platform has that covered too. Because this turnkey instrument is capable of quad-site testing, it can test four DUTs (4x) in parallel to accelerate test times without sacrificing rigor. This parallel testing capability is an important enabler of moving production to commercial scale.
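The economic effect of site density can be sketched with simple arithmetic. The 8-second test time and the 0.9 multisite efficiency below are assumed illustrative values, not published UltraFLEX figures.

```python
# Back-of-the-envelope sketch (assumed numbers) of why quad-site
# parallel testing matters: tester time per device shrinks with
# site count, discounted by a multisite-efficiency factor.

def time_per_device(test_time_s: float, sites: int,
                    multisite_efficiency: float = 1.0) -> float:
    """Effective seconds of tester time consumed per device.

    multisite_efficiency < 1.0 models the real-world overhead of
    resources shared across sites (an assumed parameter).
    """
    return test_time_s / (sites * multisite_efficiency)

single = time_per_device(test_time_s=8.0, sites=1)                        # 8.0 s
quad = time_per_device(test_time_s=8.0, sites=4, multisite_efficiency=0.9)  # ~2.2 s

print(f"single-site: {single:.2f} s/device, quad-site: {quad:.2f} s/device")
```

Even with imperfect multisite efficiency, the roughly 3.6x reduction in tester seconds per device translates directly into lower cost of test at commercial volumes.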

Conclusions

It’s no exaggeration to say that 5G heralds a new era in wireless communication, especially with the deployment of mmWave technology. This unique 5G standard has opened up new applications that will fuel both consumer and enterprise demand. And our new product, the UltraWaveMX53, complements the rest of our 5G product portfolio: the UltraWaveMX8, MX20e and MX44e.

Yet mmWave will also have profound effects on the semiconductor ecosystem, not least because it poses new challenges like the need for better testing and calibration. Device manufacturers have to be aware of these challenges and devise a test strategy that’s able to hit technological as well as economic targets. An iterative approach to developing this test strategy is wise at this early stage, but one thing is certain: central to any strategy is a partner you can trust and a turnkey solution that delivers a low cost of test alongside state-of-the-art performance.

From our equipment to our expertise, Teradyne is an integral part of the test strategies of global suppliers who are actively working toward mmWave technology commercialization. As a result, we’re evolving too: our contribution to the mmWave commercialization process adapts to every shift in the semiconductor ecosystem, and our support services, including applications assistance, device interface solutions, engineering services and software services, help ensure success throughout the commercialization process.

Rodrigo Carrillo-Ramirez has over 20 years of experience designing millimeter wave solutions, first at Analog Devices and now at Teradyne as the millimeter wave engineering manager. He holds Ph.D. and M.S. degrees in electrical and computer engineering from the University of Massachusetts Amherst and a B.S. in electrical and mechanical engineering from the National University of Mexico.


