Designing 5G Chips

The next-gen wireless technology is riddled with problems, but that hasn’t slowed the pace of development.

5G is the wireless technology of the future, and it’s coming fast.

The technology boasts very high-speed data transfer rates, much lower latency than 4G LTE, and the ability to handle significantly higher densities of devices per cell site. In short, it is the best technology for the massive amount of data that will be generated by sensors in cars, IoT devices, and a growing list of next-generation electronics.

Driving this technology is a new radio interface, which will enable mobile network operators to achieve higher efficiencies with similar allocated spectrum. New network hierarchies will facilitate 5G-sliced networks, allowing multiple traffic types to be allocated dynamically according to specific traffic needs.

“It’s about capacity and latency,” said Michael Thompson, RF solutions architect in the Custom IC & PCB Group at Cadence. “How fast can I get lots of data? Another benefit is that it’s a dynamic system, so it allows me to not have an entire channel, or multiple channels of bandwidth, necessarily tied up. It’s a little bit more like bandwidth-on-demand, depending on what the application is. In this way, it’s more flexible than previous-generation standards. Similarly, the capacity is much higher.”
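Thompson's "bandwidth-on-demand" idea can be illustrated with a rough sketch. The class, names, and block counts below are purely illustrative and are not taken from the 5G NR specification; the point is simply that capacity is granted while needed and returned to a shared pool afterward, rather than dedicating whole channels permanently.

```python
# Toy sketch of bandwidth-on-demand: a shared pool of spectrum
# resource blocks is granted to applications only while they need
# them. Illustrative only; not the 5G NR scheduler.

class SpectrumPool:
    def __init__(self, total_blocks):
        self.free = total_blocks          # resource blocks currently unused
        self.grants = {}                  # app name -> blocks held

    def request(self, app, blocks):
        """Grant blocks if available; return the number actually granted."""
        granted = min(blocks, self.free)
        self.free -= granted
        self.grants[app] = self.grants.get(app, 0) + granted
        return granted

    def release(self, app):
        """Return an app's blocks to the shared pool."""
        self.free += self.grants.pop(app, 0)

pool = SpectrumPool(total_blocks=100)
print(pool.request("video_stream", 60))   # 60 blocks granted
print(pool.request("sensor_burst", 60))   # only 40 left, so 40 granted
pool.release("video_stream")              # stream ends, blocks return
print(pool.request("sensor_burst", 30))   # 30 more can now be granted
```

The same pool serves very different traffic types, which is the flexibility Thompson contrasts with earlier standards' statically allocated channels.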

This opens up new applications in the home, at sporting events, and within industry and transportation. “If I had enough sensors within an aircraft, I could be monitoring that, and with a machine learning-type of application, it begins to understand when parts or systems or processes need to be serviced or replaced,” Thompson said. “So a plane is flying across country and it’s going to land at La Guardia Airport. Before it gets there, a message comes down that a certain part is showing some signs of wear. So the moment the plane lands, the part is waiting and someone is scheduled to come replace it. This works for things like very large earth-moving equipment and mining equipment, too, in which the system monitors itself. You want to prevent a breakdown in these multi-million dollar pieces of equipment so they won’t be sitting around waiting for a part to be sent out. And there are hundreds of thousands of these items that you’re going to be receiving data from at one time. A lot of bandwidth is needed, with low latency, to get the information quickly. And if you have to turn around and send something back, you can send it back very quickly, as well.”

One technology, multiple implementations
Currently, the term ‘5G’ is being used in multiple different ways. In its most generic form, it is an evolution of cellular wireless technology that will allow new services to be managed over a standards-driven radio interface, explained Colin Alexander, director of wireless marketing for the infrastructure line of business at Arm. “Multiple existing and new spectra will be allocated to carry this traffic, from sub-1GHz for longer-range, suburban and wider coverage, through to mmWave traffic ranging from 26GHz through to 60GHz for new, high-capacity, low-latency use cases.”

The Next Generation Mobile Networks Alliance (NGMN) and other organizations devised a representation that mapped use cases onto the three points of a triangle—one corner represents enhanced mobile broadband, one represents Ultra Reliable Low Latency Communications (URLLC), and the third Massive Machine Type Communications. Each requires a very different type of network to service it.
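The three corners of that triangle are usually summarized by the headline targets set for IMT-2020. The figures below are the commonly quoted targets (not guarantees, and subject to refinement across releases), collected here as a quick reference:

```python
# The three corners of the NGMN/ITU use-case triangle, with the
# headline IMT-2020 targets commonly quoted for each. Targets, not
# guarantees; exact figures vary by release and deployment.

USE_CASES = {
    "eMBB":  {"headline": "peak data rate",     "target": "20 Gb/s downlink"},
    "URLLC": {"headline": "user-plane latency", "target": "1 ms"},
    "mMTC":  {"headline": "connection density", "target": "1e6 devices/km^2"},
}

for name, req in USE_CASES.items():
    print(f"{name:6s} {req['headline']:20s} {req['target']}")
```

No single network configuration hits all three targets at once, which is what drives the slicing and core-network requirements Alexander describes next.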

“This leads to the other requirement of 5G — defining the requirements of the core network,” Alexander said. “The core network will allow the ability to scale in order to efficiently carry all of these different traffic types.”

Mobile network operators are trying to ensure they can upgrade and scale their networks as flexibly as possible, utilizing virtualized and containerized software implementations running on commodity compute hardware in the cloud, he noted.

Where URLLC traffic types are concerned, it now may be possible to manage these applications from the cloud. But that requires some of the control and user functions to be moved much closer to the edge of the network, toward the radio interface. Consider smart robots in factories, for example, which for safety and efficiency reasons will require low-latency networks. That will require edge compute boxes—each with compute, storage, acceleration and machine-learning functions—to be pushed out to the cell edge, said Alexander, noting that some but not all V2X and automotive applications services will have similar requirements.
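A back-of-envelope calculation shows why the compute has to move. Even before any queuing or processing, propagation delay over fiber consumes a large share of a roughly 1 ms URLLC budget; the 2e8 m/s figure below is the usual round number for light in fiber, and the distances are illustrative:

```python
# Back-of-envelope: why URLLC pushes compute toward the cell edge.
# Fiber propagation alone eats into a ~1 ms latency budget.
# 2e8 m/s is the usual round number for light in optical fiber.

C_FIBER = 2e8  # m/s, approximate propagation speed in fiber

def round_trip_ms(distance_km):
    """Round-trip propagation delay over fiber, in milliseconds."""
    return 2 * (distance_km * 1e3) / C_FIBER * 1e3

print(round_trip_ms(500))   # distant cloud data center: 5.0 ms
print(round_trip_ms(10))    # edge site near the cell:   0.1 ms
```

At 500 km the propagation delay alone already blows the budget, while an edge site a few kilometers away leaves nearly all of it for actual computation.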

“Where low latency is a requirement, processing again may be pushed to the edge in order to allow V2X decisions to be computed and relayed. If the application is more related to management of resources, like parking or manufacturer tracking, then computation could happen on commodity compute equipment in the cloud,” he said.

Designing for 5G
For design engineers tasked with designing 5G chips, there are many moving pieces in this puzzle, each with its own set of considerations. At the base station, for example, one of the main problems is power consumption.

“Most of the base stations are being designed on leading process nodes for ASICs and FPGAs,” said Geoff Tate, CEO of Flex Logix. “Right now they’re designed with SerDes, which uses a lot of power and takes up area. If you can build the programmability into the ASIC, you eliminate power and area because you don’t need a SerDes running fast outside the chip, and you have more bandwidth between the programmable logic and the ASIC. Intel has been doing this with its Xeons and Altera FPGAs in the same package. You get 100X more bandwidth that way. And one of the interesting things about base stations is you design that technology and then can sell it and use it everywhere in the world. With cell phones, you design different versions for different countries.”

For equipment that is to be deployed in the core network or in the cloud, the requirements are different. One of the key considerations there is an architecture that allows software to be easily managed and use cases to be easily ported onto equipment.

“A standard ecosystem for handling virtualized, containerized services, like for example OPNFV (Open Platform For Network Function Virtualization), is important,” said Arm’s Alexander. “Management of interactions between network elements and flows of traffic between boxes through orchestration of services will also be key. An example of this is ONAP (Open Network Automation Platform). Power consumption and the efficiency of the equipment is also a key design choice.”

At the edge of the network, the requirements include low latency, high user plane throughput, and low power.

“The ability to easily support accelerators for a number of different compute requirements that aren’t necessarily handled most efficiently in general-purpose CPUs will be needed,” Alexander said. “The ability to scale between SoC devices through multiple chips and through chassis-mounted equipment is important. It is also important to support an architecture that can easily scale between ASIC-, ASSP- and FPGA-based designs, since edge compute will be populated across the network in various sizes of equipment. Software scalability is also important.”

5G also could spawn changes to chipset architectures, particularly in terms of where the radio sits. Analog front ends for LTE solutions are placed on the radio, on the processor, or fully integrated. When design teams migrate to a new technology, those functions typically move off-chip at first, then back on-chip as the technology matures, said Ron Lowman, strategic marketing manager for IoT at Synopsys.

“With 5G there will be multiple radios and more advanced technology, and faster, more bleeding-edge technology nodes, such as 12nm and beyond, are expected to play a big role for the integrated pieces,” Lowman said. “This requires giga-sample-per-second capabilities for the data converters that go into the analog front end. High reliability is always important, too. From a processing standpoint, complexity is much higher than it has been in the past due to such things as aggregated channels, beamforming, different spectrum licensed by different entities — even open spectrum and the leverage of WiFi. Trying to handle all of that is an intense challenge, where machine learning and artificial intelligence may be well suited to do some of the heavy lifting. This, in turn, impacts architecture, because it puts strain on not just the processing, but the memory as well.”
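The giga-sample requirement Lowman mentions falls directly out of the Nyquist criterion: a converter directly sampling an aggregated channel must run at at least twice its bandwidth, and practical designs oversample beyond that. The sketch below assumes illustrative 5G NR figures (FR2 permits channels up to 400 MHz, and carrier aggregation multiplies that) and a nominal 1.25x oversampling factor:

```python
# Why 5G front ends need giga-sample-per-second data converters.
# Nyquist: sample rate >= 2x signal bandwidth, plus some margin.
# Bandwidths and the 1.25x oversampling factor are illustrative.

def min_sample_rate_gsps(bandwidth_mhz, oversampling=1.25):
    """Minimum converter rate in GS/s for a given signal bandwidth."""
    return 2 * bandwidth_mhz * oversampling / 1e3

print(min_sample_rate_gsps(400))      # one 400 MHz FR2 channel: 1.0 GS/s
print(min_sample_rate_gsps(4 * 400))  # 4 aggregated channels:   4.0 GS/s
```

Compared with LTE's 20 MHz channels (roughly 0.05 GS/s by the same arithmetic), this is a jump of well over an order of magnitude in converter rate.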

Cadence’s Thompson agrees. “As we move forward in developing for 5G, or IoT with higher 802.11 standards, or even some ADAS considerations, we’re trying to make it lower power, cheaper, smaller, and improve production by going to smaller nodes. Combine that with the problems you would see in RF,” he said. “As the nodes go smaller, the IC gets smaller. In order for the IC to have the full advantage of getting smaller, it has to go into a smaller package. Everything is driving things smaller and tighter together, but that’s bad for RF design. In analog I’m not worrying so much about distributed effects of my layout. If I have a piece of metal, it may look like a bit of resistance, but it looks like resistance at all frequencies. At RF, that same metal is a transmission line, and it looks different depending on which frequency I’m sending down it. Those fields will launch into other parts of the circuit. Now I’m making everything closer together, and as that happens, the coupling goes up exponentially. These coupling effects are getting more pronounced as I get to smaller nodes, which also means the biasing voltages are smaller. So noise is a bigger effect, because I’m not biasing a device at the higher voltage. At the smaller voltage, the same amount of noise has a larger effect. Many issues like this come in at the system level for 5G.”
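Thompson's last point about bias voltage and noise can be put in numbers. With a fixed noise floor, shrinking the available signal swing directly costs signal-to-noise ratio; the voltages below are illustrative and are not tied to any specific process node:

```python
# Thompson's point in numbers: with a fixed noise floor, shrinking
# the bias/signal voltage at smaller nodes directly costs SNR.
# Voltages here are illustrative, not tied to any specific node.
import math

def snr_db(v_signal, v_noise):
    """SNR in dB for given signal and noise amplitudes."""
    return 20 * math.log10(v_signal / v_noise)

V_NOISE = 0.001                        # 1 mV of noise, held constant
print(round(snr_db(1.2, V_NOISE), 1))  # 1.2 V swing: 61.6 dB
print(round(snr_db(0.6, V_NOISE), 1))  # 0.6 V swing: 55.6 dB
```

Halving the swing costs about 6 dB of SNR for the same absolute noise, which is why smaller bias voltages make the same coupling and noise effects more damaging.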

New focus on reliability
Reliability takes on new meaning in wireless as these chips are used in automotive, industrial and medical applications. This isn’t something normally associated with wireless communications, where a glitch in connectivity, degradation in performance, or any other problem that can interrupt service typically has been viewed more as an inconvenience than a safety-related issue.

“We need to find new methods to verify that chips used in functional safety will work reliably,” said Roland Jancke, head of the department for design methodology for Fraunhofer EAS. “We’re not there yet as an industry. We are struggling now to build the development process. We need to look at the interaction of the parts and the tools, and we still have a lot of work to develop consistency.”

Jancke noted that most of the concerns so far have focused on a single design fault. “What happens if there are two or three faults? Verification people need to educate designers about what might go wrong and where the faults are, and then roll that back through the design process.”

That has become a big concern across a number of safety-critical markets, and the big problem with combining wireless and automotive is that the number of variables continues to increase on both sides. “Some of this needs to be designed to be on all the time,” said Oliver King, CTO at Moortec. “Modeling this ahead of time works where you can predict how things are going to be used. But if you look at a microprocessor, it depends on how the software is used, and that’s hard to predict. It takes time to see how things work.”

It takes a network of villages
Still, enough companies believe the benefits of 5G warrant the effort of building up the infrastructure required to make all of this work.

The big differentiator for 5G is the data rates that it offers, said Magdy Abadir, vice president of marketing at Helic. “5G can be between 10 and 20 gigabits per second. The infrastructure has to support that kind of data rate moving around, and the chips have to process this data coming in. There are also frequency considerations for the receiver and transmitter in bands that are 100GHz-plus. RF people have been accustomed to 70GHz for radars and things like that.”

Creating this infrastructure is a daunting feat, and it cuts across multiple segments of the electronics supply chain.

“The magic being talked about to make all of this happen is the effort to do more integration in terms of the RF in the SoC,” said Abadir. “The people in the trenches building chips that make this a reality are talking about integrating all of these RF components with analog components, ADCs and DACs, which have very high sample rates. Everything needs to be integrated into the same SoC. We’ve seen integration and we’ve talked about integration challenges, but this amplifies everything because it sets a high target and it squeezes the designer for integration beyond even what had been previously considered. It’s very challenging to keep everything isolated and not affecting the neighboring circuits.”

To put this in perspective, 2G was all about voice, while 3G and 4G were much more about data and supporting it more efficiently. 5G, in contrast, represents a proliferation of different devices and different services, along with an increase in bandwidth.

“There is a 10x increase in the bandwidth that’s going to be required for enhanced mobile broadband, along with new usage models like low-latency connectivity,” said Mike Fitton, strategic planning and business development at Achronix. “Also, 5G is expected to become very important for V2X, particularly with next-generation 5G. Release 16 of 5G will have URLLC, which will be important for V2X applications. Another aspect concerns the massive machine-type connections — IoT-type applications whereby lots of devices are connected.”

Planning for an uncertain future
5G often is discussed as a series of superlatives, with a 10X increase in bandwidth, a 5X reduction in latency, and a 5X to 10X increase in the number of devices. This is made all the more difficult by the fact that the ink isn’t quite dry on the 5G spec. There are always late additions, which require flexibility, and that translates to programmability.

“If you look at these two big requirements of needing a hardware data pipeline because of the high throughput, and needing flexibility, it means you probably need some kind of dedicated SoC or ASIC, which has lots of programmability across the hardware and the software,” Fitton said. “If you look at every 5G platform today, they are all based on FPGAs, because you just don’t get the throughput otherwise. At some point it is incredibly likely that all of the big wireless OEMs will move toward ASICs to get more optimized cost and power, but there is that need for flexibility coupled with the drive toward lower cost and power. It’s about keeping flexibility where you need it (in FPGA or embedded FPGA) and then, where possible, hardening the functions to get the lowest cost and power.”

Flex Logix’s Tate agrees. “There are 100-plus companies working in this area. Spectrums are different, protocols are different, and the chips being used are different. There are repeater chips that will be more constrained in power on building walls, which may be a place where eFPGAs have even higher value.”

—Ed Sperling contributed to this report.
