Experts at the Table, part 2: Getting 5G standards and technology ready is only part of the problem. Reducing latency and developing applications to utilize 5G have a long way to go.
Semiconductor Engineering sat down to talk about challenges and progress in 5G with Yorgos Koutsoyannopoulos, president and CEO of Helic; Mike Fitton, senior director of strategic planning and business development at Achronix; Sarah Yost, senior product marketing manager at National Instruments; and Arvind Vel, director of product management at ANSYS. What follows are excerpts of that conversation. To view part one, click here.
SE: Can all of the complexity around 5G really be simulated and modeled? Is that possible?
Vel: Simulation of antennas is a known problem. You're basically following Maxwell's equations and trying to understand the behavior when current flows through a conductor. As long as you solve that realistically and in the most efficient way, you can solve any electromagnetic induction problem. The only bottleneck is compute time. If you have a 64 x 64 array, you have millions of possible combinations. You can use software to determine which are the most important, rather than trying to solve every single combination.
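Vel's compute-time point is combinatorial. As a hedged illustration (the array size, 2-bit phase shifters, and the random stand-in for link gain are assumptions, not details from the conversation), even modest per-element phase resolution makes exhaustive search impossible, which is why tools prune to a small set of candidate beams:

```python
import random

ELEMENTS = 64 * 64      # 64 x 64 planar array -> 4,096 elements (illustrative)
PHASE_STATES = 4        # assume a 2-bit phase shifter per element

# Exhaustive search over every per-element phase setting is hopeless:
search_space = PHASE_STATES ** ELEMENTS
digits = len(str(search_space))   # the count has thousands of digits

# Practical alternative: evaluate a small precomputed beam codebook and
# keep only the most important entries, rather than every combination.
random.seed(0)
codebook = [f"beam_{i}" for i in range(256)]
gain = {b: random.random() for b in codebook}   # stand-in for simulated link gain
top_beams = sorted(codebook, key=gain.get, reverse=True)[:8]
```

The pruning step here is a toy (random gains, top-8 cutoff); the point is only the gap between the full search space and the handful of candidates actually evaluated.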
Yost: I don’t think simulation is the right answer. Software-defined radio is all about prototyping. In order to figure this out, you build one and prototype it. By combining software-defined radios, so you have a flexible radio and an FPGA that can handle the real-time processing, you no longer need an ASIC with the full network built in order to start testing this. We have a setup in our lab with a 64-element array, and we can start looking at that and testing different algorithms. When you move, what’s the best algorithm for doing beam-tracking? Prototyping adds a whole other layer to 5G that we didn’t have in previous generations. You’re able to see how things perform before you have to bake them into the IP or into a semiconductor. It gives you the chance to build a much more complicated network, with less risk, fewer re-spins, and less design time on the semiconductor side.
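The beam-tracking question Yost raises can be sketched in a few lines. This is a toy model, not NI's setup: the beam count, the angular channel model, and the noise level are all assumptions, and a real SDR would measure received power over the air rather than compute it.

```python
import random

random.seed(1)
NUM_BEAMS = 64  # one beam per azimuth sector (illustrative)

def measure_rsrp(beam: int, user_angle: float) -> float:
    """Stand-in channel: power falls off with angular distance from the user."""
    beam_angle = beam * 360 / NUM_BEAMS
    err = min(abs(beam_angle - user_angle), 360 - abs(beam_angle - user_angle))
    return -err + random.gauss(0, 0.1)  # noisy measurement

def track(current: int, user_angle: float) -> int:
    """Local beam tracking: probe the current beam and its neighbors, keep the best."""
    candidates = [(current - 1) % NUM_BEAMS, current, (current + 1) % NUM_BEAMS]
    return max(candidates, key=lambda b: measure_rsrp(b, user_angle))

beam = 10
for angle in range(60, 75):   # user slowly moving in azimuth
    beam = track(beam, angle)
```

This is the kind of algorithm trade-off a prototype exposes: probing only neighbors is cheap but can lose a fast-moving user, while a periodic full sweep costs airtime.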
Vel: You bring up a good point, but if you’re going to be prototyping something, you’ve already lost a lot of lead time. We believe in virtual prototyping. As long as you can build a physical model and test that model virtually, you have a virtual prototype, even without taking the first step of trying to understand what that bottom-level component looks like. We’ve been doing this on the IC side for decades.
Yost: It depends on what part you’re trying to prototype. If you’re just looking at the antenna itself, we can understand from simulation the best way to design the beam patterns. But we don’t have good enough models at 28GHz and 39GHz. The information is not there, and prototyping provides a faster way to get it than spending years collecting channel models and doing channel emulation to get that into the software side. Both are important for different reasons.
Fitton: They’re both really important in a continuum of solutions. You start out with simulation and take it down to a low level. I’m seeing a lot of people using transaction-level models (SystemC and TLM 2.0). You don’t want cycle-accurate models, but you do want transaction-accurate models of how a chip works. Then your software developers can start writing code for it before it even appears on an FPGA or SoC. Every 5G solution at the moment is based on FPGAs because there are no chips out there yet. Then you take it even further with the handset. But even in the infrastructure, you want to harden functions while keeping flexibility where it’s key. Everyone will keep some portion of FPGAs out in the field and in the infrastructure just so they can fix bugs and add new features. And then you tie all of that back, because you’re going to need to model the next thing that comes along, such as MCC (mission-critical communication) or URLLC (ultra-reliable low-latency communication). Without all of those steps, it never works.
Koutsoyannopoulos: Today we have the capability to design a 64-antenna array very efficiently and very accurately. The question is how we move one level up, and we have to see the problem from the system perspective. We should keep along that path—that philosophy of expanding our model and simulation capability—to take into account more and more effects. The question is not how accurate the model of the antenna is, because it’s very accurate. The question is how the system behaves with that antenna. I agree that we definitely need to expedite the testing and prototyping cycles. However, that prototyping and testing shouldn’t be part of the design cycle. The design cycle should be fast and efficient with the tools, and the testing cycle should come after. The problem starts when we put the test in as part of the design and we end up with a very prolonged design.
Yost: It depends on how you define testing. We’re not involved in production or manufacturing. We’re involved in the design of the standards and in coming up with algorithms. Our deliverables are IP or patents, rather than a product for manufacturing test. The idea of having a prototype is to take that system-level view without having to build a chipset to do it. It’s not necessarily about how to design an antenna. It’s about how we optimize the codebook for what we put into that antenna, or how we pick out which beams we’re actually going to use before we ever make the chipset, because we can test it in a real-world environment to see which ones work best.
SE: We’ve got multiple iterations of 5G. The first one is actually more like 4.5G, where it’s sub-6GHz. Then we move to the next level, and then finally to millimeter wave. Each of these is a different phase. Where are we in terms of moving from one to the next, and how quickly will this happen?
Fitton: It depends where you’re living at the moment, both from a deployment and a spectrum standpoint. There are incredibly aggressive claims, and everyone claims their box is 5G-ready. In China, it’s all about IoT, and they are further ahead of everyone else in the world. Korea is really aggressive, as well. But it’s going to take a long time. There is a lot of talk about millimeter wave, but the real fundamental change will be in usage models. If you think about the connected car, that affects everything from signaling to the whole system aspect. And while it will create a huge amount of opportunity, it will have political ramifications because it will affect employment.
Vel: It’s not a ‘wait and watch’ technology. Everyone has jumped in with both feet. North America and China are probably the largest investors in 5G technology, and Europe is a close third. The European standards bodies did not believe in millimeter-wave technology until North America and China jumped in. Now they’re following. All of the major players are aligned on the new 5G standards. If they don’t cooperate, 5G will be dead on arrival. It’s in the best interest of all companies to collaborate on 5G standards and make this successful. The opportunity is huge for all of them. This is not a case where one company wins and another loses.
SE: It’s a full ecosystem play, right?
Vel: Yes, all the way from the backhaul providers to the handset providers to the makers of the chips and antennas. It’s in everyone’s best interest to collaborate across standards. Within the next three to four years, I believe this will really begin taking off.
Yost: I agree it’s going to be a long rollout. Companies are already making sub-6GHz 5G chips that will be in products in the next six months to a year. There’s a lot less infrastructure to roll out for sub-6GHz. We can start putting in massive MIMO technology and getting higher density. We’ll see that technology in the next one to two years. The game-changing applications will take longer and will be harder, but we will continue to make improvements. One of the providers I spoke with said it doesn’t make sense to put any more financial investment into 4G. They want to put the resources into 5G. Even in China, there’s the non-standalone version of the sub-6GHz standard and the standalone. China doesn’t want to implement the non-standalone. They want to move forward with the standalone standard and not deploy this kind of half-step. But it’s going to be different in every country, and some pieces will come faster than others. We’re probably not going to get connected cars for 10 to 20 years, for a variety of reasons. One panel I sat in on distinguished between automatic and autonomous driving.
Koutsoyannopoulos: Maybe in 2019 or 2020 we’ll have upgraded phones that will include the 4GHz band and 28GHz band, which is going to feel like an upgrade over LTE. There will be more bandwidth with existing applications. That will give us a stepping stone to start testing new applications that can leverage the bandwidth. Maybe from 2020 to 2023, we’ll start seeing the adoption of 38 to 40GHz, and maybe later 60GHz-plus, when we have a clear understanding of the applications or ecosystems mostly related to cars. But the biggest problem we have not yet solved as an ecosystem, and which is critical in the standard and for new applications, is latency. Today we’re sensitive to that only in a couple applications like videoconferencing. This is tiny compared to things that have been suggested as potential uses.
Fitton: I completely agree, and that is why the second wave of applications will take time. If you look at latency, we’ve gone from 1 millisecond to 0.2 millisecond. We have to go further.
Yost: If you just look at the physics of how long it takes a wave to travel from point A to point B and make the return trip, a 0.1-millisecond round trip limits you to roughly 15 kilometers. To reduce that latency you also need to reduce the distance in between, so now we have to deploy macro cells and small cells and have the infrastructure around to complete that.
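The limit Yost describes is pure propagation delay. Ignoring all processing and queuing delay (and the fact that fiber and cables propagate at roughly two-thirds of c, which tightens the bound further), the maximum distance for a given round-trip budget works out as follows:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def max_distance_km(round_trip_s: float) -> float:
    """Farthest the far end can be if the entire budget is propagation delay."""
    return C * round_trip_s / 2 / 1000.0

limit = max_distance_km(0.1e-3)   # ~15 km for a 0.1 ms round trip
```

In practice the radio and core-network processing consume most of the budget, so the cell has to sit much closer than this free-space bound suggests, which is the argument for densifying with small cells.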
Fitton: Yes, more density of cells, with a lower range for each of them. But then you’re also pushing the processing capability out toward the edge. For ‘automatic’ driving or whatever application you’re using, that processing is probably best done right at the edge of the network. To enable those 5G applications you’ve got to do some edge computing. So now you’re more thermally constrained. Everyone is running machine learning applications using full-precision floating point. But if you can’t do it like that anymore, how do you push it toward the edge so you get tera-ops per watt, rather than just thinking about tera-ops?