Edge Complexity To Grow For 5G

Increased interdependence of technologies will drive different architectures and applications.


Edge computing is becoming as critical to the success of 5G as millimeter-wave technology will be to the success of the edge. In fact, it increasingly looks as if neither will succeed without the other.

5G networks won’t be able to meet 3GPP’s 4-millisecond-latency rule without some layer to deliver the data, run the applications and broker the complexities of multi-tier Internet apps across an unpredictable array of intelligent devices. And the edge, which initially was developed as a way for IoT managers to retain control of their data, will not function without ultra-fast wireless communications.

Investments in both of these areas are growing, and so are the stakes for making this all work. The need to send answers back from remote cloud apps to end users fast enough to stop cars from crashing into each other is probably still a stretch. But moving the cloud closer to the data source, and prioritizing the kinds and amounts of data that require an immediate response, is an increasingly important trend. In fact, those factors are beginning to alter chip design as the entire industry sorts out which architectures work best for which applications.

“It all depends on what kind of functionality is needed by the device,” said Nimish Modi, senior vice president of marketing and business development at Cadence. “If it’s a car, there may be a ping-pong kind of communication as a car is traversing a road. But the infrastructure capabilities of the edge will determine what is the functionality of these things. The level of compute is increasing tremendously. The amount of data that is being generated at the edge is growing every day, and the signal-to-noise ratio is not very high. There is a whole bunch of data that is useless. But there also is the stuff that’s important and which requires fast localized decision-making—and it needs to be secure. And then there is edge storage and 5G, which is going to be prevalent. It’s a system-level capability that’s needed at the edge.”

Those layered requirements are beginning to affect what kind of hardware is used where, and for which application.

“A CPU is too slow and a GPU uses too much power,” said Ellie Burns, director of marketing for digital design implementation solutions at Mentor, a Siemens Business. “This is why we’re seeing more TPUs. An all-purpose generic solution uses too much power and it’s too expensive, because moving data around takes a huge amount of memory. The future will include little arrays doing calculations and implementing algorithms on ASICs for better power and performance.”
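Rule-of-thumb energy numbers make the data-movement point concrete. The Python sketch below uses rough, frequently cited per-operation energy figures (on the order of those in Mark Horowitz's ISSCC 2014 keynote; they are assumptions for illustration, not measurements) to compare a generic design that fetches every operand from off-chip DRAM against an ASIC- or TPU-style array that mostly reuses operands out of local SRAM:

```python
# Rough per-operation energy costs at ~45nm (order-of-magnitude figures
# in the spirit of Horowitz, ISSCC 2014) -- illustrative assumptions only.
ENERGY_PJ = {
    "fp32 multiply": 3.7,
    "32b on-chip SRAM read": 5.0,
    "32b off-chip DRAM read": 640.0,
}

MACS = 1e9  # one GMAC of inference work

arithmetic_uj = MACS * ENERGY_PJ["fp32 multiply"] / 1e6
# Generic design: both operands fetched from off-chip DRAM for every MAC.
dram_bound_uj = MACS * 2 * ENERGY_PJ["32b off-chip DRAM read"] / 1e6
# Array-style design: operands mostly reused out of local SRAM.
sram_bound_uj = MACS * 2 * ENERGY_PJ["32b on-chip SRAM read"] / 1e6

print(f"arithmetic only:   {arithmetic_uj:10.1f} uJ")
print(f"DRAM-fed operands: {dram_bound_uj:10.1f} uJ")  # ~350x the math
print(f"SRAM-fed operands: {sram_bound_uj:10.1f} uJ")  # ~3x the math
```

Under these assumptions the arithmetic itself is a rounding error; the off-chip traffic dominates by more than two orders of magnitude, which is why fixed-function arrays with local storage win on power.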

Others point to similar trends. “There is a big shift toward embracing heterogeneity,” said Kent Orthner, senior director of systems at Achronix. “You need different blocks because they are good at different things. This is a place where FPGAs play well. They are specialized, but you don’t have to keep that specialization. If you have a given function and you don’t need programmability and flexibility, an ASIC is faster. But an FPGA can be programmed and reprogrammed often, and you can reprogram it on the fly. And because these can be smaller than ASICs, you can have a multitude of different functions that you are accelerating.”

5G becomes a critical element here, because it is a way of accelerating the movement of data between devices. This may be between the end device and the cloud, but if there is a lot of data it is far less costly in terms of power and infrastructure to process more data locally. This is why the edge is becoming so critical. But exactly where that edge device sits depends on the application, and in the case of China, government mandates.

In all cases, though, edge architectures are still being defined and customized. In automotive, for example, the edge may be next to the highway, but only some of the processing may happen there because connectivity will be inconsistent.

“If you’re relying entirely on 5G communication and you are disconnected from that 5G, then you have a problem and the car has to stop at the side of the road,” said Burkhard Huhnke, vice president of automotive at Synopsys. “On the other hand, if you have some connectivity, that allows you to look further around a corner. You can’t access that from the embedded world. In Europe and America, the emphasis is on more embedded intelligence, so you are not sending an incredibly intense data stream back and forth, where latency will be a challenge.”

It gets even more complicated in the automotive world than in other markets because of safety-critical circuitry.

“You may have to reboot part of the chip for a failed operation, while keeping the rest of it operating in a safe state,” said Kurt Shuler, vice president of marketing at Arteris IP. “If you think about the space shuttle or a Boeing 777, the black boxes are 20 pounds. You can’t have that in a car. There is a lot of functional safety being done at the microprocessor level to save cost. That can be used to spy on what’s happening at the system level, so if there are problems you can isolate them, put them in a safe state and fail gracefully. If there is a transient error, you reboot.”
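In software terms, the pattern Shuler describes resembles a per-block supervisor: a transient fault triggers a reboot of just the failed block, while a persistent fault parks that block in a safe state so the rest of the chip keeps running. The Python sketch below is a minimal illustration of that policy; the block names, retry threshold and class structure are hypothetical, not any vendor's API:

```python
from enum import Enum, auto

class BlockState(Enum):
    RUNNING = auto()
    REBOOTING = auto()   # transient fault: restart just this block
    SAFE_STATE = auto()  # persistent fault: isolate, fail gracefully

class SafetyMonitor:
    """Sketch of an isolate-and-recover policy for on-chip blocks."""

    def __init__(self, block_names, max_retries=3):
        self.blocks = {name: BlockState.RUNNING for name in block_names}
        self.error_counts = {name: 0 for name in block_names}
        self.max_retries = max_retries

    def report_error(self, block):
        """Classify a fault: retry transients, isolate persistent failures."""
        self.error_counts[block] += 1
        if self.error_counts[block] <= self.max_retries:
            self.blocks[block] = BlockState.REBOOTING
        else:
            self.blocks[block] = BlockState.SAFE_STATE
        return self.blocks[block]

    def reboot_complete(self, block):
        self.blocks[block] = BlockState.RUNNING

monitor = SafetyMonitor(["radar_dsp", "vision_npu", "can_bridge"])
print(monitor.report_error("vision_npu"))  # BlockState.REBOOTING
```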

While much of this functionality happens within the automotive system, it also has to communicate to the outside world in case there is a problem. And with automotive, all of this needs to happen wirelessly and close to real-time in order to prevent traffic jams or accidents. Sending all of this data all the way to the cloud is far too slow, even with a 5G connection.

It also varies greatly by function even within a car. “There are a range of applications and requirements,” said Cheng Wang, senior vice president of engineering at Flex Logix. “So you have LiDAR, radar and vision applications, which may be 2 megapixels at 30 frames per second. But you also have a low-frame-rate backup camera, where the power requirements are more sensitive. So at the low end you may be using MobileNetV2 SSD, and at the high end you might be using YOLOv3. There is a 100X gap in performance between the two.”
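A back-of-envelope calculation shows how that gap arises. The sketch below multiplies rough, publicly reported per-frame MAC counts for the two networks by plausible frame rates; all four numbers are illustrative assumptions, but the ratio lands in the neighborhood Wang cites:

```python
# Per-frame compute is a rough public figure and varies with input
# resolution; the frame rates are assumptions for illustration.
MOBILENET_V2_SSD_GMACS = 0.8   # ~0.8 GMACs/frame at 300x300 input
YOLOV3_GMACS = 33.0            # ~33 GMACs/frame at 416x416 input

def sustained_gmacs_per_s(gmacs_per_frame, fps):
    """Compute rate needed to keep up with a camera stream."""
    return gmacs_per_frame * fps

low_end = sustained_gmacs_per_s(MOBILENET_V2_SSD_GMACS, fps=10)  # backup cam
high_end = sustained_gmacs_per_s(YOLOV3_GMACS, fps=30)           # front vision

print(f"low end:  {low_end:7.1f} GMACs/s")
print(f"high end: {high_end:7.1f} GMACs/s")
print(f"gap:      {high_end / low_end:5.0f}x")  # lands near the 100X cited
```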

Learning from history
Still, not all of this is a brand new challenge. The tech world has had years to experiment with the edge concept. Initially it was popularized by companies that found adding a gateway device or server could cut down on bandwidth use and cloud cycles. Rather than sending everything to the cloud, it was more efficient to consolidate data closer to the collection point. But the edge is taking on a whole new sense of urgency as the amount of data generated by sensors continues to grow.

“Since you don’t really want a thousand sensors to each report the temperature six times per minute directly to your cloud application, it seems like a good idea to have a small device that can consolidate the data or just send a result,” said Shane Rau, analyst at International Data Corp.
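As a minimal illustration of that consolidation, the sketch below shows a hypothetical gateway that collects raw temperature readings and forwards a single per-window digest upstream. The class, field names and sensor counts are assumptions for illustration, not any specific product's interface:

```python
import statistics
import time

class EdgeGateway:
    """Hypothetical gateway that consolidates raw sensor readings.

    A thousand sensors reporting six times per minute is 6,000 upstream
    messages per minute; one digest per window replaces all of them.
    """

    def __init__(self):
        self.readings = {}  # sensor_id -> list of temperature samples

    def ingest(self, sensor_id, temperature_c):
        self.readings.setdefault(sensor_id, []).append(temperature_c)

    def flush_summary(self):
        """Build one upstream message for the window, then reset."""
        samples = [t for temps in self.readings.values() for t in temps]
        summary = {
            "timestamp": time.time(),
            "sensors": len(self.readings),
            "samples": len(samples),
            "mean_c": round(statistics.fmean(samples), 2),
            "min_c": min(samples),
            "max_c": max(samples),
        }
        self.readings.clear()
        return summary  # hand off to whatever uplink transport is in use

gateway = EdgeGateway()
for i in range(1000):  # one reporting cycle across 1,000 sensors
    gateway.ingest(f"sensor-{i}", 20.0 + (i % 7) * 0.1)
print(gateway.flush_summary())
```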

The same term also has been used by carriers and networking providers adding intelligence in other layers of network infrastructure, as well as in endpoint devices such as security cameras and smartphones.

If the main concern in edge computing was data from billions of IoT devices flowing upstream unnecessarily, the fear among 5G developers was far too much data coming downstream too slowly to meet the 4-millisecond-latency requirement in the 5G specifications for mmWave. Too much latency would torpedo use cases in which apps based in a remote cloud could take over as the driver in an autonomous vehicle, or in other cases requiring real-time response.

Getting a ping back from another network node within 4 milliseconds would be simple but useless on its own. A useful response requires more than just low latency; it also requires enough compute sitting close enough to act on the data.
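Simple propagation arithmetic shows why. Light travels through optical fiber at roughly two-thirds the speed of light in vacuum, or about 200 km per millisecond, so a 4-millisecond round-trip budget caps how far away the responding compute can sit before any time is spent actually computing. A quick sketch (the 1-millisecond processing allowance is an assumption):

```python
# Why a 4ms budget rules out distant clouds: propagation alone consumes it.
C_FIBER_KM_PER_MS = 200.0  # light in optical fiber moves at roughly 2/3 c

def max_server_distance_km(budget_ms, processing_ms):
    """Farthest the responding compute can sit, ignoring radio and queuing."""
    propagation_ms = budget_ms - processing_ms          # time left for travel
    return (propagation_ms / 2.0) * C_FIBER_KM_PER_MS   # round trip -> one way

# Even allowing just 1ms for the computation itself and nothing for
# queuing or the air interface, the server must be within ~300 km.
print(max_server_distance_km(budget_ms=4.0, processing_ms=1.0))  # 300.0
```

Real radio, queuing and processing overheads shrink that radius much further, which is the core argument for placing compute at the edge.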

“Latency is important,” said Frank Schirrmeister, senior group director for product management and marketing at Cadence. “I have a friend who does a demonstration with a remote-operated robot that is very stable when the latency is five milliseconds. But push the latency up toward 40 milliseconds and it gets more sluggish and eventually falls over. Siri had issues with my German accent. I literally had to switch her off at one point because Siri thought I was insulting her. But you can put Siri on your Mac at home and the voice recognition gets a lot better because you don’t have to send it to the cloud and you have local compute. You need to offload intelligence from the device to the cloud edge. You need to have the data available locally. But you need the algorithms in place so you will have that at the cloud edge when you need it. You have to be very versatile, but it’s an architecture that is still under discussion. It seems very application-specific, but we don’t yet know what the optimal balance is to make it all work.”

Already several names for this approach exist—decentralized data, data delivery networks, edge clouds—none very specific and none proven effective. “All are making the effort to distribute intelligence and data to the points within the infrastructure that make the most sense,” said Ajit Paranjpe, CTO at Veeco. “The latest thing is called ambient computing—where the users interact with the world around them. It’s not a single thing. It’s a movement. Augmented reality is part of the puzzle, but just one piece. The system allows you to deliver information but also to regenerate data and present it. That requires a huge number of new devices to provide that user experience. Many different puzzles have to be solved to achieve it. That’s going to drive a lot of new hardware designs in the near future.”


Fig. 1: 5G networks are a puzzle that is still to be solved. Source: Cadence

Any workable design still will require a distributed data and IT infrastructure that is much different from the current focus on consolidating as many functions as possible in a core data center.

“It’s not practical to move all the data from the cloud to the end device and back again within that small a [latency] window,” Paranjpe said. “And I suspect leaving all the data crunching to the end device, even something with as much compute as a smartphone, would not be practical, either. It will probably be a combination of edge computing and smart devices, where different levels of data are trapped between the cloud and the edge so it is there when it is needed.”

A certain amount of confusion surrounds what ‘the edge’ is and which devices qualify as endpoints, which are targets for inference accelerators and machine-learning apps, and which are part of the data infrastructure, said IDC’s Rau.

“I define the edge as the first two hops of the Internet,” Rau said. “The endpoint device would be first—things we would have called embedded systems long ago and are roughly, collectively defined as the Internet of things. The next hop is in the newer section of the Internet—the edge infrastructure, which is where you find IoT gateways and edge servers that may act as aggregation points for data coming in from endpoints and may do some analysis or communication or other, more application-specific functions, or have some inference capability built in. Some of that data may continue through to the cloud through the communications infrastructure, which includes mobile-edge computing and base stations and cellular access points that may have more or less the same kinds of functions, but are coming from companies more focused on communications than computing. So there will be some blurring of the lines as time goes on. The picture is very diverse, very fragmented.”

The mobile edge isn’t just a communications interface anymore, according to John Smee, CTO of Qualcomm, who outlined his view of the industry’s direction during the recent 5G Summit event at the IEEE’s International Microwave Symposium in Boston. “We are looking at ways we can bring the IoT and 5G together—use cases where wireless can get things working together more economically, not just seeing how we can bring in a solution that can be deployed without needing a wired backhaul.”

The range of frequencies available in 5G could allow Qualcomm partners to address IoT devices, smart factories and other markets in which it may be possible to sell solutions that include both compute and communications, not just one or the other, he said.

“We are very optimistic about continuing to add a wider range of 5G devices—not only cell phones, but vehicles, factories, wearables and head-mounted displays,” Smee said. “We talk about the cloud and about the edge cloud and about device processing, and at Qualcomm we continue to push the boundaries on the device in terms of the ability to support machine learning in the most effective way.”


Fig. 2: 5G tradeoffs and issues. Source: Qualcomm

The success of smartphones, smart security cameras, smart speakers and other inference-enabled devices shows that edge devices can be effective, intelligent clients for cloud-based systems. That is a good indication that the kind of multi-tier application development typical in the enterprise also can work in an IT infrastructure that has a cloud application at one end and a smart tractor, camera or windmill at the other, backed up by edge servers tuned to the needs of one specific application, according to Carlos Macián, senior director of AI strategy and products at eSilicon.

Integrating sensors, inference accelerators and a range of other functions can be tricky for hardware designers, especially if they hard-wire too much of the application’s functionality into the inference chip or forget about the natural limitations of a specific form factor.

“Battery-powered devices can do everything you can do with a CPU if you want, but you may run into performance issues right away if you try to make it do too much heavy lifting or don’t pay attention to using the battery efficiently,” Macián said. “Other than the battery and the form factor, there is no real difference with any other kind of datacenter integration if you don’t get bogged down with latency or whatever, because you put too much on the cloud and have to wait, or too much on a device that can’t handle it. It will get even more interesting as some of these AI startups bring different architectures to market and we can see how they fare.”

Conclusion
For now, the priority will be figuring out the best design for wireless networks that will rely on base stations, small-cell access points, femtocells and picocells to get sufficient local coverage. It also will require an understanding of how to structure the servers, networking gear, storage and other resources necessary to provide the right compute resources to the right parts of the 5G network at the right time to beat the 4-millisecond latency barrier.

“We talk about these three legs of the 5G triangle – low latency, the number of devices and high bandwidth,” said Cadence’s Schirrmeister. “I am skeptical that the latency of a few milliseconds will be immediately available, because to do that you have to be very versatile in your design and get the data sets right, and make sure the data is right where you can get it in less than a millisecond. That will have a significant impact on how the networks are designed. Right now the honest answer is to say there is a large set of architecture decisions still under consideration, and it’s not clear what the result will be.”
