Optimizing 5G With AI At The Edge

5G is necessary to deal with the increasing amount of data being generated, but successful rollout of mmWave calls for new techniques.


AI touches our lives in many different ways, and while some AI-enabled applications are highly visible, like the increasingly popular Amazon Echo and Google Home voice-controlled intelligent digital assistants, others are less obvious. But by no means are they less important.

For example, AI techniques are essential to the successful rollout of 5G wireless communications. 5G is the developing standard for ultra-fast, ultra-high-bandwidth, low-latency wireless communications systems and networks whose capabilities and performance will leapfrog those of existing technologies.

5G-level performance isn’t a luxury; it’s a capability the world critically needs because of the exploding deployment of wirelessly connected devices. A crushing amount of data is poised to overwhelm existing systems, and the amount of data that must be accessed, transmitted, stored and processed is growing fast.

5G needed for the upcoming data explosion
Every minute, by some estimates, users around the world send 18 million text messages and 187 million emails, watch 4.3 million YouTube videos and make 3.7 million Google search queries. In manufacturing, analysts predict the number of connected devices will double between 2017 and 2020. Overall, by 2021 internet traffic will amount to 3.3 zettabytes per year, with Wi-Fi and mobile devices accounting for 63% of that traffic (a zettabyte is 12 orders of magnitude larger than a gigabyte, or 10²¹ bytes).

New 5G networks are needed to handle all of this data. They will roll out in phases, with initial implementations leveraging the existing 4G LTE and unlicensed-access infrastructure already in place. However, while these initial Phase 1 systems will support sub-6GHz applications and peak data rates above 10Gbps, things really begin to get interesting in Phase 2.

In Phase 2, millimeter-wave (mmWave) systems will be deployed enabling applications requiring ultra-low latency, high security, and very high cell edge data rates. (The “edge” refers to the point where a device connects to a network. If a device can do more data processing and storage at the edge – that is, without having to send data back and forth across a network to the cloud or to a data center – then it can respond more quickly and space on the network will be freed up.)

AI and 5G are perfect partners
AI functionality is key to edge computing because it provides for more effective control of networks, cells and devices. Without it, many 5G applications that rely on edge computing simply couldn’t be implemented, wouldn’t work well, or would cost too much to deploy.

Take the case of adaptive beamforming, where signals from phased-array antennas are combined in ways that increase signal strength in a given direction. It’s important for 5G applications because while spectrum in the mmWave frequency range (30GHz – 300GHz) is abundant, signals at these frequencies are attenuated by atmospheric absorption, which limits their usable range to about 300 meters. They also have difficulty penetrating buildings and foliage.

In the past, systems leveraging mmWave frequencies were built accepting these limitations, but that also limited their application. Controlling the adaptive beamforming antenna arrays used for mmWave 5G communications is critical to optimizing their operation and performance. With advances in semiconductors and faster digital signal processing, sensing systems combined with AI can now provide that control, leading to dynamically optimized base stations and computing resources that better accommodate changing user needs and environmental conditions. Without AI, this would be much harder to achieve.
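To make the beamforming idea concrete, here is a toy sketch of the underlying math for a uniform linear array: each element gets a phase weight so that signals add constructively in the steering direction. This is an illustration only, not a model of any real 5G base station; the element count, spacing and angles are arbitrary.

```python
import cmath
import math

def steering_weights(n_elems, d_over_lambda, theta_deg):
    """Phase weights that steer a uniform linear array toward theta_deg."""
    theta = math.radians(theta_deg)
    return [cmath.exp(-2j * math.pi * n * d_over_lambda * math.sin(theta))
            for n in range(n_elems)]

def array_gain(weights, d_over_lambda, theta_deg):
    """Normalized power gain of the weighted array in direction theta_deg."""
    theta = math.radians(theta_deg)
    resp = sum(w * cmath.exp(2j * math.pi * n * d_over_lambda * math.sin(theta))
               for n, w in enumerate(weights))
    return abs(resp) ** 2 / len(weights) ** 2

# 8 elements at half-wavelength spacing, steered to 30 degrees off broadside.
w = steering_weights(8, 0.5, 30)
print(round(array_gain(w, 0.5, 30), 3))   # → 1.0 (full gain in the steered direction)
print(round(array_gain(w, 0.5, -30), 3))  # → 0.0 (a null well off-axis)
```

An AI-driven controller’s job, in these terms, is to keep choosing the steering direction (and hence the weights) as users move and channel conditions change.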

Smart surveillance cameras
Another way in which AI conserves network resources is its role in the growing use of “smart” surveillance cameras, which make use of diverse semiconductor technologies. More than 120 million IP (internet protocol) cameras were connected to networks globally in 2016, for use in a wide range of applications.

Many of these are so-called “smart” surveillance cameras. (In one notable instance recently, smart surveillance technology enabled police to pick out a wanted man among a crowd of 60,000 concert-goers.)

Without AI to enable edge processing of most of the data generated by a smart camera, though, networks would be overloaded. A single high-definition IP smart camera generates a video stream of 10Mb of data (30 frames) per second. Multiply that by the millions of such cameras added in recent years, and the network bandwidth required just for this application would exceed a petabyte (10¹⁵ bytes) per second, which is clearly impractical.
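The back-of-envelope arithmetic behind that figure can be checked directly. This sketch reads the article’s “10Mb” per camera as 10 megabytes per second (the interpretation under which the petabyte-per-second figure works out) and uses the 120 million camera count quoted earlier; both numbers are from the text, not independent measurements.

```python
# Aggregate bandwidth if every IP camera streamed raw video over the network.
cameras = 120_000_000                    # ~120M IP cameras (2016 figure)
bytes_per_sec_per_camera = 10 * 10**6    # assumed 10 MB/s per HD stream

total = cameras * bytes_per_sec_per_camera
print(total / 10**15)  # → 1.2 (petabytes per second, network-wide)
```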

AI to the rescue at the edge
Moreover, processing this data in the cloud would be hugely expensive with current technologies. The only real answer is to compute at the edge, using AI techniques for object recognition, gesture detection and classification, and only send minimal metadata over the network.
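The “send minimal metadata” pattern can be sketched as follows. Everything here is hypothetical, including the `detect_objects` stand-in for an on-camera inference model; the point is only the size ratio between a raw frame and its extracted metadata.

```python
import json

def detect_objects(frame):
    """Stand-in for an on-camera AI model (hypothetical); returns labels and boxes."""
    # A real edge device would run accelerated neural-network inference here.
    return [{"label": "person", "box": [120, 40, 64, 128], "score": 0.91}]

def process_at_edge(frame):
    """Run inference locally and ship only compact metadata upstream."""
    detections = detect_objects(frame)
    return json.dumps({"detections": detections}).encode()

frame = bytes(10 * 10**6)       # a 10 MB stand-in for one second of HD video
msg = process_at_edge(frame)
print(len(frame) // len(msg))   # bandwidth reduction factor, on the order of 10^5
```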

One might think that the most advanced, leading-edge semiconductor technologies are required to do this, but in fact a number of processes come into play. GF offers the industry’s broadest set of technology solutions for a range of 5G and edge-connected applications, including mmWave front end modules (FEMs), standalone or integrated mmWave transceivers and baseband chips, and high-performance application processors for mobile and networking.

For example, GF’s RF SOI, SiGe and FDX FD-SOI offerings are designed to serve applications ranging from sub-6GHz to mmWave frequency bands. RF SOI and SiGe solutions deliver an optimal combination of performance, integration and power efficiency for FEMs with integrated switches, low noise amplifiers and power amplifier applications. FDX offerings are well-suited for the next generation of connected devices such as smart cameras which require ultra-low power technology with intelligence and wireless connectivity built in.

Clients can take advantage of the back-gate body-biasing capability of FDX, which can be used to dynamically increase performance when needed for image processing or AI/machine learning, or to control leakage when a system is in standby. The FDX ecosystem of IP partners includes optimized IP for on-chip power management, radio sub-systems, low-voltage SRAM, instant-on MRAM, eNVM and FPGA blocks for the highly integrated, flexible systems-on-chips (SoCs) needed for AI-enabled edge computing.

FDX SoC for future commercial IP cameras. (Source: GF)

Connected intelligence
Traditionally, the industry has viewed networks non-holistically, on a transactional basis and from the separate and distinct viewpoints of computation, storage and data transport.

But if we now add many edge-connected devices with sensing capabilities to the network, we begin to see the value of networks that can sense their environment, collect that data, and use it as the basis for intelligent, timely decisions that improve and optimize the services the network provides.

This is what we mean by Connected Intelligence – the ability to sense, decide and act upon information collected from devices/sensors connected to the network to create the ultimate user experience.

The addition of AI engines to augment this “sense, decide and act” approach to network optimization can create a very powerful framework to best leverage available network assets.
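A minimal sketch of that “sense, decide and act” loop might look like the following. The function names, the utilization signal and the threshold are all illustrative inventions, not part of any real 5G stack.

```python
def sense(cell_loads):
    """Collect a simple congestion signal from per-cell utilization samples."""
    return max(cell_loads)

def decide(peak_load, threshold=0.8):
    """Choose an action from the sensed state (hypothetical policy)."""
    return "steer_beam_to_hotspot" if peak_load > threshold else "hold"

def act(action):
    """Apply the decision; here we just report it."""
    return f"action={action}"

loads = [0.35, 0.92, 0.41]            # utilization of three cells
print(act(decide(sense(loads))))      # → action=steer_beam_to_hotspot
```

An AI engine would replace the fixed threshold in `decide` with a learned policy that adapts to traffic patterns over time.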

Underneath it all is the realization that AI-enabled 5G mmWave networks will depend on the advancements and innovation in semiconductor technologies. No single technology solution will serve all potential applications. It’s going to require a range of technologies to make these next-generation applications work together seamlessly and maximize their potential.
