Real products are starting to hit the market, but this is just the beginning of a whole new wave of technology issues.
By Kevin Fogarty and Ed Sperling
Edge computing is inching toward the mainstream as the tech industry begins grappling with the fact that sensors will generate far too much data to send everything back to the cloud for processing.
The initial idea behind the IoT/IIoT, as well as other connected devices, was that simple sensors would relay raw data to the cloud for processing through one or more gateways. Those gateways could be inside a company, a home, an industrial operation, or even a connected car. But it’s becoming apparent that approach is untenable because there is far too much data to process—even with higher-speed communications technology such as 5G.
“A PC will generate 90 megabytes of data a day,” said Tien Shiah, who runs HBM marketing at Samsung. “An autonomous car will generate 4 terabytes a day. A connected plane will generate 50 terabytes a day.”
Most of that data is useless. An autonomous vehicle will collect data from radar, LiDAR, and an assortment of cameras. Some of it will be relayed to the cloud for use in training algorithms that improve the safety and overall performance of vehicles. Some of it will need an instant response, to avoid an accident or to address a problem in real time, and must be processed and acted upon immediately and locally. And most of it will be discarded, like the video footage from security cameras.
If that pre-processing is done locally, far less data needs to be further processed in the cloud or some mid-range servers. The result is far better performance for less money and less power, enabling the kind of rapid response required by autonomous cars, drones, or even robots. And these are the key reasons why edge computing suddenly is getting so much attention. It moves compute tasks closer to the source of data, which in the case of autonomous vehicles ultimately may be within the sensor unit itself.
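A back-of-the-envelope calculation, based only on the figures Shiah cited, shows what those volumes mean as sustained bandwidth. One car’s 4 terabytes a day works out to roughly 46 megabytes every second, around the clock:

```python
# Sustained bandwidth implied by the daily data volumes quoted above
# (decimal units; figures are per-device estimates from the article).

DAY_SECONDS = 24 * 60 * 60  # 86,400 seconds in a day

sources = {
    "PC": 90e6,                # 90 megabytes/day
    "Autonomous car": 4e12,    # 4 terabytes/day
    "Connected plane": 50e12,  # 50 terabytes/day
}

for name, bytes_per_day in sources.items():
    mb_per_s = bytes_per_day / DAY_SECONDS / 1e6
    print(f"{name}: ~{mb_per_s:.1f} MB/s sustained")

# PC: ~0.0 MB/s sustained (about 1 kilobyte/s)
# Autonomous car: ~46.3 MB/s sustained
# Connected plane: ~578.7 MB/s sustained
```

Multiply that by a fleet of vehicles, and the case for discarding or reducing data at the source makes itself.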
This also is important for AI, machine learning and deep learning applications, which may or may not be connected to a system in motion. The key with AI/ML/DL is to be able to do the inferencing piece on a local device, which improves security as well as performance. That puts a whole new spin on the edge computing model, though. The GPUs used to train these systems can run in parallel because they are working off a system of weights and floating point calculations.
Inferencing, in contrast, involves fixed-point calculations, which run best on ASICs, FPGAs or DSPs, and do not generally benefit from highly parallel architectures. The bigger problem for inferencing is throughput to memory, which is why some of these devices are being built using advanced packaging. Here it’s not just a matter of throwing more processors at a problem. It requires a systemic approach to design—and in some cases multiple connected systems—including optimizing throughput of some signals versus others. That affects the entire data path.
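To make the training-versus-inferencing distinction concrete, here is a minimal sketch of post-training quantization, assuming a simple symmetric 8-bit scheme rather than any particular vendor’s flow. Training produces floating-point weights; converting them to fixed-point integers is what lets inference run efficiently on ASICs, FPGAs or DSPs, and it also cuts the memory traffic identified above as the real bottleneck:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization of float weights to int8."""
    scale = np.max(np.abs(weights)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values, used to check accuracy loss."""
    return q.astype(np.float32) * scale

# Trained weights arrive from the GPU as 32-bit floats...
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)

# ...but the edge device stores and multiplies 8-bit integers,
# cutting memory traffic by 4x relative to float32.
print("max quantization error:", np.max(np.abs(w - dequantize(q, scale))))
```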
“Memory has become a bottleneck again,” said Frank Ferro, senior director of product management at Rambus. “When you look at processing right at the node, DDR isn’t keeping up. A number of applications that are emerging, whether that is AI or ADAS, require much higher memory bandwidth. This is where we’re starting to see new systems and memory architectures with 2.5D, HBM and GDDR6.”
Add to that the fact that many of these applications will be either battery-powered or forced to live within highly constrained power budgets, and the job of developing these kinds of devices looks even more challenging.
Evolving standards
For all of these edge devices, standards are in a state of almost constant flux. 5G, which is expected to become an integral component of edge design, is still at least a couple of years off, and it likely will be rolled out in phases. So while the first implementations likely will be sub-6 GHz, future versions will utilize spectrum above 24 GHz, and ultimately up to millimeter wave.
In effect, this is a moving target, which makes some level of programmability essential. Unlike in the past, when an application processor could be developed for billions of smartphone units, 5G will be used to link together many more devices than just phones. But not all of the radios used inside those devices will be the same. They will need to evolve with the specs, because some of these devices will need to last for a decade or more, rather than just the couple of years until the next smartphone is released.
“If you look at 3G and 4G, the markets were very cyclical,” said Mike Fitton, senior director of strategic planning at Achronix. “So early in the market cycle, they were able to use an ASIC to drive down the cost and power. With 5G, it’s moving from UHF to millimeter wave, so it starts out with non-standalone devices, and then standalone. But because of that, FPGAs are going to be around longer, and eFPGAs will start to make more sense.”
5G’s multi-phase adoption road map has helped eFPGAs gain a toehold in one of the mainstays of ASIC design. “There has been more interest in eFPGAs with 5G than before 5G,” said Geoff Tate, CEO of Flex Logix. “An FPGA allows you to get rid of the SerDes, which gives you a big power savings.”
But the fact that 5G runs at higher frequencies also means that signals don’t travel as far, or even through some windows, which makes communication with the cloud much less reliable until enough repeaters and base-station beamforming technology are in place. That, in turn, has a ripple effect on edge devices. It means it will be years before 5G is reliable and ubiquitous enough to be able to process more data in the cloud.
Alongside 5G, wireless speeds also are improving with other, even shorter-range approaches. The new 802.11ax standard uses multi-user multiple-input, multiple-output (MU-MIMO) as well as orthogonal frequency-division multiple access (OFDMA), which can average interference across neighboring cells. The result is that edge devices can do more processing locally, and further processing at an interim level, so even less has to be sent to the cloud and back.
“There are a lot of new high-end users that are looking at new requirements for uplink and downlink. These are creators, gamers, and collaborators, and they do need a higher-speed network with higher throughput and higher data rates,” said Sathya Subramanian, senior staff marketing manager at Marvell. “802.11ax also offers multi-user MIMO and OFDMA, which they can use seamlessly.”
Defining the edge
One of the biggest problems with edge computing, however, is that it’s a technology in transition. It’s being defined as it evolves. Today, you can’t actually order up purpose-built edge-computing products able to support a specific mix of IoT devices, infrastructure and computing requirements.
“Right now, edge computing is mostly just a lot of talk, but there is progress,” said Zeus Kerravala, principal analyst at ZK Research. “The partnership Nvidia announced with Arm, and the edge processors announced by Intel are both fundamental products purpose-built for the edge where you need to add power to do the processing on a device or gateway or other facility rather than sending it to the cloud.”
In fact, there is tangible progress in this segment. Nvidia announced a partnership with Arm on March 27 to integrate the Nvidia Deep Learning Accelerator architecture with Arm’s Project Trillium machine-learning platform, making it easier for chipmakers to add machine learning to IoT devices.
Fig. 1: Partial list of companies currently supplying edge solutions. Source: IHS Markit/Semiconductor Engineering
In February, Intel introduced 14 new Xeon processors ranging from 4 to 18 cores, with power requirements from 60 to 110 watts. Both the Intel and Nvidia/Arm offerings add more power closer to the endpoint, but neither is an ideal alternative to sending data back to the cloud.
And Qualcomm announced in February the availability of its embedded mobile CPU platform for IoT devices, which supports multicore processing for computer vision, artificial intelligence and immersive multimedia for virtual reality, digital signage, robotics and other applications.
Not every application will require either deep visual analysis or decisions made in real time, but those types of apps are becoming more common, and not just for autonomous vehicles or augmented reality and facial recognition apps on smartphones.
“If you’re a casino and you want to have machine-learning capability in cameras to do facial recognition in real time of potential cheaters, or if you’re a city wanting to use smart cameras to watch for stalkers in a park, you don’t want to be backhauling every image to the cloud to make that call,” Kerravala said.
One edge component not in short supply is the IoT gateway. On the consumer side, IoT gateways include things like Apple’s HomePod, Google’s Nest and, oddly, Mozilla’s Project Things, kicked off in February, which provides software and instructions to help anyone with a Raspberry Pi build their own IoT gateway for smart-home devices.
The home IoT market eventually may outpace the industrial IoT, but right now the IIoT is setting the pace and the agenda. Demand is growing for IoT gateways with edge-computing capabilities, according to IHS Markit analyst Julian Watson. That demand centers on three specific areas:
• Providing a bridge to low-power nodes that are not connected directly to the Internet, such as sensors based on Bluetooth Low Energy (BLE) or Zigbee.
• Filtering traffic, making decisions about what data should be processed at the edge and what needs to be sent to the cloud (a minimal sketch of this appears after the list).
• Managing the security of those edge devices.
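To make the filtering item concrete, here is a minimal sketch of the kind of decision a gateway might make. Everything in it, from the sensor IDs to the 0.5°C threshold, is an illustrative assumption rather than the behavior of any shipping product:

```python
import json
import time

TEMP_DELTA_C = 0.5   # hypothetical threshold: only forward meaningful changes
last_sent = {}       # per-sensor cache of the last value forwarded upstream

def route_reading(sensor_id: str, temperature_c: float):
    """Decide at the gateway whether a reading goes to the cloud or stays local."""
    prev = last_sent.get(sensor_id)
    if prev is not None and abs(temperature_c - prev) < TEMP_DELTA_C:
        return None  # handled at the edge: no meaningful change to report
    last_sent[sensor_id] = temperature_c
    # Package only the readings worth backhauling
    return json.dumps({"id": sensor_id, "temp_c": temperature_c, "ts": time.time()})

for reading in [("ble-7", 21.0), ("ble-7", 21.1), ("ble-7", 22.0)]:
    msg = route_reading(*reading)
    print("to cloud:" if msg else "kept local", msg or "")
```

Real gateways make richer decisions, factoring in security policy and protocol translation, but the principle is the same: most readings never need to leave the building.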
At a minimum, IoT/edge gateways should be able to do the following, according to Michael Howard, executive director of research and analysis for carrier networks at IHS Markit:
• Shrink the volume of raw data from IoT devices by de-duping and consolidating repetitive data;
• Convert data to a format that is compact and readable to upstream apps;
• Add data telling upstream apps what kind of data they’ll be getting, and from what kind of devices;
• Include information about how to organize the data and refine it.
“If you can’t do those things you’re pushing data upstream, and that just wastes time and bandwidth,” according to Howard. “If the gateway can’t refine raw data to something compact and useful, there’s no way the datacenter or other servers will be able to keep up with the data you’ll be getting at scale. Processing has to be done where the data originates, preferably more than once.”
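Howard’s first three requirements lend themselves to a short sketch as well: de-duplicating repetitive readings, converting them to a compact form, and wrapping them in an envelope that tells upstream apps what they are getting and how it is organized. The schema and field names here are hypothetical, assumed purely for illustration:

```python
from itertools import groupby

def consolidate(readings):
    """De-dupe runs of repeated values, keeping each value once with a count."""
    return [{"value": v, "count": len(list(run))} for v, run in groupby(readings)]

def envelope(device_type, unit, readings):
    """Wrap consolidated data in metadata describing it for upstream apps."""
    return {
        "schema": "gateway.v1",      # how the payload is organized
        "device_type": device_type,  # what kind of devices produced it
        "unit": unit,
        "payload": consolidate(readings),
    }

# A sensor that reports once a second mostly repeats itself...
raw = [21.0] * 58 + [21.5] * 2
packet = envelope("thermostat", "celsius", raw)
print(packet)  # 60 raw readings shrink to 2 consolidated entries
```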
All the major systems vendors are eager for a piece of the market, but demand for gateways is just picking up. The problem is more complex than collecting temperature data from a few sensors. Especially in the IIoT, every vertical market has legacy SCADA and other automation systems that are typically closed, proprietary, unfriendly to newer communications technology, and impossible to get rid of quickly enough to avoid headaches for both chipmakers and device OEMs.
Those devices also can be connected using almost any wired or wireless communications protocol of the past 30 years. “There is still a lot of wireline connectivity out there for things like in-building thermostat controls [and] a lot of the mundane, arcane factory automation that’s being done,” Howard said. “Automating today’s manufacturing processes is a big deal, and you don’t even have to have robots to make it challenging. It’s unsexy, but it’s a real part of the market today. We’re in the early part of the market and this is a long tail. All eight of those LP-WAN protocols are deployed today and probably won’t be de-deployed for a long time, down to that one odd protocol that might live on in one box, so these upstream IoT gateways will have to know what to do with them.”
End users already taking on the risk of making IoT installations work were willing to use LP-WANs such as SigFox or LoRaWAN, which tap unlicensed industrial, scientific and medical (ISM) radio frequencies for long-range, low-bandwidth, low-energy network connections.
Because the spectrum is unlicensed, no carrier needed to be involved. End users could buy the equipment and set up their own IoT WAN links as if they were setting up a WiFi network, said Mike Demler, senior analyst at The Linley Group.
“It’s somewhat a matter of cost, but if you wanted to deploy SigFox in a building you’d get a SigFox router, a bunch of SigFox sensors and hook up the router to whatever your backhaul is,” said Demler. “LoRaWAN isn’t quite so much roll-your-own, but those beat low-power LTE to the punch, and it did the job for some people.”
Platforms, not standards
Of the mainstream computer vendors, Dell is probably the most obviously enamored of the gateway market, though HPE is also an aggressive player. Other key players include NXP Semiconductor, Huawei, Lantronix, STMicroelectronics, Aaeon, Nexcom, Cisco, IBM, Texas Instruments and General Electric.
Cisco helped popularize the expectation of the 50-billion-device IoT universe in 2011 and gets credit for coining the term “fog computing” to describe connections between the cloud and edge-computing facilities. The company is competing for more than just the gateway market, though. In addition to networking products designed to provide connections between the data center and edge—with processing, storage and security installed at key points at the customer’s preference—it also introduced a narrowband IoT management platform in February.
In addition to fog computing products, Cisco sells an IoT asset-management platform aimed at helping IoT managers keep things together. Late last year it introduced software to manage hybrid IT environments that extend to the edge to help customers get started with IoT integration.
Also not in short supply is the IoT platform, the software that manages and communicates with devices and allows other applications to run on them or extract data from them. Mature markets, according to a May 2017 McKinsey & Co. report, usually have one or two dominant players (Windows and MacOS, or Intel and Nvidia), with a third (Linux or Arm) trailing behind and sometimes threatening one of the leaders. In the extremely immature IoT market, there were 450 platforms, 25% more than in 2016 even though 30 of the names had disappeared, according to a June 2017 report from analyst firm IoT Analytics. The vast majority focus on business markets, especially manufacturing/industrial, smart cities, energy and mobility.
“If there are 100 IoT platforms, then there is no platform, just aspirants,” according to the McKinsey report.
Conclusion
While there is confusion about exactly what is needed at the edge, there also is plenty of opportunity ahead across a variety of markets and technologies that intersect with the edge.
“If you go back a few years, everyone you spoke to in the chip industry was wondering what comes next,” said Simon Segars, CEO of Arm. “They had gone through many years where mobile was driving everything and there was tons of design work to do, and then mobile growth rates started flattening and everyone was pulling their hair out, wondering what is the next big thing. Now we have so many next big things it’s hard to know where to start. There are new communications protocols, whether it’s 5G, LoRa or Narrowband IoT, and new technologies that in themselves require a lot of innovation in semiconductor devices. You’ve got the world of AI driving chips in the cloud. There is inferencing at the edge, which is driving innovation in designs that eventually will underpin all of these technologies. The cloud itself is exploding, and there seems to be no end in sight there. And that is changing who is doing these leading-edge designs.”
On the edge, the only thing that is certain at this point is a lack of clarity. And that is driving a lot of companies to plant a stake in what promises to be an interesting opportunity when the dust ultimately settles.