Racing To The Edge

The opportunity is daunting, but so are the challenges for making all the pieces work together.


The race is on to win a piece of the edge, despite the fact that there is no consistent definition of where the edge begins and ends or how the various pieces will be integrated or ultimately tested.

The edge concept originated with the Internet of Things, where the initial idea was that tens of billions of dumb sensors would communicate through gateways to the cloud. That idea persisted until last year, when there was widespread recognition that even the fastest communication infrastructure available was still too slow and inefficient to stream video and other types of data to some remote location for processing, sorting, and storage—and then deliver some amount of information back to the device.

This is particularly true for automotive applications, where image sensors, LiDAR and radar are generating an estimated 15 terabytes of data per hour. That number could go significantly higher, too, as 360° imaging is added into vehicles and more sensors are included for accident avoidance or prevention. As a result, there are now expected to be various intermediate steps, all of which will fuel demand for high-performance chips.
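
To put that estimate in perspective, a rough back-of-the-envelope calculation (a sketch using only the 15 TB/hour figure above, and assuming decimal terabytes) shows the sustained uplink a single vehicle would need in order to stream its raw sensor output offboard:

```python
# Sketch: what sustained uplink would streaming 15 TB/hour of raw sensor
# data require? (Assumes decimal terabytes; figures are illustrative.)

TB_PER_HOUR = 15                                  # estimated raw output per vehicle
bits_per_second = TB_PER_HOUR * 1e12 * 8 / 3600   # bytes/hour -> bits/second

print(f"Sustained uplink needed: {bits_per_second / 1e9:.1f} Gb/s")
# -> roughly 33.3 Gb/s per vehicle, far beyond any practical cellular
#    uplink, which is why processing has to move toward the edge.
```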

The result has been a stampede among systems vendors, established chipmakers, and a slew of startups seeking a place in this market. There are good reasons for this enthusiasm:

  • This is a brand-new, untapped market. At this point no one owns it, and it can generate a lot of business both upstream and downstream.
  • Unlike the IoT, this market will support more expensive systems that use both off-the-shelf and custom chips.
  • There is no shortage of venture funding available, and a multitude of both open and proprietary instruction set architectures are available.

Still, the idea of how computing gets partitioned and where data gets processed has created a fair share of confusion.

“If you go back a few years, everybody knows everything is going to cloud, everything’s going public cloud,” said Paul Nash, group product manager for Google Compute Engine at Google Cloud. “But is it going to the edge? Where is it going? People are beyond assuming that any single definition of what the right way is will be accurate, so it depends a lot on where customers are and what their workloads are. And it really is this kind of multi-cloud thing where, workload by workload and business case by business case, customers are trying to make the right decision about where things go.”

Cloud providers and large system vendors are calling this a hybrid cloud approach. Others are differentiating between the edge and the cloud. And within each of these, multiple different segments make it difficult to predict where this market is going. What is becoming clear, though, is that sending all data to hyperscale clouds such as Google Cloud or Amazon Web Services is grossly inefficient for some applications.

“Not everything will go to the cloud,” said Lip-Bu Tan, president and CEO of Cadence. “With some of this data and the analytics, if you send it to the cloud it’s too slow. This is why the edge is going to take off. The edge will become a very intelligent edge, and you will see a scale up of storage and networks. The hyperscale cloud will continue to explode because of all the data being sent there. But for automotive and industrial applications, much of this will be at the edge in what is basically a mini-cloud and it’s going to be very power-efficient. So there will be an automotive cloud and other vertical clouds. That’s why I see the edge as the next big thing.”
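
That workload-by-workload placement decision can be boiled down to a simple heuristic: keep a workload at the edge unless the cloud can meet both its latency budget and its bandwidth demand. The sketch below is illustrative only; the round-trip time, uplink capacity, and workload profiles are hypothetical numbers chosen to show the shape of the decision, not measurements from any provider.

```python
# Hypothetical sketch of edge-vs-cloud workload placement.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_budget_ms: float   # maximum tolerable round-trip time
    raw_data_mbps: float       # data rate the workload generates

CLOUD_RTT_MS = 80.0    # assumed round trip to a hyperscale data center
UPLINK_MBPS = 100.0    # assumed uplink capacity to that data center

def place(w: Workload) -> str:
    """Send a workload to the cloud only if the cloud can meet both its
    latency budget and its bandwidth demand; otherwise keep it at the edge."""
    if w.latency_budget_ms < CLOUD_RTT_MS or w.raw_data_mbps > UPLINK_MBPS:
        return "edge"
    return "cloud"

for w in (Workload("collision avoidance", 10, 4000),
          Workload("fleet analytics", 5000, 2)):
    print(f"{w.name} -> {place(w)}")
# collision avoidance -> edge, fleet analytics -> cloud
```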

It’s also a somewhat fluid concept because there is no clear demarcation between the edge and the cloud. Lisa Davis, vice president of the data center group at Intel, predicts that computing will become much more granular, depending upon markets and individual use cases. “Most enterprises will have at least eight different clouds within their environments,” she said.

So what actually constitutes the edge? Wally Rhines, CEO Emeritus at Mentor, a Siemens Business, breaks it down into three distinct areas—edge devices that are collecting data, edge gateways, and edge systems in server centers. Each of those could utilize a variety of chips, from FPGAs to custom ASICs, along with different packaging options, deployed across millions of custom gateways. Security will need to be handled differently for each segment.

“There will be millions of nodes that are collecting data without filtering, so you’re going to see more and more security being pushed down to the chip level,” Rhines said. “The cloud providers have security algorithms because they don’t want anyone getting into the core of the cloud without pre-processing. That is what the big systems vendors call the edge. We tend to look at collection points at the factory and business as edge nodes for the IoT.”

How the edge ultimately meshes with the cloud remains to be seen. The general consensus among large tech companies is that the future will hinge on some sort of hybrid model, but that the edge will become increasingly important.

“If you think about other spaces, if you think about public sector, cities, enterprise, there is a whole set of very compelling use cases for the service provider edge, maybe in conjunction with the enterprise edge,” said Michael Beesley, vice president and CTO of Cisco’s Service Provider Business Unit. “So it’s not replacing the central cloud, but augmenting it with regard to security controls, geolocation of data, and compression of data. You think about smart cities or high-bandwidth IoT applications, and there is a need for a layer of real-time processing and then a layer of data aggregation and compression, because if all that raw data is heading to your scaled-up data centers, at some point we won’t be able to get that data there. We certainly won’t have the transport capacity to get that data into the data center. We probably won’t have the ingestion capacity inside that data center for all the raw data. So there are many use cases that can take advantage of pre-processing and data reduction at the edge, as well.”
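
The aggregation-and-compression layer Beesley describes can be as simple as replacing raw samples with per-window summaries before anything leaves the edge. This is a minimal sketch; the window contents and summary fields are hypothetical, not a description of any vendor's pipeline.

```python
# Toy edge pre-processing: forward a compact per-window summary to the
# data center instead of every raw sensor sample.
import statistics

def summarize(window):
    """Reduce a window of raw sensor readings to one small record."""
    return {
        "n": len(window),
        "mean": statistics.fmean(window),
        "min": min(window),
        "max": max(window),
    }

raw = [20.1, 20.3, 19.8, 20.0, 35.2, 20.2]   # e.g., one second of readings
print(summarize(raw))   # one record uploaded instead of six samples
```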

Safety lives at the edge
“The edge includes a lot of the stuff where people are most concerned about things that can kill you, like cars and robots and medical devices,” said Kurt Shuler, vice president of Arteris IP. “These things can kill you two ways. One is a cosmic ray and the traditional functional safety use case, where it flips a bit and then it goes awry. The other way is everything works as intended, however what it does and what it decides to do from its neural net application is the wrong thing. There’s not a cosmic ray. There’s not a hardware safety problem. The safety of the intended function is bad. (There is a new specification out for that, ISO/PAS 21448:2019 Road Vehicles — Safety of the Intended Functionality.)”

This is where the edge gets complicated. Assisted or autonomous vehicles need a certain amount of internal processing and external communication, whether that is to another car or an edge server or cloud.

“There are a lot of differences in assumptions in the algorithms used to drive a car,” said Jeff Phillips, chief solutions marketer for automotive at National Instruments. “You want a smooth ride but you also want 100% safety. A lot of decisions need to be made in the algorithms themselves. In the time between now and full autonomy, there will be more services added so you don’t have to take over driving. But in all of these cases, reliability will be a differentiator, and culpability will lie with the OEM.”

All of this happens at the edge, and assisted driving is one of the key edge applications. But how all of this gets split up has a big impact on not only processing and reaction time, but also security. Until the edge is better defined, understanding where and how to implement security is difficult.

“There are some building blocks that are being used to make this work,” said Arteris IP’s Shuler. “For instance, if someone is using Arm, there is TrustZone, which can be thought of as a secure virtual machine. The secure section can access outside, but nothing from outside can access the secure section. We have things that we can do in hardware at the interconnect level, where we can create essentially one-way gates/firewalls where you can get in this way but you can’t get back out into the secure area. It’s just like a firewall that you have on your WiFi router at home. It’s the same concept. You let certain things through, and other things you don’t let through. You take hundreds or thousands of these and they’re created for you, but you set some baseline rules for how they act, and thereby create a hardware-based security system. That is part of an overall security strategy. From a security standpoint, AI splits into two parts: there’s the data center, where the neural net is trained, and there’s the edge, where the neural net is used for inference. The edge devices are the Echos, the cars, the industrial robots and so on.”
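
Conceptually, the one-way gates Shuler describes behave like address-range firewalls in the interconnect. The sketch below models that idea in software purely for illustration; the address window and function names are hypothetical, and real interconnect firewalls are implemented in hardware logic, not code.

```python
# Conceptual model of a one-way interconnect gate: secure initiators can
# reach anywhere, but non-secure initiators are blocked from the secure
# address window. (Addresses and names are hypothetical.)

SECURE = range(0x1000_0000, 0x2000_0000)   # assumed secure address window

def allow(initiator_secure: bool, target_addr: int) -> bool:
    """Only secure initiators may touch targets in the secure window."""
    if target_addr in SECURE and not initiator_secure:
        return False        # non-secure master -> secure slave: blocked
    return True             # all other transactions pass

print(allow(initiator_secure=False, target_addr=0x1800_0000))  # False: blocked
print(allow(initiator_secure=True,  target_addr=0x3000_0000))  # True: secure side can reach out
```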

Reliability and interoperability at the edge
Another piece of the puzzle involves the reliability of devices and systems operating at the edge. This is obvious for automotive, medical, and industrial applications. What is less obvious is how supporting technologies such as 5G communications will affect their operation. An accident can occur because information housed in an edge cloud was either too far away to provide a response in time, or because that cloud couldn’t process the information fast enough.

The challenge is that multiple pieces of the edge need to work reliably together, and not all of them are progressing at the same speed or with the same goal in mind. As a result, when various quality-control steps such as test are combined, achieving adequate coverage becomes more difficult.

“If you look at digital and mixed-signal, different blocks work together, and a universal ATE can cover many of those in one platform and still ensure the devices are fully functional,” said Shu Li, business development manager at Advantest. “But as the technology is advancing, we are seeing the combination of DC, digital, RF and 5G into one device. So the test programs get bigger and the cost of development goes up. You have to worry much more about the quality of the test program, and time to test in production increases. To get coverage higher, you need to spend more time. That becomes a tradeoff.”
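
Li's tradeoff can be made concrete with a standard diminishing-returns model of fault coverage, where each added test pattern catches a fixed fraction of the remaining faults. The numbers below are purely illustrative assumptions, not Advantest data.

```python
# Illustrative coverage-vs-test-time tradeoff under an assumed
# diminishing-returns model: each pattern catches 1% of remaining faults.

def coverage(patterns: int, per_pattern_yield: float = 0.01) -> float:
    """Fault coverage after n patterns, assuming diminishing returns."""
    return 1 - (1 - per_pattern_yield) ** patterns

for n in (100, 300, 600):
    print(f"{n} patterns -> {coverage(n):.1%} coverage")
# 100 -> 63.4%, 300 -> 95.1%, 600 -> 99.8%: each extra point of
# coverage costs disproportionately more test time.
```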

This is why the edge is so critical in this equation. It’s also one of the reasons that companies are beginning to embrace standardization in this market. This is already happening in specific markets such as automotive, where ISO 26262 has risen out of obscurity to the point where every chipmaker understands its impact. With 5G and other communications standards, though, there are still unanswered questions about how to test arrays of antennas, despite the fact that they may be a critical element in the safety of edge devices.

“Standardization is actually going to be pretty important as the edge evolves and matures, for two main reasons,” said Cisco’s Beesley. “You’ve got a multitude of technology providers and vendors. There’s going to have to be some level of standardization, interoperability, and mix-and-match ability between all the technological piece parts that we all offer to our customers, and that’s across the stack. That’s the compute infrastructure, it’s the virtualization and orchestration layers, and it’s standardization and interoperability around the VNFs that might run on top of it.”

The second piece involves application vendors. “If I’m a gaming company, I don’t want to tweak, orchestrate and manage my edge workloads to provide that real-time gaming on a per-network basis,” he said. “I just want to put that front-end node at the ‘edge,’ and I don’t want to get into the details of each and every service and every regulatory environment on a country basis. So being able to standardize the northbound offerings to value-added app vendors is also going to be very, very important to get beyond the infrastructure workloads. The infrastructure workloads are going to happen anyway: virtualized RAN, distributed packet core user planes, caching CDNs. Someone who is building a network is motivated to put those at the edge anyway, because it gives a better experience to the customers and lowers expenses. But the real jackpot is going to be attracting third-party app publishers. We have to do a good job in making that seamless, cohesive and somewhat standardized across infrastructures, countries and networks.”

Conclusion
There is no question that the edge will be important, no matter how it is defined. What remains to be seen is how well companies work together to integrate the various pieces that ultimately define the edge.

At the very least, there will be massive churn and change as these pieces are defined, and as different architectures and tradeoffs are integrated into edge devices, edge clouds, and everything else in between. The question now is whether companies can collaborate to make significant use of the data that is generated at the edge, and how quickly that will happen.

“We all talk about lots of data being created,” said Ravi Pendekanti, senior vice president of server solutions at Dell. “Stats say only 2% is being utilized. Frankly, five or seven years from now if we roll back to the same place, if you say 20% to 30% is being utilized, that will be a big change. Every year we are collecting twice as much as the prior year, so it’s a huge change.”
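
Taken at face value, those figures imply an enormous increase in the amount of data actually put to work. The sketch below layers two assumptions on top of the quote, six years of annual doubling and a 25% utilization rate (the midpoint of the 20% to 30% scenario); neither is a Dell projection.

```python
# Assumed scenario based on Pendekanti's figures: volume doubles yearly,
# and the utilized share rises from 2% to 25% over six years.

data_now, share_now = 1.0, 0.02     # normalize today's volume to 1
data_later = data_now * 2 ** 6      # six years of annual doubling
share_later = 0.25

growth = (data_later * share_later) / (data_now * share_now)
print(f"Data in active use grows {growth:.0f}x")   # -> 800x
```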

—Ann Steffora Mutschler contributed to this report.



