From Cloud To Cloudlets

Why the intersection of 5G and the edge is driving a new compute model.


Cloudlets, or mini-clouds, are starting to roll out closer to the sources of data in an effort to reduce latency and improve overall processing performance. But as this approach gains steam, it also is creating some new challenges involving data distribution, storage and security.

The growing popularity of distributed clouds is a recognition that the cloud model has limitations. Sending the growing volume of end-device data to the cloud for processing is resource-intensive, time-consuming, and highly inefficient.

“There is a huge amount of data being created every day,” said Lip-Bu Tan, CEO of Cadence, in a presentation at the recent Cadence Live. “All of this data needs to be transmitted, stored and processed. All of this requires high performance compute, high-bandwidth transmission and high-density storage. This is an exciting time for innovation in semiconductors in architecture, design, EDA, IP and manufacturing ecosystem.”

Tan noted that 90% of all data that exists today was generated in the past two years, and 80% of that is video or images. Only about 2% of that data is being analyzed today. “There’s a huge opportunity to analyze that data,” he said. “That will drive new business models for all the different verticals.”

Much of that data needs to be analyzed locally. This is a marked departure from the original IoT concept, which assumed that 5G millimeter-wave technology would provide enough bandwidth and speed for tens of billions of IoT devices to connect to the cloud for nearly instantaneous results. Even under perfect conditions using mmWave, it takes too long. And as engineers working with 5G mmWave have learned, that technology isn’t just a speedier version of 4G. Signals don’t go through windows or around corners, they are easily interrupted, and they attenuate quickly.

As a result, the focus for mmWave has shifted from nearly ubiquitous small-cell implementations outside and inside of buildings, to better-defined infrastructure closer to the data sources. It also has forced a rethinking of what exactly 5G will be used for, namely line-of-sight communication for shorter distances, with some ability to bend around objects using beamforming. That makes it a viable option for connecting many more devices to edge-based servers, and one that is being heavily promoted by telecommunications companies, cloud providers, and chipmakers alike.

“We’re seeing the need for really high connectivity for up to 2.5 million devices in 1 square mile,” said Mallik Tatpamula, chief product and technology officer at Ericsson, during a Simulation World presentation. “This is a paradigm shift toward micro-data centers. This is the decentralized cloud, where you transfer data to the closest locations.”

Alongside 5G, there is a push to reduce the amount of data by sorting out what is useless and what is valuable closer to the source. That requires a fair amount of intelligence at the edge, particularly in safety- or mission-critical applications, where data needs to be scrubbed much more carefully so that important data isn’t discarded. From there, the data can be further processed locally or remotely and stored wherever it makes sense.

“When it comes to the pure economics of storage, a centralized cloud is nearly always going to deliver the best result,” said Brian McCarson, vice president and senior principal engineer in the IoT Group at Intel. “However, that’s not the only business constraint faced by all companies. Some companies are trying to deliver an immersive media experience. The more delay and lag you have, the less immersive it feels. And one way to overcome that lag is to minimize the number of data transactions in order to get that media processed. Every time you have to go from one cell tower to another, or from one centralized data center to another, that’s a tax you have to pay in latency. And it’s a tax you have to pay in cost, as well, because networks aren’t free. So in some circumstances it makes a lot of business sense to place that data center closer to the user.”
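
To put that hop-by-hop "tax" in rough numbers, the short sketch below adds up per-hop delays for a centralized path versus a cloudlet path. The figures and hop names are illustrative assumptions, not measurements, but they show how quickly hops accumulate into round-trip latency.

```python
# Rough latency-budget sketch. All numbers are illustrative assumptions,
# not measurements; the point is that every extra hop adds delay.

def round_trip_ms(network_hops_ms, processing_ms):
    """Round trip = network delay out and back, plus one processing step."""
    return 2 * sum(network_hops_ms) + processing_ms

# Centralized path: device -> cell tower -> metro aggregation -> backbone -> central data center
centralized = round_trip_ms([5, 5, 10, 20], processing_ms=10)

# Cloudlet path: device -> cell tower -> nearby micro-data center
cloudlet = round_trip_ms([5, 2], processing_ms=10)

print(f"Centralized round trip: {centralized} ms")  # 90 ms
print(f"Cloudlet round trip:    {cloudlet} ms")     # 24 ms
```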

The cloud is essential for computationally intensive applications, such as training algorithms or drug research, but it also has some drawbacks in terms of latency and privacy. Moreover, the performance of edge devices is improving dramatically as specialized processing elements and heterogeneous architectures are designed in, reducing the need to send everything all the way to a remote data center.

“You cannot afford latency to the point where it goes to the cloud and back because the distances are too long,” said Rita Horner, senior staff product marketing manager at Synopsys. “Even with fiber optics, there are so many hops that it doesn’t work. A lot of data can be processed closer to the source and it can be done faster, which is great for IP providers because there are more opportunities to make chips. Mini-clouds also need to have a certain level of processing and storage. They have to have attached storage, and it has to be low-latency. And potentially, they still may have to connect to the cloud, so there is an opportunity for high-speed interconnects, too.”

Ironically, data centers started out at the data source. Over the past decade they became increasingly centralized into clouds, which were so power-intensive they often were located near hydroelectric plants. Now they are being dispersed closer to the sources of data, and they are becoming much more diverse and independent as companies begin to customize their solutions.

“There is an acceleration of technologies for neural networks,” said Kris Ardis, executive director at Maxim Integrated. “We’re seeing a lot of vision applications, such as facial recognition, in things like security cameras, robots — not necessarily completely autonomous navigation, but navigation assists to go around objects, do inspections and avoid collisions.”

All of those require much faster response time than sending everything to the cloud. “There is a sliding scale of how much energy it takes to move data from one transistor to another, versus how much it takes to move that data from Southern California to Washington State, for example,” said Ardis. “You very quickly get beyond a manageable power amount. Local processing is always going to be better, as long as you can do it. You are still going to need the cloud to aggregate and collect data to make macro decisions, but for decisions that you do locally, edge processing is always preferable.”

That has prompted a lot of activity recently. “The cloud is being disaggregated and moved to a location where the service providers do not have control,” said Mark Kuemerle, technical director and integrated systems architect at Marvell Semiconductor. “Security will be a key driving feature in this, and one of the cornerstones of this strategy is to be able to leverage qualified security IP. There also is an opportunity for diversification of processing, so there will be different ISAs to offload compute. We’re seeing huge interest in architectures that allow you to share common memory, even at a distance. And we’re going to see more of this with CXL and memory access beyond the rack.”


Fig. 1: Compute/server hierarchy today and how technologies will be applied to move data between and within each. Source: Semiconductor Engineering

Architectural challenges
One of the big issues will be figuring out what needs to stay in sync among data centers and how to keep that data coherent over longer distances. That involves both the structure of the data as well as the content, which can vary with mirrored servers as data is modified. Generally it will update faster at the edge than in the cloud, and those split-second variations can have an impact in time-sensitive markets such as banking or automotive.
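
A simplified way to picture that divergence is a last-write-wins merge, one common (and lossy) policy for reconciling a record that was updated at the edge before the cloud copy caught up. The record fields, timestamps, and policy below are hypothetical and only meant to illustrate the failure mode, not to describe any particular vendor's approach.

```python
# Simplified last-write-wins reconciliation between an edge replica and a
# cloud replica of the same record. Fields and timestamps are illustrative.

from dataclasses import dataclass

@dataclass
class Record:
    key: str
    value: float
    updated_at_ms: int  # wall-clock time of the last write

def merge(edge: Record, cloud: Record) -> Record:
    """Keep whichever copy was written most recently. Simple, but it
    silently drops the losing write -- the split-second divergence that
    matters in time-sensitive markets."""
    return edge if edge.updated_at_ms >= cloud.updated_at_ms else cloud

edge_copy = Record("account_42", value=98.60, updated_at_ms=1_000_050)
cloud_copy = Record("account_42", value=97.10, updated_at_ms=1_000_000)

print(merge(edge_copy, cloud_copy))  # edge wins: it was written 50 ms later
```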

“The cloudlet idea is subdivided into two areas,” said Panch Chandrasekaran, marketing director for the infrastructure line of business at Arm. “One is a 10 millisecond maximum latency, where that’s enough performance. The other is a 1 millisecond maximum, which may include something like a factory.”

On top of that, there are several main challenges in building these systems and making them easy to use, Chandrasekaran said. One is figuring out how to secure data across a distributed network of servers. The second is how to create a common infrastructure so that not everything has to be built from scratch every time. The third is creating applications that make full use of the underlying hardware, which is essentially a hardware-software co-design approach.

“You have to plan this infrastructure based on the integration of several processes,” he said. “But you also can bifurcate this so that you reserve the huge compute power only for processes that need it. The penalty here is the movement of large quantities of data. So you need to balance how you load up edge resources with what gets processed in the cloud, and that means getting rid of useless data at the edge. Filtering can happen right at the source, and you don’t need a lot of compute power to do that.”
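
A minimal sketch of that kind of source-side filtering is shown below. It assumes a simple change-threshold policy, which is only one of many possible approaches, and the threshold value is a hypothetical tuning parameter.

```python
# Minimal sketch of source-side filtering: forward a sensor reading only
# when it changes enough to matter. The threshold is a hypothetical
# tuning parameter, not a recommended value.

from typing import Iterable, List

def filter_readings(readings: Iterable[float], threshold: float = 0.5) -> List[float]:
    """Keep a reading only if it differs from the last kept value by more
    than `threshold` -- a crude way to discard redundant data at the edge."""
    kept: List[float] = []
    last = None
    for value in readings:
        if last is None or abs(value - last) > threshold:
            kept.append(value)
            last = value
    return kept

# A mostly flat stream with one event worth reporting.
stream = [20.0, 20.1, 20.05, 20.2, 25.3, 25.4, 20.1]
print(filter_readings(stream))  # [20.0, 25.3, 20.1] -- only the changes get uploaded
```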

Security issues
The cloud has a solid reputation for being secure, although the number of attacks has been increasing steadily for some time. But for distributed clouds, the problem is potentially much worse.

“Once you disaggregate, you give people physical access,” said Alric Althoff, senior hardware security engineer at Tortuga Logic. “You think about a coffee shop where there are multiple devices with varying levels of security that are offloaded to some more powerful system. Well, the people who have regular access to that are responding to customer complaints. So maybe there’s augmented reality and you can’t see the menu. So you go into the manager’s office, get the key, unlock the door to the server and wiggle the connection. That works. The problem is that everyone knows where the key is.”

There also are design problems with some of these systems. “A lot of the attack vectors involve misconfigurations and simple mistakes,” said Althoff. “The threat surface is broader when they haven’t closed it down. In addition, because of all the diversity, every company has its own way of doing something. You have to get all of that into an audit chain with software, standards and compliance.”

On the flip side, the value of decentralized data is lower, making it less rewarding to hack into a cloudlet. “While it’s more accessible — it’s not Fort Knox, like some of the cloud systems — you cannot compromise the integrity of all the pieces,” said John Hallman, product manager for trust and security at OneSpin Solutions. “The smaller pieces are more vulnerable, but the opportunity to improve security is there, too. So while it may be easier to hack into the individual pieces, it may be harder to get at the individual pieces that you want to protect.”

At the very least, cloudlet providers — which will range from on-premises IT departments to off-premises data centers — will have to make some tradeoffs involving security.

“Distribution of data has the advantage of moving it closer to the end user, and reducing latency, which is critical for real-time applications,” said Gijs Willemse, senior director of Security IP at Rambus. “However, distribution increases the attack surface, which increases the risk that an adversary will find a vulnerability to exploit. Since an application may require distribution for performance reasons, the objective should be to increase the level of security by anchoring the secure solution in hardware. Securing the locally stored data, as well as the communication, is essential, and applies to mini-clouds as well as data centers. The same methodologies, starting with a root of trust and going up to network protocol security, are required in both cases. If these mini-clouds are operating in a more accessible environment, the protection levels indeed will have to increase.”

So will the adoption of best practices. Hallman noted that the U.S. Dept. of Defense assumes that everything is hackable, and at some point everything will be compromised. “So you turn over different protection in shorter intervals, and while you don’t necessarily assume zero level of trust in everything, you are going to have to verify it. What you want to do is raise the level of trust in each part, but what you’re really doing is increasing verification.”

Others agree. “To have secure servers, you need to be able to trust and secure the data,” said Tom Katsioulas, head of the TrustChain Business for Siemens Digital Industries Software. “If you don’t have trusted data, there are no valuable services to offer. On top of that, all assets have to be trusted, whether those are chips or capacitors or resistors, and that means they all have to be traceable.”

Communications
Across this landscape, 5G will play a significant role. However, that role could vary greatly by region.

“In the U.S., they’re using the higher S band, which is used for things like radar,” said Kurt Shuler, vice president of marketing at Arteris IP. “In other parts of the world, they’re using much lower frequency bands, which is more useful. It could replace or augment what they already have on a cell phone. So in the United States, the use cases are largely around things like factories and automotive. Overseas, that’s much different.”

One such use case involves industrial robots, Shuler said, where microcell chipsets are used to control and monitor the activities of those robots. Most of those are fixed robots, but response time is critical.

Intel’s McCarson said 5G hype is beginning to turn into reality, particularly in industrial applications. “From a machine builder perspective, you can operate with 5G on the factory floor and eliminate the Ethernet connection,” he said. “It can cost $1,000 to drop a single Ethernet connection into a factory. With 5G, the cost is much lower. That means the cost of deploying intelligence decreases, and it’s more flexible, so you can retool quickly.”

5G also has other benefits. “In the industrial segment, any machine that emits an electrical arc will interfere with WiFi,” he said. “With 5G, you don’t have the same challenge.”

Important as it is, however, 5G is just one piece of the puzzle.

“We’re seeing huge interest in architectures to allow data centers to share memory between systems, even at a distance,” said Marvell’s Kuemerle. “We’re going to see that with CXL, where you have memory access beyond the rack. That’s a lot of connectivity required. We’re also going to see more and more optical links to get the power down, which is a big challenge in data centers. And we’re going to start to see more data centers that are coherent over distance.”

Conclusion
There are two trends that bear watching with distributed clouds. One involves growing compute capabilities at the edge and closer to the edge. More compute is shifting further left in the data flow, into the span from end devices to edge data centers, which could have a significant impact on the usefulness of data, as well as who owns and has access to that data.

The second is a push by cloud vendors into even faster systems, fueled both by in-package optical connections and quantum computing. That could provide orders of magnitude increases in performance for very large, computationally intensive applications, further differentiating them from the edge. But depending upon where the bulk of data is being processed, it also could have implications for how the cloud business model evolves over the next decade.

Regardless of how this unfolds, the compute hierarchy is shifting again. Historically, anytime that happens — and it has with the introduction of the minicomputer, the PC, networking, and the smart phone — it has a sizable impact on the entire semiconductor supply chain. That next rev of the industry is just beginning, and the effects will be felt around the globe.



