Move Data Or Process In Place? (Part 2)

Second of two parts: Costs associated with moving data escalate when we move from chips to systems. Additional social and technical factors drive different architectures.

Chip architects, and even local system architects, have long found that the best way to improve total system performance and power consumption is to move memory as close to the processors as possible. This has led to cache architectures, and to memories tuned for those architectures, as discussed in part 1 of this article. But these architectures make several tacit assumptions that no longer hold when we start looking at larger systems. Discussions about the IoT and cloud computing have brought many of these to the forefront. The industry is still evolving, and it is not clear that the perfect solution has been found yet.

Edge or cloud processing?
We have seen the industry deal with this issue several times historically.

“In the 1980s, this discussion revolved around the mainframe versus personal computer,” recalls Tom Wong, director of marketing for design IP at Cadence. “How much of the processing should be done locally versus accessing the mainframe from far-flung places? In the late ’80s to early ’90s, the same discussions occurred with the advent of the client-server model. This was the age of mini-computers, super-minis, and workstations versus mainframe computing. Fast forward to the 21st century, where we now have low-power, high-performance embedded processors and DSP chips that provide incredible amounts of compute resources. We witnessed the advent of the smartphone, the evolution of the ubiquitous applications processors, SoCs for IoT, and the paradigm of edge computing versus cloud computing.”

The advent of machine learning changed things significantly. Rather than basing outcomes on local observation and computation, pooled data and analysis promised results that could not happen without large amounts of processing, far more than would ever be available at the edge. The industry also showed how cloud computing and shared, centralized services would be more efficient than each company creating and maintaining its own data centers.

“Nowadays, it’s easy to think that anything important in technology only happens in the cloud,” says Wong. “That is where all the excitement, investment and discussion are happening. However, processing data locally isn’t new. This is how things were done before the internet and the ubiquitous network connectivity we take for granted today in the industrialized world.”

Deciding where to process and when to move data is a system partitioning problem. “The two biggest power hogs associated with moving data are the radio and internal SRAM accesses,” points out Jim Bruister, director of digital systems for Silvaco. “The basic challenge is how to increase processing and SRAM power efficiency and throughput while lowering access power. Better SoC (CPU/SRAM/DSP and interconnect) architectures, chip process technology, and data compression will reduce overall IoT edge power while providing more capability to do things such as spectral analysis that requires collecting a lot of data samples.”
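
To see why that partitioning matters, consider a back-of-the-envelope comparison in Python. The energy figures below are illustrative assumptions, not measurements from Silvaco or anyone else, but the orders of magnitude are representative: radio energy per bit dwarfs SRAM and compute energy, so spending local operations (here, the FFT behind a spectral analysis) to shrink the payload usually wins.

```python
# Back-of-the-envelope comparison: transmit raw samples, or process locally
# and transmit only a summary? All energy figures are illustrative
# assumptions (plausible orders of magnitude, not measured values).
import math

RADIO_NJ_PER_BIT = 100.0   # assumed low-power radio energy, nJ/bit
SRAM_PJ_PER_BIT = 1.0      # assumed on-chip SRAM access energy, pJ/bit
MAC_PJ_PER_OP = 5.0        # assumed energy per multiply-accumulate, pJ

def transmit_energy_nj(bits: float) -> float:
    """Energy (nJ) to push `bits` through the radio."""
    return bits * RADIO_NJ_PER_BIT

def local_fft_energy_nj(n: int, bits_per_sample: int = 16) -> float:
    """Rough energy (nJ) for an N-point FFT: ~N*log2(N) MACs plus SRAM traffic."""
    macs = n * math.log2(n)
    sram_bits = 2 * n * bits_per_sample          # read and write each sample once
    return (macs * MAC_PJ_PER_OP + sram_bits * SRAM_PJ_PER_BIT) / 1000.0

n = 1024                                         # spectral-analysis window
raw = transmit_energy_nj(n * 16)                 # ship every 16-bit sample
local = local_fft_energy_nj(n) + transmit_energy_nj(8 * 16)  # ship 8 spectral bins
print(f"send raw samples:      {raw / 1000:7.1f} uJ")
print(f"process, send summary: {local / 1000:7.1f} uJ")     # ~100x less here
```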

What appears to have changed industry thinking is video, lots of it, with an increasing share of it high-definition. “The amount of data being generated by leaf nodes and gateways is untenable,” says Warren Kurisu, director of product management for the embedded systems division of Mentor, a Siemens Business. “That is being addressed by the industry by moving more of the cloud functionality closer to the edge. It enables you to have solutions that sit much closer to your data.”

Several organizations are working to create solutions for this, including the OpenFog Consortium and the Linux Foundation, which hosts a similar project called EdgeX Foundry.


Fig. 1: OpenFog’s view of the computing infrastructure.


Fig. 2: EdgeX Foundry’s approach.

But this is not just a power-related problem. Many IoT deployments face challenges related to latency, network bandwidth, reliability and security, which cannot be addressed in cloud-only models. Fog computing adds a hierarchy of elements between the cloud and endpoint devices, and between devices and gateways, to meet these challenges in a high-performance, open and interoperable way.
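
One way to picture what fog computing adds is a placement decision: run each task at the lowest tier in the hierarchy that can meet its deadline. The sketch below is a toy model, not code from OpenFog or EdgeX Foundry, and all of the tier names and timing numbers are invented for illustration.

```python
# Toy fog-placement rule: run a task at the lowest tier that meets its
# deadline. Tier names and all timing numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    round_trip_ms: float  # network round trip from the endpoint to this tier
    compute_ms: float     # assumed time to run the task at this tier

def place(deadline_ms: float, tiers: list) -> str:
    """Return the first (closest) tier that can finish within the deadline."""
    for tier in tiers:    # ordered endpoint -> gateway -> fog node -> cloud
        if tier.round_trip_ms + tier.compute_ms <= deadline_ms:
            return tier.name
    return tiers[0].name  # nothing qualifies: stay local and degrade gracefully

hierarchy = [
    Tier("endpoint", round_trip_ms=0,   compute_ms=40),
    Tier("gateway",  round_trip_ms=5,   compute_ms=15),
    Tier("fog node", round_trip_ms=20,  compute_ms=5),
    Tier("cloud",    round_trip_ms=120, compute_ms=1),
]
print(place(30, hierarchy))  # "gateway": the cloud round trip alone blows the budget
print(place(60, hierarchy))  # "endpoint": it meets the deadline, so stay local
```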

Hierarchical systems

Simple hierarchies include gateway devices. “Edge computing is a matter of meshing the local and the cloud appropriately through a gateway and distributing data storage and analytics in an optimal way,” says Cadence’s Wong. “This has to be done while ensuring the result is seamless to the end user. If data transmission were free and took zero time to transmit and receive from the cloud, then it would make sense to process everything centrally, in a giant server farm. It’s inexpensive, easy to scale, and ensures that data is always available for deriving all kinds of analytics.”

But data transmission costs rise with data volume and distance, and transmit-to-receive latency can become significant for time-critical operations. The importance of edge processing grows as that edge gets farther away and harder to access. Systems also have to be able to continue functioning even when the cloud is not available.
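
A common pattern that falls out of these constraints: handle time-critical events at the edge, forward only compact summaries to the cloud, and buffer those summaries so the system keeps working through an outage. The sketch below is hypothetical; the function names, window size, and threshold are all invented for illustration.

```python
# Hypothetical gateway loop: act on alarms locally, summarize a window of
# samples, and upload summaries only when the cloud is reachable.
import statistics
from collections import deque

ALARM_THRESHOLD = 90.0  # act locally, immediately, above this value

def local_actuate(sample: float) -> None:
    print(f"local response to {sample:.1f} (no cloud round trip needed)")

def try_upload(summary: dict) -> bool:
    """Stand-in for a real uplink; would return False when the cloud is down."""
    print(f"upload {summary}")
    return True

backlog: deque = deque(maxlen=1000)  # ride out temporary cloud outages

def handle(window: list) -> None:
    for s in window:
        if s > ALARM_THRESHOLD:
            local_actuate(s)         # latency-critical path stays at the edge
    summary = {"mean": statistics.fmean(window), "max": max(window), "n": len(window)}
    backlog.append(summary)
    while backlog and try_upload(backlog[0]):
        backlog.popleft()            # drain whatever the uplink will accept

handle([50.0] * 99 + [95.0])         # one local alarm, one summary uploaded
```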

“It doesn’t make much sense for raw data from billions of IoT endpoints/sensors to be transported to cloud data centers for processing, as it will add network latencies, especially for time-sensitive applications,” says Ravi Thummarukudy, CEO of Mobiveil. “By processing it locally at the edge, we could use the power of distributed computing, as well as remove network congestion and improve operational efficiencies. The processed data, however, needs to be transported to the cloud for storage, further processing and centralized analytics. I envision the edge devices utilizing embedded SoCs for better intelligent data processing. 4G and 5G network topologies already envisioned this and are making provisions to have compute servers co-located with wireless data centers to reduce network congestion.”

But this raises another question: Who owns the data in the cloud? Joe Costello, CEO of Enlighted, addressed this question in a speech at the Design Automation Conference. “The key in all [IoT systems] is to pick the right thing, make sure you are gathering the data, intelligent data, and processing it, but the data is the critical thing. The goal is the data. And you have to protect it like crazy because that is where the real value is. The flipside is that it is not enough to just have data. You have to do something with it.”

For many systems, data will remain proprietary. But for others, users may not get a choice, and in some cases that data could have negative implications.

Some edge nodes are a lot larger and more sophisticated than others. “The reality is that there will not be one monolithic architecture for every IoT device,” says Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. “It depends on the application and where the devices are going. You can argue that a car is an IoT edge device, and that needs a massive amount of intelligence to process and understand the data. In addition, understanding patterns across these devices will require processing within the cloud. Some intelligence will be in the devices, but collating intelligence across devices and making sense of it has to happen in a more central place.”

How does such a system get optimized when the pieces are owned by different companies? “That is probably what Masayoshi Son saw with Softbank and why they bought Arm,” suggests Ty Garibay, CTO for ArterisIP. “They want to figure out how to map from edge devices to servers and own everything in between so that it can be optimized. Apple and Google are perhaps the only other ones that could do this. Partnering with other people in the middle of your ecosystem is tough. You have to own the whole path and that sets these folks apart. It makes it possible for them to come up with a grand unified theory of how to communicate and optimize at each level and that can improve the end points and then you can assume things about the architecture on top of them. It suffers because it is not very interoperable, but they probably don’t care. Each group may try and build their own and see who wins.”

Autonomous vehicles
For autonomous vehicles to become possible, the sophistication of local processing has to increase further. “Probably the most visible manifestation of advanced-edge computing and analytics is the connected autonomous car,” says Wong. “With enormous amounts of sensor data, enormous local processing power enabled by the modern SoC/DSP, and a need to connect to more advanced data analysis tools in the cloud, autonomous cars are seen as the poster child of advanced-edge computing.”

There are multiple layers to this. “Each vehicle needs to make instant-by-instant (real-time) decisions that affect human safety for applications such as adaptive cruise control, collision avoidance systems, lane departure monitoring and mitigation, automatic emergency braking, etc.,” continues Wong. “Then there are other, more mundane decisions or reporting of their engine and other system statistics. At the same time, they are also providing data needed to support V2V and V2H systems with the appropriate amount and granularity of data.”

However, a car needs to be able to continue operating without a cell connection, and it cannot assume that other vehicles are similarly equipped. “Consider the cameras within cars,” says Drew Wingard, CTO at Sonics. “If an average car has 10 cameras, plus radar and LiDAR, and you were going to try to send all of that data up to the cloud, you would need a multi-GB/s connection, permanently. Second, there are latency issues. If there is a real-time responsiveness requirement, you cannot afford the latency of transmitting to the cloud and arbitrating access to some cloud resource to make a choice that gets sent back. Availability is an issue. You don’t want the self-driving car to stop just because it can’t get a radio connection. There are safety aspects to that.”
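
The arithmetic behind that bandwidth claim is easy to reproduce. Assuming 10 cameras with 1080p sensors at 30 frames per second and 12-bit raw output (illustrative figures, not specifications from Sonics or any particular vehicle):

```python
# Rough aggregate camera bandwidth. All figures are illustrative assumptions.
CAMERAS = 10
WIDTH, HEIGHT = 1920, 1080   # assume 1080p sensors
FPS = 30
BITS_PER_PIXEL = 12          # assume 12-bit raw output per pixel

raw_bps = CAMERAS * WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
print(f"raw camera data: {raw_bps / 1e9:.1f} Gb/s")      # ~7.5 Gb/s, ~0.9 GB/s
# Radar, LiDAR, and higher-resolution sensors push this into multiple GB/s.
# Even compressed ~100:1 (H.264/H.265), ten streams still need a permanent
# uplink of tens of Mb/s, and the encoder adds latency of its own.
print(f"compressed ~100:1: {raw_bps / 100 / 1e6:.0f} Mb/s")  # ~75 Mb/s
```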

For some functions, the car itself can be considered a closed system with its own edge nodes and cloud. “If there is a camera on the windshield taking in an HD image, that requires a couple of megapixels per second,” points out Michael Thompson, senior manager of product marketing for the DesignWare ARC Processors at Synopsys. “There is no good way to get that data off the windshield. You could route the cable off to the side and into the engine compartment, but the power consumption gets to be fairly large, so you place the processor right on the camera.”

Conclusion
Edge computing has the advantages of minimizing transmission costs and response times. But these advantages aren’t free. Remote applications can malfunction in ways that are hard to analyze and debug. Maintaining and upgrading distributed computing hardware can be complicated and costly. In addition, balancing loads between edge and center will always be tricky.

“Distributed processing means moving more complex devices out into the world, sometimes into harsh and difficult environments,” concludes Wong. “Edge processors aren’t sitting in a rack in a climate-controlled data center. There is significant cost and complexity in ruggedizing and maintaining these processors in such a harsh environment. The farther the edge is from the cloud and the more critical its decisions, the smarter it needs to be.”

Related Stories
Move Data Or Process In Place?
Part 1: Moving data is expensive, but decisions about when and how to move data are very context-dependent. System and chip architectures are heading in opposite directions.
How Cache Coherency Impacts Power, Performance
Part 1: A look at the impact of communication across multiple processors on an SoC and how to make that more efficient.
How Cache Coherency Impacts Power, Performance
Second of two parts: Why and how to add cache coherency to an existing design.
CCIX Enables Machine Learning
The mundane aspects of a system can make or break a solution, and interfaces often define what is possible.


