Should Processing Take Place At End Nodes?

There are multiple considerations when it comes to deciding where data should be processed in connected systems.

Last week at ARM TechCon — which I found extremely interesting for the deep technical content — there was much discussion around where processing should happen in our connected world. (I’m really trying to stay away from the nebulous term, ‘IoT.’)

Some believe the processing should happen at the edge nodes, while others believe it should all take place in the data center; I've even heard concerns raised about how to spread the processing across different points along the path between the two.

What I find fascinating is the incredible power of technologies like vision processors, which have vast and untapped application opportunities when coupled with deep learning algorithms. When you stop and think about the connected nature of machines today, it boggles the mind to imagine all of the ways these technologies can, and will, be applied.

Last week, as part of its IoT push, ARM announced a number of new cores that allow for heavy processing at the end nodes, along with a cloud service. Intel has been making big strides as well, which I learned about recently when I spoke with Ken Caviasca, vice president in the IoT Group and general manager of platform engineering and development at Intel. On the question of where to process, Intel and ARM do seem to be in agreement.

Case in point: Intel's new Atom E3900 aims to enable much of that processing to take place at or near the sensor, rather than pushing everything to the data center, Caviasca said.

From a designer's point of view, and frankly from a big-picture energy and power point of view, this makes sense given the cost difference between processing in data centers and processing at the end nodes. I don't have the figures in front of me, but I will work on getting them.

Intel said it will tailor this Atom specifically for automotive-grade, in-vehicle experiences, with more details to come next year. Given the more stringent requirements of automotive, it makes sense to have a processor aimed specifically at that market.

On the question of processing location, Caviasca pointed to traffic cameras and sensor data as examples: sending data to a server for analysis has downsides, such as quality loss from video compression and time spent in transit, compared with processing the data at the device. And in an automotive software-defined cockpit, edge computing capability will make a difference, allowing a single system to drive the digital gauges, navigation, and advanced driver assistance functions. Backup sensors, bird's-eye-view parking, and side collision alerts also must respond within a reliable time, regardless of what the media or navigation system is doing at that moment.
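
To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python comparing on-device analysis with a compress-and-send round trip to a server for a single camera frame. All of the numbers are illustrative assumptions on my part, not figures from Intel or ARM.

```python
# Illustrative latency-budget sketch for a camera-based safety alert.
# All numbers below are assumptions chosen for illustration, not measurements.

EDGE_INFERENCE_MS = 30        # assumed on-device vision inference time
CLOUD_INFERENCE_MS = 10       # assumed inference time on a data-center GPU
NETWORK_RTT_MS = 80           # assumed cellular round trip to the server
ENCODE_DECODE_MS = 25         # assumed video compression/decompression overhead
ALERT_DEADLINE_MS = 100       # assumed response-time budget for a collision alert

def edge_latency():
    """Frame is analyzed on the device itself."""
    return EDGE_INFERENCE_MS

def cloud_latency():
    """Frame is compressed, sent to a server, analyzed, and the result returned."""
    return ENCODE_DECODE_MS + NETWORK_RTT_MS + CLOUD_INFERENCE_MS

for name, latency in (("edge", edge_latency()), ("cloud", cloud_latency())):
    verdict = "meets" if latency <= ALERT_DEADLINE_MS else "misses"
    print(f"{name}: {latency} ms -> {verdict} the {ALERT_DEADLINE_MS} ms deadline")
```

With these assumed numbers, the compression overhead and network round trip alone consume the alert budget before any analysis happens, which is exactly the kind of constraint that pushes safety-related functions toward the edge.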

Of course, ARM and Intel are just two of the many players with emerging technologies in vision processing for different application areas. As such, the debate over where to process is not over yet.


