Knowledge Center

Edge Computing



Edge computing, or edge processing, is a way of reducing the amount of data that must be processed centrally, whether in data centers or in commercial or private clouds.

The edge is an idea that has been kicking around in one form or another in computer science for decades. In fact, the whole client/server architecture of the late 1980s was based on a similar, although much cruder, model: most data was kept centrally, but individual users could do their work locally without having to send data back and forth to the server. What's new is that the amount of data generated by sensors has exploded, and much of it can now be processed automatically using machine learning or AI. As with earlier incarnations, there is too much data to send everything back and forth from the source, so some of it must be pre-processed or fully processed at or near that source, separating useful patterns from irrelevant data.
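The pre-processing idea above can be sketched in a few lines: the edge device transmits only readings that differ meaningfully from the last value sent, dropping the rest locally. The threshold and the data shape here are illustrative assumptions, not details from the text.

```python
# Minimal sketch of edge pre-processing: only readings that change
# meaningfully are sent upstream; the rest are dropped at the edge.
# The 0.5 threshold and scalar readings are illustrative assumptions.

def edge_filter(readings, threshold=0.5):
    """Yield only readings that differ from the last transmitted value
    by more than `threshold`; everything else stays at the edge."""
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            last_sent = value
            yield value

raw = [20.0, 20.1, 20.2, 23.5, 23.6, 19.0, 19.1]
sent = list(edge_filter(raw))
print(sent)                              # → [20.0, 23.5, 19.0]
print(len(sent), "of", len(raw), "readings sent upstream")
```

Real edge pipelines use far more sophisticated filtering (inference, compression, aggregation), but the principle is the same: decide locally what is worth transmitting.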

This data cleansing process is essential in applications such as autonomous vehicles, where streaming video can generate an estimated 15 terabytes of data per hour per vehicle. With an estimated 1 billion passenger cars on the road, that would produce an astronomical amount of data, much of it useless. Add to that all of the other connected devices, and these numbers quickly become unmanageable. This is where edge computing/processing becomes important.

At this point, however, there is no clear demarcation point for the edge. In some cases, it could be a single device or system, such as a car. In others, it may be a local or regional server. Terms range from edge clouds and fog servers to automotive, industrial, and local clouds. In general, though, the idea is to limit the amount of data that needs to be transmitted, because moving large volumes of data is expensive, time-consuming, and inefficient.

