The Evolution Of Pervasive Computing

IoT, edge and cloud are becoming demarcations for where to process data.


The computing world has come full circle toward pervasive computing. In fact, it has done so more than once, which from the outside may look like a more rapid spin cycle than a real change of direction. Dig deeper, though, and it’s apparent that some fundamental changes are at work.

The genesis of pervasive computing dates back to the introduction of the PC in 1981, prior to which all corporate data was kept in a closely guarded central database. The PC blew open the doors on that data, paving the way for the client/server model later in the decade, and the trend has been progressing ever since. Computing has been both centralized and decentralized at various times, depending on the cost of storage and access as well as security concerns, but the overall trend has been toward democratization of that mass of data.

There also is more data to consume. According to Cisco, total Internet traffic will reach 4.8 zettabytes per year in 2022, up from 1.5 zettabytes in 2017. Most of that data is useless, and not all of it is in the same format. In fact, by 2025, an estimated 80% of the data in the world will be unstructured, according to an IDC report. Yet this data never goes away completely, meaning it is stored somewhere, even if it takes longer to retrieve.

For the semiconductor industry, this is good news. All of this data will drive sales of processors, accelerators, memory, interconnects, IP, and manufacturing and test equipment, as well as related services. More data needs to be processed, and it needs to happen more quickly than in the past, which is where the idea of the “edge” fits in. Data needs to be cleaned up much closer to the source and refined further at various levels in the compute hierarchy, and this is where the biggest change is occurring.

The edge is rather vague and vast at this point. It has been roughly defined as anything between the sensor and the cloud. An edge device could be a car, a server in an on-premises data center, a smartphone, or a smart speaker. It’s highly likely we will see more striations of this market as it begins to take shape and companies figure out how to monetize it, but from a very high level this is still basically a client/server distributed processing model.

What makes it different from the past is the addition of localized intelligence about what gets processed where. This is basically smart partitioning, and it will have a big effect on both what gets passed along to the cloud and the value of that data from a security standpoint. It doesn’t make sense to just keep piling up data when the tools exist to clean up and organize that data much earlier in the compute/storage cycle. So in effect computing will still be distributed, but it won’t be randomly distributed or unstructured.

Unlike the initial IoT concept, where all data would be sent to the cloud through a gateway, this new compute model sends only some of that data to the cloud. The rest can be processed close to where it was generated, and further sorted somewhere between the endpoint and the cloud. Everything is still connected, but the processing is partitioned much more intelligently and dynamically.
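As a rough illustration of what that kind of edge-side partitioning can look like, consider the minimal sketch below. It is not any particular vendor's implementation; the names (`Reading`, `is_anomalous`, `summarize`, `partition`) and the threshold values are hypothetical, chosen only to show the idea of forwarding anomalies in full while reducing the bulk of the raw data to a compact summary before it leaves the edge.

```python
# Minimal sketch of edge-side data partitioning (illustrative only).
# All names and thresholds here are hypothetical, not any product's API.
from dataclasses import dataclass
from statistics import mean
from typing import List


@dataclass
class Reading:
    sensor_id: str
    value: float


def is_anomalous(r: Reading, low: float = 0.0, high: float = 100.0) -> bool:
    """Cheap local check: flag readings outside an expected range."""
    return not (low <= r.value <= high)


def summarize(batch: List[Reading]) -> dict:
    """Reduce a raw batch to a compact summary before it leaves the edge."""
    values = [r.value for r in batch]
    return {"count": len(values), "mean": mean(values),
            "min": min(values), "max": max(values)}


def partition(batch: List[Reading]) -> dict:
    """Decide locally what goes upstream: anomalies in full, the rest as a summary."""
    anomalies = [r for r in batch if is_anomalous(r)]
    return {"forward_raw": anomalies, "forward_summary": summarize(batch)}


# Example: 1,000 raw readings shrink to a handful of anomalies plus one summary record.
batch = [Reading("temp-01", float(v % 120)) for v in range(1000)]
upstream = partition(batch)
print(len(upstream["forward_raw"]), "anomalies forwarded,",
      upstream["forward_summary"]["count"], "readings summarized")
```

The point of the sketch is the split itself: the decision about what is worth sending upstream is made next to the sensor, so the cloud receives structured, pre-sorted data rather than a raw stream.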

Rather than just pervasive computing, we are entering an era of intelligent computing. This marks a fundamental change in computer science, and it will have a big impact on how chips and systems will be designed for the foreseeable future.

Related Stories
Data Confusion At The Edge
Disparities in processors and data types will have an unpredictable impact on AI systems.
HW/SW Design At The Intelligent Edge
Systems are extremely specific and power-constrained, which makes design extremely complex.
Where Is The Edge?
What the edge will look like, how it fits in with the cloud, what the requirements are for processing and storage, and how this concept will evolve.
Machine Learning Inferencing At The Edge
How designing ML chips differs from other types of processors.
Optimizing Power For Learning At The Edge
Making learning on the edge work requires a significant reduction in power, which means rethinking all parts of the flow.
Edge Knowledge Center
Memory Subsystems In Edge Inferencing Chips
Tradeoffs and their impact on power, heat, performance and area.


