
Disaggregation Of The SoC

Edge computing will change the basic structure of systems.


The rise of edge computing could do to the cloud what the PC did to the minicomputer and the mainframe.

In the end, all of those co-existed (although the minicomputer morphed into the commodity servers sold by companies like Dell and HP). What's different this time around is that the computing done inside those boxes is moving. It is being distributed in ways that were never previously considered feasible, and the impact on system-level design will be profound.

The closest the semiconductor industry has come to distributed computing in the past was multiple cores and accelerators in an SoC. But with more data being generated by more devices everywhere, the cost of processing that data in a central location, measured in energy, bandwidth, memory, processing and storage, is going up. The solution is to move much of that processing closer to the source of the data, which could well be on the periphery of boxes, devices or buildings.
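To make that cost equation concrete, here is a toy back-of-envelope comparison written as a short Python sketch. Every figure in it (sensor count, sample rate, radio energy per byte, the fraction of samples worth reporting) is an illustrative assumption, not a measurement:

```python
# Toy back-of-envelope comparison: raw upload vs. edge summarization.
# All figures below are illustrative assumptions, not measured values.

SENSORS = 1_000                 # devices in the field
SAMPLE_RATE_HZ = 1_000          # samples per sensor per second
BYTES_PER_SAMPLE = 4            # e.g., one 32-bit reading
EVENT_FRACTION = 0.001          # assumed share of samples worth reporting
JOULES_PER_BYTE_RADIO = 1e-6    # assumed energy cost to transmit one byte

raw_bytes_per_sec = SENSORS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
edge_bytes_per_sec = raw_bytes_per_sec * EVENT_FRACTION

print(f"Raw upload:     {raw_bytes_per_sec / 1e6:.1f} MB/s")
print(f"Edge-filtered:  {edge_bytes_per_sec / 1e3:.1f} KB/s")
print(f"Radio energy saved: "
      f"{(raw_bytes_per_sec - edge_bytes_per_sec) * JOULES_PER_BYTE_RADIO:.2f} J/s")
```

Even under generous assumptions, filtering at the source cuts upstream traffic, and the radio energy that goes with it, by orders of magnitude.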

The traditional computer, which has existed in various shapes and sizes since ENIAC in the 1940s, is being broken up into lots of little pieces. This is largely due to the capabilities of the technology developed since then, but the traditional way of achieving those benefits has run out of steam. Transistors still scale, but the performance and power improvements are scant. The main remaining benefit is area, and that area can be better leveraged doing other things rather than more of the same.

This has broad implications for designing chips at a system level. If processing can be distributed across multiple processors closer to the source, the volume of data that needs to be handled centrally drops dramatically. That provides the power and performance improvements that used to come from increasing transistor density. It saves the time needed to send that data back and forth, requires less energy to move it, and it pushes processing toward a more neuromorphic model. In the human body, for example, the spinal cord can trigger a reflex to pain well before the signal reaches the brain.
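As a loose software analogy for that reflex arc, the sketch below shows an edge node that acts on a local threshold immediately and forwards only the rare, exceptional readings upstream. The threshold value and the actuate()/report() hooks are hypothetical placeholders, not any particular platform's API:

```python
# Minimal sketch of the "reflex" pattern: react locally, report rarely.
# THRESHOLD, actuate(), and report() are hypothetical placeholders.

THRESHOLD = 90.0  # assumed local trip point, e.g., degrees C

def actuate():
    """Local 'reflex': immediate response, no round trip to the cloud."""
    print("edge: shutting down heater NOW")

def report(event):
    """Deferred, non-critical path: send upstream for broad analysis."""
    print(f"cloud-bound event: {event}")

def on_sample(reading: float):
    if reading > THRESHOLD:
        actuate()                      # react before any central system is involved
        report({"reading": reading})   # only exceptional samples travel upstream
    # In-range samples are handled (or dropped) locally and never leave the edge.

for r in (42.0, 88.5, 93.2, 41.7):
    on_sample(r)
```

The time-critical action happens at the edge; the central system sees only a summary of what occurred, on a timescale where latency no longer matters.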

The net result will be far more processors than before. And they all will need to work in the context of at least a system, and more likely a system of systems. But the big change is that they will be doing more of their work outside the traditional places where processing is done. In effect, that work is being distributed across billions of processors close to the data sources, with the results ultimately aggregated more centrally, where broad analysis can be done with less-critical response times.

This sounds straightforward enough as a progression of computing, but it will completely remake the chip industry. Performance, power and cost always have been the three critical elements in design, and that won't change. But how best to optimize those three elements has changed significantly. The changeover already has begun, as new markets such as automotive, AI, 5G and augmented/virtual reality reshape how and where computing is done.

For the old way of doing things, the end is near. For those able to embrace this change, the edge is even closer.


