
Low Power At The Edge

What’s the real impact of putting a supercomputer in your pocket?


The tech world has come to the realization in recent months that there is far too much data being generated to process all of it in the cloud. Now it is starting to come to grips with what that really means for edge and near-edge computing. There are still no rules for where or how that data will be parsed, but there is a growing recognition that some level of pre-processing will be necessary, and that in turn will require massive changes in hardware for power and performance reasons.

This has several broad implications for future chip design. First, demand for huge increases in transistor density will continue, probably at a much faster rate than the doubling of transistor counts every two years or so. That demand is driven by the sheer volume of data that needs to be processed, but it comes at a time when conventional scaling at 7nm and below doesn’t provide the same power/performance improvements it did in the past. Rather than the 30% to 50% gains of earlier nodes, the numbers are shrinking to less than 20%, and even that requires new materials and architectures.
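A quick back-of-the-envelope comparison, using the bracketing figures above as purely illustrative numbers (not measurements for any particular node), shows how quickly those shrinking per-node gains compound over three node transitions:

\[
1 - (1 - 0.50)^3 \approx 88\% \quad \text{(three nodes at 50\% per node)}
\qquad \text{vs.} \qquad
1 - (1 - 0.20)^3 \approx 49\% \quad \text{(three nodes at 20\% per node)}
\]

Three generations of scaling that once delivered close to a 90% cumulative improvement now deliver roughly half, which is why architecture and data movement have to carry more of the load.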

One solution is less movement of the data, which speaks to both processing at the edge and processing closer to memory within those edge devices. A complementary approach is to use specialized processing elements, some of which may be at 5nm or 3nm, while others may be built on completely different architectures and process geometries. In fact, some of that processing may be done using analog approaches, an option that is beginning to garner attention now that the benefits of digital scaling have slowed. In addition, different architectural approaches are being explored for both digital and analog, as well as various packaging options to make all of this work together.

The second challenge is all about power, both in terms of the energy required to do the computing and the power dissipated as heat. Power has been the gating factor on design since 28nm, and in some cases well before that. Battery technology hasn’t improved fast enough to keep up with the increased compute requirements, which has kept a lid on clock frequencies in mobile devices over the past decade or so. Attempts to work around that with power management approaches such as dark silicon have proven wasteful of both silicon and energy: blocks need to be woken up and put back to sleep, and some signals have to travel longer distances because those blocks aren’t necessarily close to resources such as memory and I/O.
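For reference, the textbook first-order model of dynamic power (a standard relation, not one cited in this piece) makes clear why clock frequency is the first casualty when batteries and thermal budgets stop scaling:

\[
P_{\text{dyn}} \approx \alpha \, C \, V_{DD}^{2} \, f
\]

where \(\alpha\) is the switching activity factor, \(C\) the switched capacitance, \(V_{DD}\) the supply voltage, and \(f\) the clock frequency. With supply voltage scaling largely stalled at advanced nodes, any increase in \(f\) shows up roughly linearly in power and heat.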

As Arm CTO Mike Muller put it, the future appears to be “warm” silicon: partially on rather than fully on, with less wasted area. The new challenge is to make processing itself more efficient, and that requires building a bridge between general-purpose computing and highly specialized computing, optimizing hardware for different data types, potentially with lower precision for some of them.

This is a workable strategy on one level. A slew of startups and established companies are developing chips that promise 100X or greater performance improvements. In theory, these can be built as tiles connected through some inter-chip interconnect fabric, which could be optical to compensate for longer distances. Or they could take the form of a neuromorphic architecture, with smaller memories and processors scattered across a neural network.

But there is a third challenge that also needs to be addressed, which is the business side. All of these approaches are technically feasible, but none of them will achieve the economies of scale that Moore’s Law provided. They are highly optimized architectures that deliver extreme computational efficiency, yet they are grossly inefficient from a business standpoint. Solving that will likely require a radical restructuring of the supply chain, from design all the way through manufacturing, so that device makers can capitalize on what, for lack of a better term, amounts to mass customization.

This is a tall order, and it involves lots of changes in lots of places. But the intersection of physics and business ultimately is a math problem, and the answer is a distribution of possible solutions rather than a single point. While that distribution may be wide initially, it will continue to narrow as the solutions mature.


