Rethinking Computing Fundamentals

Memory and processors are still separate, but that could change as the volume of data increases.


New compute architectures—not just new chips—are becoming a common theme in Silicon Valley these days. The whole semiconductor industry is racing to find the fastest, cheapest, lowest-power approach to processing.

The drivers of this shift are well documented. Moore’s Law is slowing down, in part because it’s becoming more difficult to route signals across an SoC at the latest process nodes. There is RC delay to contend with, along with a number of physical effects such as thermal migration, electromigration, and dynamic current density. It’s also getting more expensive, partly because it takes a lot of time to engineer around those electrical and thermal issues, and partly because it’s becoming harder to manufacture these chips. The lithography alone has taken the better part of five process nodes to reach a point where it is commercially viable.

Alongside these chip-related issues, the volume of data is exploding. Tens of billions of things are sending data to other things. Over the next decade, there will likely be trillions of things. There are sensors everywhere, and baselines are being established to map every change of every sort, from road conditions to air quality to sugar levels in a person’s bloodstream. Some of this data is useful and most of it is not, but it all needs to be processed and sifted to figure out which is which.

This requires massive compute power, of course. But it also requires a rethinking of basic compute architectures. In the early days of computing—when computing was done by people rather than machines—military efficiency experts looked at the flow of information about war materiel and recognized that if they arranged the desks of these human “computers” in a certain way, they could add significant efficiency to the process.

A similar thing is happening in the age of sensor-driven vehicles, the IoT/IIoT, and giant cloud operations, only this time it’s machinery that needs to be arranged rather than people. As the amount of data increases, it makes more sense to move the processors than the data.

Rick Gottscho, chief technology officer at Lam Research, alluded to these changes in a presentation to analysts this week. “The industry will have to merge logic with memory,” he said. “The memory device essentially becomes an analog processor with very rapid data transfer. The power drops and the speed increases.”

He said that moving this approach from volatile memory to the various types of non-volatile memory now under development will yield orders-of-magnitude improvements in performance and power. This is an interesting idea with a number of possible ramifications, including how analog data is processed and the best ways to do that.
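
As a rough illustration of the concept (and not a description of any vendor’s actual device), the sketch below models a resistive crossbar that stores a weight matrix as conductances and performs a matrix-vector multiply in place when input voltages are applied. The array size, noise figures, and byte-count comparison are illustrative assumptions.

```python
import numpy as np

# Toy model of compute-in-memory: a resistive crossbar stores a weight
# matrix as conductances, and a matrix-vector multiply happens "in place"
# when input voltages drive the rows. The current summed on each column is
# a dot product, so the stored operands never move to a CPU.

rng = np.random.default_rng(0)

weights = rng.uniform(0.0, 1.0, size=(256, 256))  # stored as conductances (toy units)
inputs = rng.uniform(0.0, 1.0, size=256)          # applied as row voltages

# Ideal analog result: I = G^T * V (Ohm's law plus Kirchhoff's current law per column).
ideal_currents = weights.T @ inputs

# A real array is non-ideal; model that with small conductance variation and read noise.
noisy_weights = weights * rng.normal(1.0, 0.02, size=weights.shape)
measured_currents = noisy_weights.T @ inputs + rng.normal(0.0, 0.01, size=256)

error = np.abs(measured_currents - ideal_currents) / np.abs(ideal_currents)
print(f"median relative error from analog non-idealities: {np.median(error):.3%}")

# Data-movement comparison (rough): a digital MVM reads every weight once,
# while the crossbar only moves the input and output vectors.
bytes_digital = (weights.size + inputs.size + ideal_currents.size) * 4
bytes_crossbar = (inputs.size + ideal_currents.size) * 4
print(f"bytes moved, digital read-all-weights: {bytes_digital}")
print(f"bytes moved, in-memory crossbar:       {bytes_crossbar}")
```

The comparison at the end is the point of the exercise: the weights stay put and only the input and output vectors move, which is where the power and latency savings come from, at the cost of analog noise that the application has to tolerate.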

There are other changes afoot, as well. Pre-processing data at the edge using accelerators and various types of processors, such as FPGAs, GPUs and DSPs, ultimately could mean less data that needs to be processed by one or more CPU chips or cores.
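
A minimal sketch of that idea, using made-up thresholds and signal statistics, might look like the following: the edge node keeps a rolling baseline, forwards only the samples that deviate from it, and sends a coarse per-window summary for everything else.

```python
import numpy as np

# Sketch of edge pre-processing: instead of streaming every raw sensor
# sample upstream, keep only samples that deviate from a rolling baseline,
# plus a coarse summary. All thresholds and signal levels are illustrative.

rng = np.random.default_rng(1)

raw = 20.0 + 0.1 * rng.standard_normal(100_000)  # mostly quiet sensor signal
raw[50_000:50_050] += 5.0                        # one short event worth reporting

window = 1_000
baseline = np.convolve(raw, np.ones(window) / window, mode="same")
anomalies = np.abs(raw - baseline) > 1.0         # keep only surprising samples

# What actually gets sent upstream: flagged samples plus per-window averages.
events = raw[anomalies]
summary = raw[: len(raw) // window * window].reshape(-1, window).mean(axis=1)

sent = events.size + summary.size
print(f"raw samples:   {raw.size}")
print(f"sent upstream: {sent}  ({sent / raw.size:.2%} of the raw stream)")
```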

Put in perspective, every technology and approach is being re-examined and rethought. But the big change is that none of this is happening in isolation. More efficient DRAM and SRAM, as well as advances in silicon photonics and more edge compute capabilities, could provide huge improvements in power, performance and throughput. And if the pieces are weighted differently and rearranged to match data flow for a specific application, the parts can be used much more efficiently.

This applies to a variety of new packaging options, as well. If two chips can be placed side by side with a high-speed interconnect, the distances that signals have to travel can be shortened and the signal path can be widened. That cuts down on many of the physical effects that plague single-chip scaling, but it also creates a platform for what ultimately could be mass customization.
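
A back-of-envelope calculation shows why shorter and wider helps. The delay of an unrepeated RC-limited wire grows roughly with the square of its length, because both resistance and capacitance scale with length, while the aggregate bandwidth of a parallel interface is simply the number of lanes times the per-lane rate. The parameter values in the sketch below are illustrative placeholders, not figures for any particular package or interconnect standard.

```python
# Rough back-of-envelope for why short, wide die-to-die links help.
# All numeric values are illustrative placeholders, not product specs.

def distributed_rc_delay(length_mm, r_per_mm=1_000.0, c_per_mm=0.2e-12):
    """Elmore-style delay of an unrepeated RC wire: ~0.38 * R_total * C_total.
    Both R and C grow with length, so delay grows with length squared."""
    r_total = r_per_mm * length_mm
    c_total = c_per_mm * length_mm
    return 0.38 * r_total * c_total

for length in (1.0, 5.0, 10.0):
    print(f"{length:4.1f} mm wire -> {distributed_rc_delay(length) * 1e12:7.1f} ps")

# Widening the path: aggregate bandwidth is lanes * per-lane rate, so a short
# parallel interface with many slow lanes can match a few very fast serial ones.
def bandwidth_gbps(lanes, gbit_per_lane):
    return lanes * gbit_per_lane

print(f"1024 lanes @ 2 Gb/s  -> {bandwidth_gbps(1024, 2):5d} Gb/s")
print(f"  16 lanes @ 32 Gb/s -> {bandwidth_gbps(16, 32):5d} Gb/s")
```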

All of these approaches will likely be on the table over the next few years. It’s possible all of them will be used for different applications, or that the market will weed out a bunch of them in favor of a few. But one constant here is that change is coming in a very big way, and power and performance will be the key variables.


