Into The Cold And Darkness

What comes after faster hardware and software?


The need for speed is limitless. There is far more data to process, and there is competition on a global scale to process it fastest and most efficiently. But how the next revs of improvement are achieved will look very different from the past.

For one thing, the new criteria for that speed frequently are tied to a fixed or shrinking power budget. This is why many benchmarks these days focus on power, whether that is performance per watt, picojoules per bit, or some measure of add/subtract/multiply operations per compute cycle. There is widespread debate about which benchmark is best for which task, but in general the strategy is to boost throughput between processors and memory, to do as much processing locally as possible so less data has to be sent to memory in the first place, and to improve the efficiency of both software and hardware.
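To make those metrics concrete, here is a minimal sketch in Python, using made-up numbers rather than figures from any particular benchmark, of how performance per watt and picojoules per bit are typically calculated for a notional accelerator and a notional memory link.

# Hypothetical illustration of the power-centric metrics described above.
# All numbers are invented; real benchmarks define their own workloads
# and measurement rules.

def perf_per_watt(ops_per_second: float, watts: float) -> float:
    """Sustained throughput divided by average power draw (e.g., FLOPS/W)."""
    return ops_per_second / watts

def picojoules_per_bit(watts: float, bits_per_second: float) -> float:
    """Energy spent moving a single bit across a link, in picojoules."""
    return (watts / bits_per_second) * 1e12  # joules -> picojoules

# Notional accelerator: 2 TFLOPS sustained at 50 W -> 40 GFLOPS/W.
print(f"{perf_per_watt(2e12, 50) / 1e9:.1f} GFLOPS/W")

# Notional memory link: 800 Gb/s consuming 4 W -> 5 pJ/bit.
print(f"{picojoules_per_bit(4, 800e9):.1f} pJ/bit")

Framed this way, it is easy to see why the strategy emphasizes local processing: moving a bit off-chip typically costs far more energy than operating on it in place.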

Hardware-software co-design is an idea that looked promising back in the early 1990s, but due to a variety of factors, ranging from the cost savings of commodity servers to how budgets were carved up inside of data centers, it fizzled before it really got started. Over the past few years it has come roaring back to life as AI and machine learning take root almost everywhere. Like most other approaches, it will provide one or two more generations of improvement. So will new heterogeneous architectures and packaging approaches. And continued device scaling, despite all the naysayers, will provide more room for massively parallel processing on a single die, with multiple types of accelerators and memories, and far more efficient computing in all respects.

But sooner or later, all of these improvements will run their course, and at that point the fundamental nature of big data processing will begin to change. It’s difficult to assess just how fast computers will be by then, or how much computing they will be able to do per watt. Most estimates point to multiple orders of magnitude of improvement in performance, give or take a couple of zeroes.

The question is what comes next, and the answers likely will be found in the quantum world, which in the past was considered the realm of theoretical physics. Quantum computing is one approach, although its commercial success will depend on the ability to develop large numbers of qubits that remain stable long enough to do serious work with consistency and uniform quality.

But there’s more that can be done at extremely cold temperatures, where interactions between various subatomic particles can be free of noise and many thermal effects. Materials behave completely differently at these temperatures, and even standard DRAM begins to look more like a superconductor than a typical piece of silicon.

This certainly isn’t something you’ll be carrying around in your pocket. In fact, you can’t buy liquid nitrogen without a good reason, and people certainly can’t work in that kind of environment. But in lights-out data centers, heavily insulated from the rest of the world, some amazing feats of speed are expected to become possible.

Utilizing this compute power from the outside will require massive bandwidth and throughput, along with significant improvements in infrastructure and communications technology. But for anyone who assumed that performance scaling is a thing of the past, we’ve barely scratched the surface.



1 comment

Dr. Randall Kirschman says:

I’ve been encouraging electronics engineers to consider low-temperature operation, where it makes sense. In the design process, temperature can be considered another design parameter for enhanced system performance.
