Von Neumann Upset

Just because an invention is no longer practical for many applications does not mean it wasn’t a good invention at the time.


My recent article about the von Neumann architecture received some quite passionate responses, including one from a reader who thought I was attempting to slight the man himself. That was most certainly not my intent, given that the invention enabled a period of very rapid advancement in computers, and in technology in general.

The processes of invention and engineering are quite similar and yet different. Invention often has very few constraints, but what is expected is something radically different from the past. Engineering takes those inventions and refines them as additional constraints are placed on them. At some point, the weight of those constraints can become crushing, and a new invention is required.

The world changes, and even von Neumann himself would probably be surprised that the architecture persisted as long as it did. Generality comes at great expense in some areas and provides what one hopes are equal if not greater benefits in other areas.

Generality is wasteful if you are not using that generality. The von Neumann architecture assumes that a program knows nothing about the past, or the future, beyond the data held within memory. This leads to what appears to be random access to memory, and that in turn shaped the path of memory development. There are a few small exceptions, such as NAND flash, which does not naturally support random access and needs a shadow RAM for some operations.

The instruction stream in a von Neumann architecture is better suited to control operations. The time and energy spent fetching those instructions is high compared to the time and power consumed in performing the operation itself, and over time that imbalance has only grown. We are transforming from a control-centric computing era to a data-driven one. One instruction per operation doesn't cut it anymore. Even custom ISAs that can group multiple instructions together are still wasteful. For applications where the temporal nature of access is known ahead of time, we can, and must, do much better.
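As a rough illustration of the "one instruction per operation" overhead, consider the sketch below. It is my own example, not from the article: the scalar loop pays a fetch/decode cost for every single data element, while the vector version amortizes that cost over eight elements per instruction. The function names and array size are arbitrary, and the code assumes an x86-64 machine with AVX (compile with -mavx).

```c
/*
 * Minimal sketch (author's illustration, not from the article) of how
 * a vector instruction amortizes instruction-fetch cost over many data
 * elements, versus one instruction per data element in the scalar loop.
 */
#include <immintrin.h>
#include <stdio.h>

#define N 1024

/* Scalar: one add instruction fetched and decoded per data element. */
static void add_scalar(const float *a, const float *b, float *out) {
    for (int i = 0; i < N; i++)
        out[i] = a[i] + b[i];
}

/* Vector: one AVX add covers eight elements, so the fetch/decode
 * energy is spread across eight useful operations. */
static void add_simd(const float *a, const float *b, float *out) {
    for (int i = 0; i < N; i += 8) {
        __m256 va = _mm256_loadu_ps(&a[i]);
        __m256 vb = _mm256_loadu_ps(&b[i]);
        _mm256_storeu_ps(&out[i], _mm256_add_ps(va, vb));
    }
}

int main(void) {
    static float a[N], b[N], out[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
    add_scalar(a, b, out);
    add_simd(a, b, out);
    printf("out[10] = %f\n", out[10]); /* expect 30.0 */
    return 0;
}
```

Even this only shifts the ratio; architectures that know the data-access pattern ahead of time can go much further.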

The biggest benefit came on the software side, where productivity soared thanks to the regularity and simplicity of the architecture. Single-threaded code is easy to write and to reuse, even across processor variants, and it enables libraries and other higher-level functions to be created. We are still running COBOL software written back in the '60s.

But the world is becoming more power conscious as climate change begins to threaten our way of life. Global warming, rising seas, and more unpredictable weather are affecting everyone, costing the world both lives and material damage. We cannot afford to waste energy the way we have in the past, and technologies like AI/ML, while providing some benefits, are right now probably costing the environment more than they return. That has to change. Running AI/ML on a von Neumann architecture is not only slow, it is wasteful. Thankfully, the industry agrees that this is not the right way forward.

Change takes time, and the inertia of the system means that moving away from the ways of the past adds risk. We need new compute architectures, new memory architectures, and even new memories.

Perhaps the biggest change is that we need to start teaching a new generation of software engineers who are not constrained by the notion of single-threaded execution, or by the notion of a single, contiguous, almost limitless memory, and who accept that what they do consumes power and that waste is expensive. Today, indirectly, software engineers are responsible for about 10% of worldwide power consumption, and that number is rising rapidly. It has to stop.


