Terminology Beyond von Neumann

Neuromorphic computing and neural networks are not the same thing.


Neural networks. Neuromorphic computing. Non-von Neumann architectures. As I’ve been researching my series on neuromorphic computing, I’ve encountered a lot of new terminology. It hasn’t always been easy to figure out exactly what’s being discussed. This explainer attempts both to clarify the terms used in my own articles and to help others sort through the rapidly growing literature in the field.

The first challenge is to differentiate between hardware and software. For example, a neural network might be described as consisting of several layers of interconnected nodes. The first layer might apply appropriate weights to each pixel of an image, passing the results to the next layer. Nodes in subsequent layers might then combine their inputs into a weighted sum of the individual contributions.
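The layered weighted-sum idea can be sketched in a few lines of plain Python. This is purely illustrative: the weights and "image" values are arbitrary, and a real network would also apply a nonlinearity at each layer.

```python
# Illustrative sketch of the weighted-sum layers described above.
# All values are arbitrary; real networks would add a nonlinearity.

def layer(weights, inputs):
    """Each node computes a weighted sum of all its inputs."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

pixels = [0.0, 0.5, 1.0, 0.5]        # a tiny 4-pixel "image"
w1 = [[0.1, 0.2, 0.3, 0.4],          # first layer: 2 nodes, one weight per pixel
      [0.4, 0.3, 0.2, 0.1]]
w2 = [[1.0, -1.0]]                   # second layer combines the two node outputs

hidden = layer(w1, pixels)           # weighted sums over the pixels
output = layer(w2, hidden)           # weighted sum of the layer-1 results
print(output)
```

Note that nothing here requires special hardware: each layer is just a matrix-vector product.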

Neural networks like this can accomplish useful tasks, but they don’t necessarily have anything to do with neuromorphic computing. All of the steps described can be accomplished by completely standard hardware performing routine matrix multiplications and sums. As the size and complexity of the data increases, the matrices get larger, but the basic concept can be implemented with very minimal resources.

Most computers available today are based on the so-called von Neumann architecture. In this model, there is a strict separation between the computation unit and memory. For each operation, the computation unit loads both data and processing instructions into registers. The result is either stored in another register for the next operation, or saved back to memory until needed again. A modern microprocessor might have multiple computation units, 64-bit registers, and vast amounts of silicon devoted to highly specialized instructions, but the fundamental model hasn’t changed since the early days of digital computing.

Therein lies the issue. No matter how fast the computation unit or how vast the memory array, calculations are ultimately limited by the bandwidth of the bus connecting the two. When, as in neural networks, the calculations involve simple operations on large data sets, the time needed to read and write the computation registers can dwarf actual computation time. Non-von Neumann architectures seek a way around this bottleneck. What if there were a way to combine computation and data storage in a single circuit block? A data signal might be applied to an array of gates, which transmits it through a chain of predetermined logic operations. (As in an FPGA.) Or, what if instead of summing and multiplying matrices, memory elements that were activated more frequently became stronger until they reached a tipping point?
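The second alternative — memory elements that strengthen with use — can be sketched with a hypothetical Hebbian-style rule. The increment and threshold here are invented for illustration and do not model any specific device:

```python
# Hypothetical sketch of "strengthen with use": a memory element whose
# stored weight grows on each activation and which fires (reaches its
# tipping point) once the weight crosses a threshold.

class MemoryElement:
    def __init__(self, strengthen=0.25, threshold=1.0):
        self.weight = 0.0
        self.strengthen = strengthen   # increment per activation
        self.threshold = threshold     # the "tipping point"

    def activate(self):
        self.weight += self.strengthen
        return self.weight >= self.threshold   # True once the element fires

cell = MemoryElement()
fired = [cell.activate() for _ in range(5)]
print(fired)   # [False, False, False, True, True]
```

Storage (the weight) and computation (the threshold test) happen in the same element, with no bus between them.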

Both of these examples offer alternatives to the traditional model. Readers of part one of this series may recognize the resemblance between the second example and biological brains.

Biological brains have no discrete memory elements and no central computation unit. They are highly interconnected assemblies of neurons and synapses, communicating by pulses of ion current mediated by neurotransmitters. Though some sophisticated capabilities, like facial recognition, appear to be innate, human brains are largely self-programming, continuing to build associations between experiences over the course of the individual’s life. Biological brains also forget: connections can decay if not reinforced. There is a degree of randomness in biological brains, too, that neuroscientists believe may be necessary for creativity and problem solving.

Neuromorphic computing seeks not only to replicate the results that biological brains can achieve, but to do so using analogous mechanisms. Leakage and random noise are detrimental in traditional architectures, but may be essential here. As the next article in the series will discuss, the inconsistent performance of phase-change and resistive memories has made them leading candidates for use in artificial synapses and neurons.
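One common abstraction that treats leakage and noise as features rather than defects is the leaky integrate-and-fire neuron. The sketch below is hypothetical, with arbitrary parameters; it is not a model of any particular device:

```python
import random

# Hypothetical leaky integrate-and-fire neuron. Leakage and random noise,
# liabilities in a von Neumann design, are modeled here as essential behavior.
def simulate(steps, current=0.3, leak=0.9, threshold=1.0, noise=0.05, seed=42):
    random.seed(seed)
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = v * leak + current + random.gauss(0.0, noise)  # leak, then integrate input
        if v >= threshold:        # membrane potential crosses its tipping point
            spikes += 1
            v = 0.0               # reset after the spike, like a biological neuron
    return spikes

print(simulate(100))              # fires repeatedly as charge accumulates
```

Between spikes the potential decays toward zero, so connections that are not driven often enough never fire — a crude analog of the forgetting described above.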

