Neuromorphic Computing: Modeling The Brain

Competing models vie to show how the brain works, but none is perfect.


Can you tell the difference between a pedestrian and a bicycle? How about between a skunk and a black and white cat? Or between your neighbor’s dog and a colt or fawn? Of course you can, and you probably can do that without much conscious thought. Humans are very good at interpreting the world around them, both visually and through other sensory input. 

Computers are not. Though their sheer calculation speed surpassed that of human “calculators” long ago, large data centers equipped with terabyte-scale databases are only beginning to match the image recognition capabilities of an average human child. 

Meanwhile, humans are creating larger and more complex digital archives and asking more complex questions about them. How do you find the photo you want in a collection of thousands? How does a music service answer a customer request for “more like this?” How can computers support technical decision making when the source data is often noisy and ambiguous? 

Neuromorphic computing seeks to build systems informed by the architecture of biological brains. Such systems have the potential to analyze data sets more rapidly, more accurately, and with fewer computing resources than conventional analysis. 

In the current state of the art, people who discuss neuromorphic computing and big data analysis are usually talking about neural networks. While current-generation neural networks are important for practical problem solving and will be discussed in a future article, they bear little resemblance to biological brains.

Fig. 1: Neuron cell diagram. Source: Wikimedia Commons.

How neurons work
The first important difference is the sheer scale of connectivity in biological brains. The body of a nerve cell sits at the center of a web of fibers: a branching axon that carries outgoing signals, and potentially thousands of dendrites that receive incoming ones. An axon terminal can connect to a dendrite of a neighboring neuron across a junction known as a synapse. Though electronic analogues often treat this web of connections as fixed, it is not. Synaptic connections are made and broken constantly. As Jeff Hawkins, co-founder of Numenta, explained in a talk at the 2015 IEEE International Electron Devices Meeting, “[Biological] memory is a wiring problem, not a storage problem,” and a large wiring problem at that.

In the human neocortex—responsible for functions like sensory perception, spatial reasoning, and language—there are billions of neurons, each of which may communicate with thousands of neighbors. The neocortex alone has trillions of synaptic connections; the brain as a whole has even more. For comparison, the largest server-based neural networks have about 11 billion connections.

Furthermore, the brain is an analog system. Transistors in digital circuits are either on or off, and memory elements store either a 1 or a 0. Synaptic connections are not directly equivalent to memory capacitors, but they can be strong or weak, and they can be reinforced or depressed in response to stimuli.

More precisely, neurons communicate through electrical currents resulting from the flow of sodium and potassium ions. Ion concentrations differ between the intracellular and extracellular fluids. When a pre-synaptic neuron releases a neurotransmitter compound, ion channels in the post-synaptic neuron open or close, increasing or decreasing the flow of ions between the cell and the extracellular fluid. Doo Seok Jeong, senior scientist at the Korea Institute of Science and Technology, explained that the cell membrane of the post-synaptic neuron acts as a capacitor. Charge accumulates until a critical threshold is reached, at which point a “synaptic current” spike propagates along the neural fibers to other synapses and other neurons.

The capacitor will charge and discharge repeatedly until the neurotransmitter concentration dissipates, so the synaptic current actually consists of a chain of related spikes. The length of the chain and the frequency of individual spikes depend on the original stimulus. The response of a particular neuron to a particular synaptic current chain is generally not linear. The relationship between the input and output signals is the “gain” of the neuron.

It must be emphasized, though, that the relationship between external stimuli and synaptic current is not clear. Biological brains produce chains of synaptic current spikes that appear to encode information. But it is not possible to draw a line between the image of “cat” received by the photoreceptors in the retina and a specific pattern of synaptic spikes generated by the visual cortex, much less the positive and negative associations with “catness” that the image might produce elsewhere in the brain. A number of factors, such as the non-uniform cell membrane potential, introduce “noise” into the signal and cause the loss of some information. However, the brain clearly has mechanisms for extracting critical information from noisy data, for discarding irrelevant stimuli, and for accommodating noise-induced data loss. The biological basis for these mechanisms is not known at this time. 

Firing synaptic current spikes
In modeling the brain, at least two levels need to be considered. The first is the biological mechanism by which chains of synaptic current are generated and propagated. The second is the role of these spikes in memory and learning. Both levels face a tradeoff between biological accuracy and computational efficiency. For example, many commercial neural networks use a “leaky integrate and fire” (LIF) model to describe the propagation of synaptic spikes. Each neuron has a pre-determined threshold, and will “fire” a synaptic signal to its neighbors when that threshold is exceeded. In electronic networks, similarly, each neuron applies pre-determined weights to input signals to determine the output signal. Rapid determination of the appropriate weights for a particular problem is one of the central challenges of neural network design, but once the weights are known, the output signal is simply the dot product of the input signal with the weight matrix.
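The LIF scheme is simple enough to sketch in a few lines. The Python fragment below is a minimal illustration rather than any particular implementation; the threshold, time constant, and input values are all arbitrary:

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3,
               v_rest=0.0, v_reset=0.0, threshold=1.0):
    """Leaky integrate-and-fire neuron (illustrative parameters).

    The membrane potential decays toward v_rest (the "leak") while
    integrating the input; when it crosses the threshold, the
    neuron fires and the potential resets.
    """
    v = v_rest
    spikes = np.zeros_like(input_current)
    for i, current in enumerate(input_current):
        v += (dt / tau) * (v_rest - v + current)  # leaky integration
        if v >= threshold:
            spikes[i] = 1.0   # fire...
            v = v_reset       # ...and reset
    return spikes

# Constant drive above threshold -> a regular spike train;
# drive below threshold -> no spikes at all.
print(int(lif_neuron(np.full(1000, 1.5)).sum()),
      int(lif_neuron(np.full(1000, 0.5)).sum()))
```

A constant supra-threshold input produces a regular spike train, while a sub-threshold input never fires at all, which is exactly the all-or-nothing behavior described above.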

This approach is computationally efficient, but not biologically realistic. Among other things, the LIF model ignores the timing of synaptic spikes, and therefore the causal relationship between them. That is, signal “A” may precede or follow signal “B” and the response of biological neurons will depend on both the relative strength and the relative timing of the two signals. A strict LIF model will only recognize whether the combination of the two exceeded the node’s threshold. The biological behavior is analog in nature, while the electronic behavior of conventional neural networks is not. 

Two alternatives to the LIF model incorporate additional biophysical pathways, increasing biological realism at the expense of computational efficiency. The spiking neuron model takes into account the cell membrane’s recovery rate—how quickly the membrane potential returns to its nominal value. This model can describe different kinds of neurons, yet it preserves computational efficiency by tracking only the membrane potential and a single recovery variable, rather than individual ion channels.
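The best-known model of this type is the two-variable formulation popularized by Eugene Izhikevich. Assuming that is the model meant here, it can be sketched as follows, using his standard “regular spiking” parameter set:

```python
import numpy as np

def izhikevich(input_current, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Two-variable spiking neuron (Izhikevich 'regular spiking' set).

    v is the membrane potential in mV; u is the recovery variable
    that pulls v back toward rest.  When v reaches the spike peak,
    v resets to c and u is incremented by d.  Different choices of
    a-d reproduce different firing behaviors (bursting, chattering,
    and so on).
    """
    v, u = c, b * c
    spike_times = []
    for step, current in enumerate(input_current):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike peak: record and reset
            spike_times.append(step * dt)
            v = c
            u += d
    return spike_times

# A sustained current step produces tonic firing; no input, no spikes.
print(len(izhikevich(np.full(2000, 10.0))),   # ~1 s at dt = 0.5 ms
      len(izhikevich(np.zeros(2000))))
```

Per time step this costs only a handful of arithmetic operations, yet the recovery variable gives it a repertoire of firing patterns the plain LIF model cannot produce.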

A much more sophisticated alternative, the Hodgkin-Huxley model, considers several different biophysical contributions, including membrane potential and the sodium and potassium ion currents. It establishes the dependence between the conductance of ion channels and the membrane potential. Further extensions of the original Hodgkin-Huxley model recognize several different potassium and sodium currents and incorporate neurotransmitters and their receptors. The HH model is substantially more realistic, but also much more computationally complex. 

These three models describe the fundamental mechanisms of synaptic current generation and propagation in increasing levels of detail. For the processes we call “thinking”—memory, learning, analysis—an additional step is required. In biological brains, this is synaptic plasticity: the ability of the brain to strengthen and weaken, break and remake, synaptic connections. The chains of synaptic spikes provide the input for learning rules, the next level in brain modeling.

From current to data: synaptic plasticity
One of the most basic learning rules—proposed in 1982 by Brown University researchers Elie Bienenstock, Leon Cooper, and Paul Munro (BCM)—expresses synaptic change as a product of the pre-synaptic activity and a nonlinear function of post-synaptic activity. It is expressed in terms of firing rates, and cannot predict timing dependent modification of synapses. 
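In code, the BCM rule reduces to a one-line weight update plus a sliding threshold. The sketch below uses illustrative learning rates, and takes a running average of the squared post-synaptic rate as the threshold, one common choice:

```python
import numpy as np

def bcm_update(w, x, theta, eta=1e-3, tau_theta=100.0):
    """One step of the BCM learning rule.

    The weight change is the product of pre-synaptic activity x and
    a nonlinear function of post-synaptic activity y: positive when
    y exceeds the sliding threshold theta, negative below it.
    """
    y = np.dot(w, x)                       # post-synaptic firing rate
    w = w + eta * x * y * (y - theta)      # BCM weight change
    theta += (y**2 - theta) / tau_theta    # sliding threshold ~ <y^2>
    return w, theta

rng = np.random.default_rng(0)
w, theta = np.full(10, 0.5), 1.0
for _ in range(1000):
    x = rng.random(10)                     # noisy pre-synaptic rates
    w, theta = bcm_update(w, x, theta)
```

Because the threshold slides with the average post-synaptic activity, the rule is self-stabilizing: strongly driven synapses eventually raise the threshold and cap their own growth. But since everything is expressed in firing rates, spike timing never enters the update.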

A somewhat more sophisticated model, spike timing-dependent plasticity (STDP), recognizes that the relative timing of two signals also matters. Is a positive or negative experience associated with a particular stimulus? How closely? These details affect the relative strengths of synaptic connections. The most basic STDP models compare the timing of pairs of spikes. If the pre-synaptic spike comes before the post-synaptic spike, the connection is enhanced. Otherwise, it is weakened. However, the basic STDP model does not reproduce experimental data as well as the BCM model does.
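A minimal pair-based STDP window can be written directly from that description. The amplitudes and time constants below are illustrative:

```python
import numpy as np

def stdp_pair(delta_t, a_plus=0.01, a_minus=0.012,
              tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window.

    delta_t = t_post - t_pre, in milliseconds.  If the pre-synaptic
    spike precedes the post-synaptic one (delta_t > 0), the synapse
    is potentiated; if it follows, the synapse is depressed.  Either
    effect decays exponentially with the spike-time separation.
    """
    if delta_t > 0:                        # pre before post: strengthen
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)   # otherwise weaken

# Causal pairing strengthens, anti-causal pairing weakens.
print(stdp_pair(5.0) > 0, stdp_pair(-5.0) < 0)   # True True
```

The exponential decay captures the “how closely?” question: spike pairs separated by much more than the time constant barely change the synapse at all.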

One proposed modification, a triplet-based STDP learning rule, compares groups of three spikes, rather than pairs. It behaves as a generalized BCM rule in that post-synaptic neurons respond to both input spiking patterns and correlations between input and output spikes. These higher order correlations are ubiquitous in natural stimuli, so it’s not surprising that the triplet rule reproduces experimental data more accurately.  
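A minimal triplet rule in the spirit of the Pfister-Gerstner formulation can be sketched as follows. Alongside the usual pair traces, a slower post-synaptic trace makes potentiation depend on a pre spike followed by two post spikes; all constants here are illustrative:

```python
import numpy as np

def triplet_stdp(pre_spikes, post_spikes, dt=1.0, w=0.5,
                 tau_plus=16.8, tau_minus=33.7, tau_y=114.0,
                 a2_minus=7e-3, a2_plus=5e-3, a3_plus=6e-3):
    """Minimal triplet STDP rule (sketch; illustrative constants).

    r1 is a pre-synaptic trace, o1 a post-synaptic trace (the usual
    pair terms), and o2 a slower post-synaptic trace.  Potentiation
    on a post spike scales with both r1 and o2, so it depends on
    spike *triplets*: a pre spike followed by two post spikes.
    pre_spikes and post_spikes are 0/1 arrays on a shared time grid.
    """
    r1 = o1 = o2 = 0.0
    for pre, post in zip(pre_spikes, post_spikes):
        # All traces decay exponentially between spikes.
        r1 *= np.exp(-dt / tau_plus)
        o1 *= np.exp(-dt / tau_minus)
        o2 *= np.exp(-dt / tau_y)
        if pre:
            w -= o1 * a2_minus                    # pair-based depression
            r1 += 1.0
        if post:
            w += r1 * (a2_plus + a3_plus * o2)    # pair + triplet terms
            o1 += 1.0
            o2 += 1.0
    return w
```

Because the o2 trace grows with the post-synaptic firing rate, repeated pre-before-post pairings at high rates potentiate more strongly than the same pairings at low rates, which is the BCM-like, rate-dependent behavior the triplet rule recovers.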

Which of these models and learning rules is the “best” choice largely depends on the situation. Neuroscientists seek to develop models that can accurately reproduce the behavior of biological brains, hoping to gain insight into the biological mechanisms behind human psychology. Neuromorphic computing seeks to use biological mechanisms to inform the architecture of electronic systems, ultimately deriving improved solutions to practical data analysis problems. Reproduction of a specific chain of synaptic spikes or a specific learning behavior is secondary to accuracy and computational efficiency.

Part two of this series will show why the “best” neural networks are not necessarily the ones with the most “brain-like” behavior. 


  • John Alley

    Outstanding article. Was particularly intrigued with the HH model. I have long thought that memristors will be used in neuromorphic hardware for many purposes, but most fundamentally to allow the ready representation of real (number) analog thresholding.

    I am increasingly interested in the application of memristors to neuromorphics. Pointers would be appreciated.

    • kderbyshire

      Keep reading. Future articles in the series will look at some of the proposed hardware implementations of neuromorphic systems, and memristors are most definitely on the list.

  • Katherine, enjoyed the article. The human brain is stunningly complex, to the point that I cannot imagine it self-assembling based on random chance; the complexity appears to be designed.

    • kderbyshire

      An interesting question, but one that falls outside the scope of this article, or this site.

  • memister

    Great article. Just wondering, though: in a big data environment, where the data are NOT pre-correlated (as in an image), is something neuromorphic most suitable? It seems it could waste a lot of synaptic connections.

    • kderbyshire

      Since neuromorphic architectures are in their infancy, I’m not sure anyone yet knows exactly which applications will be the best fit. And “self-learning” neural networks that can discover patterns on their own are even less mature than “trained” networks. This is all an active area of research that I’ll look at in future articles in the series.

      One aspect of biological brains is that the number of potential synaptic connections is so huge that it doesn’t really matter whether they are used “efficiently.” Discovering connections in uncorrelated data is one definition of “creativity.” But neuromorphic computing has a long way to go to meet that standard.