Toward Neuromorphic Designs

From synapses to circuits with memristors.


Part one of this series considered the mechanisms of learning and memory in biological brains.

Each neuron has many fibers, which connect to adjacent neurons at synapses. The concentration of ions such as potassium and calcium inside the cell is different from the concentration outside.

The cellular membrane thus serves as a capacitor. When a stimulus is received, the neuron releases neurotransmitters that enhance or depress the flow of ions through the cellular membranes of nearby neurons, changing the potential difference. If the accumulated ions at a cellular membrane exceed a threshold value, a current spike discharges, propagating along the neural fibers. Accumulation and discharge continue to occur, forming a series of current spikes until the neurotransmitter dissipates.
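This accumulate-and-discharge cycle is often abstracted as a leaky integrate-and-fire neuron. A minimal sketch, with illustrative (not biologically measured) threshold and leak values:

```python
# Minimal leaky integrate-and-fire model of the accumulate/discharge cycle.
# All constants are illustrative, chosen only to show the mechanism.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate the input current each step; emit a spike and reset
    the membrane potential whenever the threshold is crossed."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)                    # discharge: current spike
            potential = 0.0                     # membrane resets
        else:
            spikes.append(0)
    return spikes

# A sustained stimulus produces a train of spikes; once the input
# dissipates, the potential decays and spiking stops.
train = simulate_lif([0.4] * 10 + [0.0] * 5)
```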

Biological brains are organized in layers, so it is possible to say that a particular neuron lies “before” or “after” a given synapse. The optic nerve receives visual signals before the visual cortex, for example. Any given synapse might receive current spikes from both pre- and post-synaptic neurons.

The relative timing of these spikes is a measure of the correlation between them. If a signal from the post-synaptic neuron is received after the pre-synaptic signal, then one may depend on the other and the connection becomes stronger. If there is no correlation, the connection may become weaker. This synaptic “plasticity” is believed to be fundamental to such “intelligent” processes as memory, learning, and creativity.
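The timing rule described above is commonly formalized as spike-timing-dependent plasticity (STDP), in which the weight change decays exponentially with the spike interval. A sketch with illustrative amplitudes and time constant:

```python
import math

# Pair-based STDP sketch: the sign of the weight change depends on
# whether the post-synaptic spike follows or precedes the pre-synaptic
# one. Amplitudes and the time constant are illustrative.

def stdp_delta(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    dt = t_post - t_pre                        # positive: post fires after pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # potentiation: strengthen
    return -a_minus * math.exp(dt / tau)       # depression: weaken

# Post following pre strengthens the connection; the reverse weakens it,
# and the effect fades as the spikes become less correlated in time.
```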

Ion channels and filamentary RRAM
Designers of neuromorphic systems seek to replicate neural mechanisms in electronic hardware. As a first step, individual “neurons” must be able to sum the inputs from upstream devices, apply predetermined weights, and pass the result to downstream devices. While the exact connections between artificial neurons depend on the specific network design, the process is mathematically a straightforward matrix multiplication.
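The weighted-sum operation reduces to a matrix-vector product: each output neuron is the dot product of its weight row with the input vector. A pure-Python sketch with illustrative weights:

```python
# One neuron layer as a matrix-vector product: each output is the
# weighted sum of all upstream activations. Weights are illustrative.

def layer_output(weights, inputs):
    """weights: one row of synaptic weights per output neuron;
    inputs: activations received from upstream devices."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

W = [[0.5, -0.25],
     [0.25,  0.5]]
y = layer_output(W, [2.0, 4.0])  # -> [0.0, 2.5]
```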

Nonetheless, as the size of the matrix increases, performance is ultimately limited by the von Neumann bottleneck—the need to read both the input signal and the matrix of synaptic weights. This bottleneck provides one motivation for the introduction of memristor-based architectures. If the conductance of a circuit element is changed by current flow through the element — a characteristic of phase change memory and OxRAM, among other devices — then the device can serve as an accumulator, requiring a predefined number of current pulses to switch from zero to one. Ohm’s Law then defines the output current of the device as the analog product of the applied voltage and the device conductance.
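In a memristor crossbar this becomes an in-place multiply-accumulate: Ohm's law (I = G·V) performs each multiplication at the device, and Kirchhoff's current law sums the currents along each column, so the weights never have to be fetched. A sketch with illustrative conductances and voltages:

```python
# Crossbar multiply-accumulate sketch: Ohm's law (I = G * V) does the
# multiplication at each device, and summing along a column models
# Kirchhoff's current law. All values are illustrative.

def crossbar_currents(conductances, voltages):
    """conductances[i][j]: device at row i, column j (siemens);
    voltages[i]: voltage applied to row i. Returns column currents."""
    n_cols = len(conductances[0])
    return [sum(conductances[i][j] * voltages[i]
                for i in range(len(voltages)))
            for j in range(n_cols)]

G = [[1e-6, 2e-6],
     [3e-6, 4e-6]]
I = crossbar_currents(G, [0.5, 0.25])  # amperes flowing out of each column
```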

While this simple mechanism can help overcome the von Neumann bottleneck, that alone cannot emulate the behavior seen in biological brains. Biological synapses become stronger with use but weaker if not reinforced. The effect of a particular series of ion current spikes in a biological system depends on timing, but also on the neurotransmitter concentration in the system when the spike occurs; the same sound can be irritating or soothing depending on context.

Reproducing more complex biological behaviors requires a closer look at the physics of memristor devices. In most proposed devices, a conducting filament forms under the influence of an external stimulus. In OxRAM, oxygen vacancies migrate into or out of the filament region, making it more or less resistive. In phase change memory (PCM), current pulses heat a chalcogenide material, driving formation of a more conductive crystalline phase. In conductive bridge memory (CBRAM), ions flow through an electrolyte layer between an anode and cathode.

In all of these cases, repeated signals create a thicker, more stable conductive filament. This reinforcement of connections can be compared to the biological transition between short-term and long-term memory. Devices can be designed to allow independent control of the conductance and stability of the filament. For example, Selina La Barbera, a research engineer at CEA-Leti, and her colleagues at the French National Center for Scientific Research investigated dendritic filament formation in solid-state electrochemical cells. In their devices, the conductive path might consist of many fine, relatively unstable filaments or a single thick fiber, depending on the history of the device. The applied input current controlled the average conductance of the device, while the number of pulses defined the filament diameter and stability. More complex filament structures can begin to capture the richness of biological synapse behavior.
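The two-knob behavior — amplitude setting the conductance, pulse count setting the stability — can be caricatured in a toy model. This is purely illustrative, not the published device physics:

```python
# Toy model of independently controlled conductance and stability:
# pulse amplitude sets the conductance, while repeated pulses thicken
# the filament and slow its decay. Entirely illustrative constants.

class FilamentSynapse:
    def __init__(self):
        self.conductance = 0.0
        self.stability = 0.0        # 0 = volatile, approaches 1 = stable

    def pulse(self, amplitude, n_pulses):
        self.conductance = amplitude          # amplitude sets conductance
        for _ in range(n_pulses):             # each pulse stabilizes the
            self.stability += (1.0 - self.stability) * 0.2  # filament a bit

    def relax(self):
        """Unreinforced filaments decay, like short-term memory fading."""
        self.conductance *= self.stability

# Same amplitude, different pulse counts: the heavily pulsed device
# retains far more of its conductance after relaxation.
```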

Serendipity and process variation
The next important aspect of biological synapses is their randomness. Struggles to find the right word and sudden flashes of insight both demonstrate that synapses do not always respond to stimuli in a predictable way. Biological randomness comes from local variations in the distribution of ions and neurotransmitters, or in the permeability of cell membranes. Combined, these effects give synaptic behavior a stochastic character.

In biological brains, the variability of ion flow contributes to the formation of chains of current spikes, rather than isolated signals, and thus to the brain’s ability to detect relationships between signals. The gradual decay of existing connections allows new connections to form. Randomness allows networks to recover from errors or even damage. A noisy signal might affect some connections, but not all, leaving enough information for the network to determine the correct result.

Electronic devices, in contrast, are valued for their predictability. Transistors are expected to turn on or off at clearly defined voltages, and calculations are expected to give the same result every time. Memristors have so far had limited success in conventional memory applications in part because they have struggled to achieve consistent, repeatable switching performance. The percolation mechanisms involved in filament formation are inherently random, and defects and process variability cause switching performance to vary from device to device and cycle to cycle. For neuromorphic applications, however, this randomness becomes an advantage.


Fig. 1: Memristor concept. Source: Wikipedia

Conventional neural networks begin with random weights to encourage the network to explore all corners of its parameter space, rather than converging to a local minimum. Creating true randomness in an electronic circuit is difficult, though, requiring dedicated computation resources. As Vivek Parmar and Manan Suri at the Indian Institute of Technology Delhi noted, memristors can simplify the introduction of randomness in neural network circuits. In neuromorphic circuits, the stochastic response of memristor devices can support brain-like plasticity.
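One way to picture this stochastic response: a bistable device that flips only with some probability per programming pulse, so the same pulse train switches on different cycles in different trials. A sketch with an illustrative switching probability:

```python
import random

# Stochastic switching sketch: each programming pulse flips a bistable
# device only with probability p_switch, so the number of pulses needed
# varies from cycle to cycle. The probability value is illustrative.

def pulses_to_switch(p_switch=0.3, rng=random.random):
    """Count pulses until the device flips state."""
    count = 1
    while rng() >= p_switch:
        count += 1
    return count

random.seed(0)  # fixed seed so the sketch is reproducible
trials = [pulses_to_switch() for _ in range(1000)]
# Switching time varies trial to trial, with a mean near 1/p_switch pulses.
```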


Fig. 2: Memristor prototype. Source: HP Labs

To summarize, conductance in memristors changes with applied current, allowing them to behave as accumulating “neurons” and avoid the von Neumann bottleneck. With thoughtful device design, their behavior can emulate the history-dependent complexity seen in biological connections. And their inherent randomness resembles the stochastic behavior of ion current spikes. One important characteristic remains—the ability to forget, to RESET a memristor to the non-conductive state as easily as it was SET to conduct.

Remaking memristor synapses
An ideal device would transition continuously and uniformly from fully insulating to maximum conductance and back. Electrical pulses could be used to program the device to the desired value and to adjust the resistance upward or downward in precise increments. Sadly, real devices stubbornly fail to conform to this ideal. In particular, continuous conduction behavior has been difficult to achieve. In phase change memory, for instance, the SET and RESET transitions depend on different physical transformations. To SET the device, pulses heat and crystallize the chalcogenide layer. This phase transition can progress one step at a time, according to a clearly defined pulse sequence. RESETting the device, however, requires melting the conductive filament, followed by rapid quenching to preserve the amorphous state. It is not a progressive transition. In OxRAM, analog storage of multiple stable values requires gradually increasing programming currents. The most stable behavior is found in the fully “ON” and “OFF” states.
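The SET/RESET asymmetry in PCM can be caricatured as a cell whose conductance climbs in small increments but collapses in a single step. A sketch with an illustrative step size:

```python
# Caricature of PCM asymmetry: SET pulses crystallize the chalcogenide
# incrementally, but RESET melt-quenches it in one abrupt step.
# The step size is illustrative.

class PCMCell:
    def __init__(self, step=0.2):
        self.conductance = 0.0  # normalized: 0 = amorphous, 1 = crystalline
        self.step = step

    def set_pulse(self):
        # gradual: each pulse crystallizes a bit more material
        self.conductance = min(1.0, self.conductance + self.step)

    def reset(self):
        # abrupt: melt and quench back to the fully amorphous state
        self.conductance = 0.0
```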

Artificial synapse designs must accommodate the characteristics of the devices being used. One way to do this is by using two or more memristors to emulate a single synapse.

One design using PCM devices connects two memristors in opposite directions, so that a given pulse pushes one or the other toward crystallization, but not both. This combination can model both enhanced and depressed synaptic weights. When either device reaches maximum conductance, the circuit resets both, then supplies enough pulses to restore the relative weights of the pair.
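A sketch of this differential scheme, with hypothetical step size and conductance range (the class and its methods are illustrative names, not from a published design):

```python
# Differential two-device synapse sketch: one device is pulsed to raise
# the weight, the other to lower it, and the effective weight is the
# difference of the two conductances. When either device saturates, both
# are reset and the difference is re-programmed. Values are illustrative.

class DifferentialSynapse:
    def __init__(self, step=0.1, g_max=1.0):
        self.g_pos = 0.0
        self.g_neg = 0.0
        self.step, self.g_max = step, g_max

    @property
    def weight(self):
        return self.g_pos - self.g_neg

    def potentiate(self):
        self.g_pos = min(self.g_max, self.g_pos + self.step)
        self._refresh_if_saturated()

    def depress(self):
        self.g_neg = min(self.g_max, self.g_neg + self.step)
        self._refresh_if_saturated()

    def _refresh_if_saturated(self):
        if self.g_pos >= self.g_max or self.g_neg >= self.g_max:
            # reset both, then restore the relative weight with SET pulses
            diff = self.g_pos - self.g_neg
            self.g_pos = max(diff, 0.0)
            self.g_neg = max(-diff, 0.0)
```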

In another approach, Johannes Bill and Robert Legenstein at the Graz University of Technology in Austria connected multiple bistable memristors in parallel. While each device individually behaves in a binary manner, a single pulse will affect them differently, causing some to transition and not others. This use of multiple memristors is analogous to the multiple ion channels associated with a single biological synapse.
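The idea can be sketched as a compound synapse: several binary devices share one programming pulse, each flipping with some probability, so the summed conductance moves in graded, stochastic steps. Switching probability and device count here are illustrative, not from the paper:

```python
import random

# Compound-synapse sketch: parallel bistable devices share one pulse,
# and each still-OFF device flips with probability p_switch, so the
# summed conductance is a graded, stochastic weight. Values illustrative.

def apply_pulse(states, p_switch=0.3, rng=random.random):
    """One shared SET pulse over a list of binary device states (0/1)."""
    return [1 if s == 1 or rng() < p_switch else 0 for s in states]

random.seed(1)  # fixed seed so the sketch is reproducible
synapse = [0] * 8              # eight binary memristors in parallel
for _ in range(4):
    synapse = apply_pulse(synapse)
weight = sum(synapse)          # graded weight built from binary devices
```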

From single proof-of-concept memristor-based “synapses,” the next step is a complete system that can begin to address practical problems. The next installment in this series will consider practical neuromorphic systems in more detail.





yt says:

Thank you for such an impressive article. It gave me many insights.

One point I’d like to clarify concerns randomness versus the ideal device characteristic. If the brain-mimicking model includes randomness or stochastic behavior, then the device or its system should implement randomness. So far, however, most neuromorphic work deals with static networks (DNNs, CNNs, and so on), so the device is required to have the good linearity and repeatability you mention in the article.
Randomness is a somewhat different issue that relates to how the system is modeled. For example, in a spiking model, stochasticity may be more meaningful for emulating the human brain, in combination with an STDP learning model and so on.

Just my comment, and I’m looking forward to reading your upcoming articles.
Best Regards
yt
