Zeroing In On Biological Computing

The challenges of developing artificial neurons with Mott insulators.

Artificial spiking neural networks need to replicate both excitatory and inhibitory biological neurons in order to emulate the neural activation patterns seen in biological brains.

Doing this with CMOS-based designs is challenging because of the large circuit footprint required. However, researchers at HP Labs observed that one biologically plausible model, the Hodgkin-Huxley model, is mathematically equivalent to a system of two Mott insulators with parallel capacitors. In their Mott “neuristor” circuit, the four state variables are the metallic channel radii of the two Mott devices, plus the two capacitances.

Mott insulators have an abrupt transition from insulator to metal, which can be controlled either thermally or electrically. They can be modeled as a two-dimensional array of cells forming a resistor network. Each component cell may be in the low or high resistance state. The neuron fires when the number of low resistance cells reaches a critical threshold.
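
As a rough illustration of that picture, and not the researchers’ actual model, the Python sketch below treats a Mott device as a grid of cells that switch stochastically from the high-resistance to the low-resistance state under drive. The grid size, per-step switching probability, and firing fraction are all invented placeholder values.

    import numpy as np

    # Toy resistor-network view of a Mott device: a 2D grid of cells, each
    # either insulating (high resistance) or metallic (low resistance).
    # All constants below are illustrative, not taken from any real device.
    rng = np.random.default_rng(0)

    GRID = (20, 20)        # 2D array of component cells
    SWITCH_PROB = 0.05     # per-step chance a cell flips to low resistance
    FIRE_FRACTION = 0.4    # "fire" when 40% of cells have gone metallic

    low_res = np.zeros(GRID, dtype=bool)   # False = insulating, True = metallic

    for step in range(200):
        # Thermal/electrical drive stochastically switches insulating cells.
        low_res |= rng.random(GRID) < SWITCH_PROB
        if low_res.mean() >= FIRE_FRACTION:
            print(f"Fired at step {step}: {int(low_res.sum())} metallic cells")
            break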

Thermal fluctuations and variations in composition and structure can cause the threshold to vary, giving the overall firing behavior of the device a stochastic character. While the Mott transition takes place at cryogenic temperatures in many materials, VO2 offers a more accessible transition at 340 K.

In the HP Labs design, a series of electrical spikes gradually heats the device toward the transition temperature. When the threshold is reached, the metallic channel radius increases rapidly and the capacitor discharges into the circuit. Before another spike can occur, the capacitor must recover. Partial cooling between input spikes, corresponding to the “leakage” and signal decay seen in biological neurons, depends on the thermal conductivity of the substrate and contacts.
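
One way to see the leaky-integration analogy is a “thermal integrate-and-fire” sketch. Only the 340 K VO2 transition temperature comes from the description above; the heat added per spike, the leak rate, and the spike times are invented for illustration.

    # Hypothetical thermal integrate-and-fire neuron: input spikes deposit
    # heat, the substrate leaks it away, and the device fires when it reaches
    # the VO2 insulator-to-metal transition near 340 K.
    T_AMBIENT = 300.0      # K
    T_TRANSITION = 340.0   # K, VO2 transition temperature
    HEAT_PER_SPIKE = 12.0  # K added per input spike (illustrative)
    LEAK_RATE = 0.1        # fraction of excess heat lost per time step

    temperature = T_AMBIENT
    spike_times = {2, 5, 7, 8, 9, 30}   # clustered spikes fire; an isolated one does not

    for t in range(40):
        if t in spike_times:
            temperature += HEAT_PER_SPIKE
        if temperature >= T_TRANSITION:
            print(f"t={t}: fired at {temperature:.1f} K, capacitor discharges")
            temperature = T_AMBIENT       # reset while the capacitor recovers
        # Partial cooling between spikes plays the role of the biological leak.
        temperature -= LEAK_RATE * (temperature - T_AMBIENT)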

A version of this design, presented at the 2019 IEEE International Electron Devices Meeting by Wei Yi and colleagues at HRL Laboratories, can realize three different types of neurons by varying the impedance of the input stage:

  • Tonic: Fire a continuous train of spikes when stimulated by steady DC current.
  • Phasic: Fire only a single spike at the onset of steady current input, then remain quiescent.
  • Mixed-mode: Phasic burst followed by train of tonic spikes.

They were able to emulate 23 different firing behaviors known to occur in biological neurons. Different firing characteristics can be achieved through controllable circuit parameters, such as resistance and the area ratio of paired devices.

But earlier this year, Javier del Valle and colleagues at UC San Diego noted that using capacitors to store the state of each cell limits scalability. The capacitance must be larger than the parasitic capacitances of the electrode lines. Instead, they proposed the use of heat flow to perform computational tasks. In their design, also based on Mott insulators, current spikes from upstream neurons pass through a resistive heater that is thermally coupled to the firing element.

Neurons need learning rules
While spiking neuron behavior is a prerequisite for a spiking neural network implementation, it is not the only requirement. Alexandre Valentian, a research engineer at CEA-Leti, explained that such a network also needs to be able to encode data as chains of spikes, and needs to implement spike-driven learning rules. Outputs, also in the form of spikes, need to be translated to actions, or data interpretations, or whatever the network is intended to do. Spiking neural networks “learn” differently from conventional artificial neural networks, and as a result likely will require a ground-up reimagining of machine learning algorithms and neural network hardware. Algorithms for spiking neural networks depend on sensors that produce spike chains rather than numeric values and training sets based on those chains, both of which are only beginning to emerge.

In biological brains, as noted in part 1, different spiking behaviors are associated with different cognitive tasks. Some research suggests how sensory information might be encoded, but more abstract cognitive functions remain unclear. Though data encoding is a software task and doesn’t need to be defined when the network hardware is built, different approaches may be more or less compatible with specific hardware characteristics.

There are three primary alternatives. The simplest, spike-count coding, ignores variability in the input signal and simply counts the spikes received. Spike-count coding might, for instance, cause a neuron to fire after a certain number of objects pass a sensor.
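
As a toy example, and assuming nothing about any particular hardware, a spike-count neuron reduces to a counter with an arbitrary threshold (five here):

    # Spike-count coding: fire once a fixed number of spikes has arrived,
    # regardless of when they arrive. The threshold is an arbitrary choice.
    COUNT_THRESHOLD = 5

    def spike_count_neuron(spike_train):
        """spike_train: iterable of 0/1 events from a sensor."""
        count = 0
        for t, spike in enumerate(spike_train):
            count += spike
            if count >= COUNT_THRESHOLD:
                return t          # fires when the count is reached
        return None               # never fired

    print(spike_count_neuron([1, 0, 1, 1, 0, 1, 1, 0]))   # fires at t=6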

The second alternative, rate coding, is suitable for use with “leaky” neurons. The neuron only fires if the threshold number of objects passes the sensor within a given amount of time.
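
The difference shows up in a sketch like the one below, where an assumed leak factor makes the running count decay between spikes, so spikes spread out in time never trigger the neuron but closely spaced ones do. Both constants are illustrative.

    # Rate coding with a leaky neuron: the accumulated count decays each
    # step, so only spikes that arrive close together in time add up.
    LEAK = 0.7        # fraction of the accumulated level kept per step
    THRESHOLD = 2.5

    def leaky_rate_neuron(spike_train):
        level = 0.0
        for t, spike in enumerate(spike_train):
            level = level * LEAK + spike
            if level >= THRESHOLD:
                return t
        return None

    print(leaky_rate_neuron([1, 0, 0, 1, 0, 0, 1, 0, 0, 1]))  # None: too spread out
    print(leaky_rate_neuron([1, 1, 1, 1, 1]))                 # fires at t=3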

The third alternative, temporal coding, begins to resemble biological behavior. It considers the intensity of data spikes, not just their frequency. Stronger signals — larger objects passing a sensor, louder noises in an area — lead to more immediate spikes. Pierre Falez and colleagues at the University of Lille explained that increasing the firing threshold for such a network allows it to recognize larger patterns. It’s difficult to know the “optimal” firing threshold in advance, though. Different potentially interesting patterns may have different data signatures. This group’s approach to firing optimization trained neurons to fire at a target time. The training algorithm finds the threshold value that allows them to do so.
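
In the spirit of that approach, though not the group’s actual algorithm, the sketch below nudges a leaky neuron’s threshold up when it fires earlier than a target time and down when it fires later or not at all. Every constant here is invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    inputs = rng.random(50)    # stand-in for a stream of weighted input spikes
    TARGET_TIME = 30           # desired firing time
    LEAK = 0.9                 # leak factor of the integrating neuron
    LEARN_RATE = 0.05          # step size for threshold adjustment

    def fire_time(threshold):
        level = 0.0
        for t, x in enumerate(inputs):
            level = level * LEAK + x
            if level >= threshold:
                return t
        return None            # never reached threshold

    threshold = 2.0
    for epoch in range(200):
        t_fire = fire_time(threshold)
        if t_fire == TARGET_TIME:
            break
        if t_fire is None or t_fire > TARGET_TIME:
            threshold -= LEARN_RATE    # fire sooner next time
        else:
            threshold += LEARN_RATE    # fire later next time

    print(f"threshold {threshold:.2f} fires at t={fire_time(threshold)}")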

If neurons are the brain’s signal processors, synapses are the interconnects. In biological brains, synaptic connections are plastic. As the individual learns, frequently used associations get stronger, while less used associations get weaker. “What fires together, wires together,” in other words. Moreover, biological brains can retain these associations for seconds, or for decades.

A successful spiking neural network will need learning rules that accommodate the temporal nature of spike-based data. One of the most commonly proposed alternatives is spike-timing-dependent plasticity, which updates synaptic weights according to the difference in firing time between input and output neurons. When a pre-synaptic neuron spikes before a post-synaptic neuron, the connection is reinforced. If the post-synaptic neuron fires first, the connection is weakened. This rule is intuitive, but inherently local. In biological brains, some stimuli can trigger activation across large brain regions. It’s not yet known how the brain establishes or optimizes such global associations.
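
A minimal pair-based STDP update makes the asymmetry concrete. The amplitudes and time constant below are illustrative values, not drawn from any specific paper or device.

    import math

    A_PLUS, A_MINUS = 0.05, 0.055   # potentiation / depression amplitudes
    TAU = 20.0                      # ms, width of the timing window

    def stdp_delta(t_pre, t_post):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:      # pre fired before post: strengthen the synapse
            return A_PLUS * math.exp(-dt / TAU)
        if dt < 0:      # post fired before pre: weaken it
            return -A_MINUS * math.exp(dt / TAU)
        return 0.0

    weight = 0.5
    weight += stdp_delta(t_pre=10.0, t_post=14.0)   # causal pairing -> stronger
    weight += stdp_delta(t_pre=30.0, t_post=22.0)   # anti-causal -> weaker
    print(round(weight, 4))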

How to make all those wires
In any discussion of neural networks, the importance of connectivity comes up again and again. It’s not unusual for a biological neuron to connect directly to several thousand of its peers. Though integrated circuits routinely incorporate millions or billions of transistors, the gates and small logic blocks that make up a circuit have much more limited connectivity. Three-dimensional circuit integration offers advantages in many applications, helping to shrink mobile devices and wedge sensors into smaller spaces. For neural networks, though, it’s not just an advantage. It’s essential.
