
Spiking Neural Network (SNN)

A type of neural network that attempts to more closely model the brain.

Description

Spiking neural networks (SNNs) are a type of neural network based around changes to an existing state. Spike-based approaches seek to more closely model the dynamics of learning in biological brains, with chains of signal spikes corresponding to incoming stimuli spread out over time.

One key benefit is a reduction in the power required for AI processing: unlike a CNN, which computes on all of its inputs all the time, an SNN deals only with events. Those events occur only when enough spikes have accumulated in an artificial neuron to cross a certain threshold, at which point a spike is sent to downstream neurons.
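
As a rough back-of-the-envelope sketch (the layer sizes and spike rate below are illustrative assumptions, not measurements from any particular chip), the saving can be seen by counting multiply-accumulate operations for a conventional dense layer versus an event-driven one:

# Rough sketch of the sparsity argument (sizes and spike rate are illustrative
# assumptions): a dense layer touches every weight every timestep, while an
# event-driven layer only does work for inputs that actually spiked.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, timesteps = 1000, 100, 100

# Sparse spike trains: each input neuron fires on roughly 2% of timesteps.
spikes = rng.random((timesteps, n_in)) < 0.02

dense_macs = timesteps * n_in * n_out        # dense: every weight, every timestep
event_macs = int(spikes.sum()) * n_out       # event-driven: only rows whose input spiked

print(f"dense MACs: {dense_macs:,}, event-driven MACs: {event_macs:,}")

With inputs spiking only a few percent of the time, the event-driven count is a small fraction of the dense count, which is the basic argument for lower power.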

SNN implementations are an active area of research and development, both in academia and in industry. Some implementations are more brain-like than others.

The simplest mathematical models and hardware implementations that capture these basic behaviors are “integrate and fire” (IF) models. A neuron collects input spikes from upstream neurons and fires a spike downstream whenever the accumulated input exceeds a threshold.
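
A minimal discrete-time sketch of such a neuron, with a weight and threshold chosen purely for illustration, might look like this:

# Minimal integrate-and-fire (IF) neuron sketch (illustrative values only).
def run_if_neuron(input_spikes, weight=1.0, threshold=4.0):
    """Accumulate weighted input spikes; fire when the threshold is crossed."""
    potential = 0.0
    output_spikes = []
    for t, spike in enumerate(input_spikes):
        potential += weight * spike          # integrate incoming spikes
        if potential >= threshold:           # threshold crossed
            output_spikes.append(t)          # fire a spike downstream
            potential = 0.0                  # reset after firing
    return output_spikes

# Example: the neuron fires at timestep 5, once four spikes have accumulated.
print(run_if_neuron([1, 0, 1, 1, 0, 1, 1, 1]))   # -> [5]

Every input spike raises the potential, and once the potential reaches the threshold the neuron fires and resets.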

“Leaky integrate and fire” (LIF) models, the next step up in sophistication, introduce a decay period: the potential difference at the cell membrane dissipates over time. To cause a downstream spike, inputs must arrive quickly enough to raise the potential difference faster than it leaks away. These models therefore introduce a memory effect; the state variable evolves over time, and the spiking behavior depends on that history.
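
Extending the same sketch, the leak can be modeled by subtracting a small amount from the potential at every timestep (again, the leak rate and other values are illustrative assumptions):

# Minimal leaky integrate-and-fire (LIF) neuron sketch (illustrative values only).
def run_lif_neuron(input_spikes, weight=1.0, threshold=4.0, leak=0.5):
    """Like IF, but the potential decays each timestep, so only sufficiently
    rapid input can reach the threshold."""
    potential = 0.0
    output_spikes = []
    for t, spike in enumerate(input_spikes):
        potential = max(0.0, potential - leak)   # leak: potential decays over time
        potential += weight * spike              # integrate incoming spikes
        if potential >= threshold:
            output_spikes.append(t)              # fire a spike downstream
            potential = 0.0                      # reset after firing
    return output_spikes

# A sparse spike train leaks away before reaching the threshold...
print(run_lif_neuron([1, 0, 0, 1, 0, 0, 1, 0]))   # -> []
# ...while a rapid train outruns the leak and triggers an output spike.
print(run_lif_neuron([1, 1, 1, 1, 1, 1, 1, 1]))   # -> [6]

Only the faster spike train raises the potential more quickly than it leaks away, which is the memory effect described above.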


IF and LIF neuron behavior, idealized for illustration. Note that, in the second case, the threshold is never reached due to the leakage. Neurons may also have a refractory period during which they can accumulate but not fire. Source: Bryon Moyer/Semiconductor Engineering

Several companies are currently working to commercialize SNNs. Both analog and digital versions are being explored, in CMOS and in alternative technologies. A challenge at present is training SNNs, as some approaches cannot simply reuse (transcode) the results of classical training.