Knowledge Center

Recurrent Neural Network (RNN)

An artificial neural network that finds patterns in sequential data by combining new inputs with information stored in memory.


Used for speech recognition, text recognition, and other pattern-recognition tasks, a recurrent neural network (RNN) is an artificial neural network that finds patterns in data using memory. An RNN draws on two sources of input to produce an answer: the current sample and information held in memory.

RNNs tend to require more resources for both inferencing and training. Inferencing proceeds step by step, taking in new data one sample at a time and combining it with data held in memory. “We used a recurrent neural network because it has memory,” said MbientLab CEO Laura Kassovic. “That means that whenever you train it, it has memory in the sense that current inputs are affected by previous inputs, which is normal. If you’re doing a speech recognition algorithm, the previous syllable affects the next syllable. Together they create a word, so you need to have a memory to know what the full word is.” The data is weighted sequentially, with one mathematical operation per weight, as Carlos Macian, senior director of innovation for eSilicon EMEA, explains. “There is a lot more bringing in and sending out of weights and data. You need to optimize the memory itself, the access to the memory, and the data.”

RNNs also use hidden states to carry information forward from one time step to the next.
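The recurrence described above can be sketched in a few lines of NumPy. This is an illustrative toy, not a production implementation: the function name `rnn_step` and the tiny dimensions are assumptions made for the example, and the weights are random rather than trained.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # The new hidden state mixes the current input with the previous
    # hidden state -- this is how "current inputs are affected by
    # previous inputs."
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 4, 8, 5

# Random (untrained) weights, purely for illustration.
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)  # the memory starts empty
for x_t in rng.normal(size=(seq_len, input_dim)):
    # One step per time sample: each step reads and writes the memory,
    # which is why the per-sample cost includes memory traffic for the
    # weights and hidden state, not just arithmetic.
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)

print(h.shape)
```

Note that the loop cannot be parallelized across time steps, since each step depends on the previous hidden state; this sequential dependency is a large part of why RNNs are costly to run.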

The long short-term memory (LSTM) network is a type of RNN designed to retain information over longer sequences.
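A minimal sketch of a single LSTM cell step, assuming the standard gate equations; the helper name `lstm_step` and the packed weight layout are choices made for this example, not a library API. The gates decide what to forget, what to write into memory, and what to expose, which is what lets an LSTM hold information longer than a plain RNN.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # All four gates are computed from one packed weight matrix W.
    z = np.concatenate([x_t, h_prev]) @ W + b
    H = h_prev.size
    f = sigmoid(z[:H])        # forget gate: how much old memory to keep
    i = sigmoid(z[H:2*H])     # input gate: how much new info to write
    o = sigmoid(z[2*H:3*H])   # output gate: how much memory to expose
    g = np.tanh(z[3*H:])      # candidate cell update
    c = f * c_prev + i * g    # cell state: the long-term memory
    h = o * np.tanh(c)        # hidden state: what the next step sees
    return h, c

rng = np.random.default_rng(1)
input_dim, hidden_dim = 3, 6

# Random (untrained) weights for all four gates, stacked column-wise.
W = rng.normal(scale=0.1, size=(input_dim + hidden_dim, 4 * hidden_dim))
b = np.zeros(4 * hidden_dim)

h = np.zeros(hidden_dim)
c = np.zeros(hidden_dim)
for x_t in rng.normal(size=(4, input_dim)):
    h, c = lstm_step(x_t, h, c, W, b)

print(h.shape, c.shape)
```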

The RNN is falling out of favor as better tools become available. Still, recurrent neural networks remain essential where the time dimension is a critical factor, such as in security or mil/aero applications. But the data being collected needs to be pared down to what is useful as quickly as possible, and that is where performance tends to bog down. At this point, there is no obvious solution to that problem.