Enabling Training of Neural Networks on Noisy Hardware


Abstract:  "Deep neural networks (DNNs) are typically trained using the conventional stochastic gradient descent (SGD) algorithm. However, SGD performs poorly when applied to train networks on non-ideal analog hardware composed of resistive device arrays with non-symmetric conductance modulation characteristics. Recently we proposed a new algorithm, the Tiki-Taka algorithm, that overcomes t... » read more

Accelerating Inference of Convolutional Neural Networks Using In-memory Computing


Abstract: "In-memory computing (IMC) is a non-von Neumann paradigm that has recently established itself as a promising approach for energy-efficient, high throughput hardware for deep learning applications. One prominent application of IMC is that of performing matrix-vector multiplication in (1) time complexity by mapping the synaptic weights of a neural-network layer to the devices of a... » read more
