Compute-In Memory Accelerators Up-End Network Design Tradeoffs


An explosion in the amount of data, coupled with the performance and power penalties of moving that data, is rekindling interest in in-memory processing as an alternative to shuttling data back and forth between memory and processor. Compute-in-memory (CIM) arrays based on conventional memory elements such as DRAM and NAND flash, as well as emerging non-volatile memori... » read more
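As a rough illustration of the idea, the sketch below models a resistive crossbar in which weights are stored as cell conductances and a matrix-vector product is computed in place by summing bitline currents (Ohm's law plus Kirchhoff's current law), rather than fetching each weight into a separate processor. The array sizes, conductance range, and differential weight mapping are illustrative assumptions, not parameters of any particular device described in the article.

```python
import numpy as np

# Toy model of a compute-in-memory crossbar: weights live in the array as
# conductances G, inputs arrive as wordline voltages V, and each bitline
# current is the analog dot product I_j = sum_i V_i * G_ij.
# All sizes and ranges below are illustrative assumptions.

rng = np.random.default_rng(0)

n_inputs, n_outputs = 64, 16
weights = rng.normal(size=(n_inputs, n_outputs))   # trained weights (arbitrary values)
g_max = 1e-6                                       # assumed maximum cell conductance, siemens

# Map signed weights onto a differential pair of columns (G+ minus G-),
# a common way to represent negative weights with non-negative memory cells.
scale = np.abs(weights).max()
g_pos = np.clip(weights, 0, None) / scale * g_max
g_neg = np.clip(-weights, 0, None) / scale * g_max

v_in = rng.uniform(0.0, 0.2, size=n_inputs)        # wordline voltages, volts (assumed range)

# "Computation" is just current summation on the bitlines -- no weight movement.
i_out = v_in @ g_pos - v_in @ g_neg                # bitline currents, amps

# Digital reference: the same matrix-vector product done explicitly.
reference = (v_in @ weights) * (g_max / scale)
print(np.allclose(i_out, reference))               # True: same math, different physics
```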

3D Neuromorphic Architectures


Matrix multiplication is a critical operation in conventional neural networks. Each node of the network receives an input signal, multiplies it by some predetermined weight, and passes the result to the next layer of nodes. While the nature of the signal, the method used to determine the weights, and the desired result will all depend on the specific application, the computational task is simpl... » read more
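To make the description concrete, here is a minimal sketch of that operation for one fully connected layer: each node computes a weighted sum of its inputs, and for the layer as a whole those sums collapse into a single matrix multiplication. The layer sizes, activation function, and weight values are placeholders, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# One fully connected layer: every node multiplies its inputs by predetermined
# weights and passes the result (after an activation) to the next layer.
inputs = rng.normal(size=(1, 8))          # one input vector with 8 features (illustrative)
weights = rng.normal(size=(8, 4))         # 8 inputs feeding 4 nodes in the next layer
bias = np.zeros(4)

pre_activation = inputs @ weights + bias  # the matrix multiplication itself
outputs = np.maximum(pre_activation, 0)   # ReLU, one common choice of activation

print(outputs.shape)                      # (1, 4): signals handed to the next layer
```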

Toward Neuromorphic Designs


Part one of this series considered the mechanisms of learning and memory in biological brains. Each neuron has many fibers, which connect to adjacent neurons at synapses. The concentration of ions such as potassium and calcium inside the cell is different from the concentration outside. The cellular membrane thus serves as a capacitor. When a stimulus is received, the neuron releases neur... » read more
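One simple way to capture the "membrane as capacitor" picture in simulation is a leaky integrate-and-fire model: incoming current charges the membrane capacitance, charge leaks away through the membrane resistance, and a spike is emitted when the voltage crosses a threshold. The parameter values below are illustrative defaults, not figures taken from the series.

```python
import numpy as np

# Leaky integrate-and-fire neuron: the membrane acts as a capacitor C in
# parallel with a leak resistance R, so dV/dt = (-(V - V_rest) + R*I) / (R*C).
# Parameter values are illustrative assumptions.

C = 200e-12        # membrane capacitance, farads
R = 100e6          # leak resistance, ohms
v_rest = -70e-3    # resting potential, volts
v_thresh = -50e-3  # spike threshold, volts
v_reset = -70e-3   # reset potential after a spike, volts

dt = 0.1e-3        # time step, seconds
steps = 5000
i_in = np.zeros(steps)
i_in[1000:4000] = 300e-12   # a 300 pA stimulus pulse (assumed)

v = v_rest
spikes = []
for t in range(steps):
    # The capacitor charges from the input current and leaks back toward rest.
    dv = (-(v - v_rest) + R * i_in[t]) / (R * C) * dt
    v += dv
    if v >= v_thresh:        # threshold crossed: fire and reset
        spikes.append(t * dt)
        v = v_reset

print(f"{len(spikes)} spikes, first at {spikes[0]*1000:.1f} ms" if spikes else "no spikes")
```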