Accelerating Inference of Convolutional Neural Networks Using In-memory Computing


Abstract: "In-memory computing (IMC) is a non-von Neumann paradigm that has recently established itself as a promising approach for energy-efficient, high throughput hardware for deep learning applications. One prominent application of IMC is that of performing matrix-vector multiplication in (1) time complexity by mapping the synaptic weights of a neural-network layer to the devices of a... » read more

A graph placement methodology for fast chip design


Abstract "Chip floorplanning is the engineering task of designing the physical layout of a computer chip. Despite five decades of research1, chip floorplanning has defied automation, requiring months of intense effort by physical design engineers to produce manufacturable layouts. Here we present a deep reinforcement learning approach to chip floorplanning. In under six hours, our method autom... » read more

Neuromorphic electronics based on copying and pasting the brain


Abstract: "Reverse engineering the brain by mimicking the structure and function of neuronal networks on a silicon integrated circuit was the original goal of neuromorphic engineering, but remains a distant prospect. The focus of neuromorphic engineering has thus been relaxed from rigorous brain mimicry to designs inspired by qualitative features of the brain, including event-driven sign... » read more

Energy-efficient memcapacitor devices for neuromorphic computing


Abstract: Data-intensive computing operations, such as training neural networks, are essential for applications in artificial intelligence but are energy intensive. One solution is to develop specialized hardware onto which neural networks can be directly mapped, and arrays of memristive devices can, for example, be trained to enable parallel multiply–accumulate operations. Here we show that ... » read more
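
For a capacitive device the multiply-accumulate works the same way as in the memristive case, except the product is read out as stored charge (q = C·v per device) rather than a steady current, so almost no static power is drawn. A hypothetical toy model, not code from the paper:

```python
import numpy as np

def memcapacitive_mac(W, v_in, c_unit=1e-12):
    """Charge-domain multiply-accumulate on a capacitor array (toy).

    Each weight is a programmable capacitance; applying the inputs as
    voltages deposits q = C * v on every device, and the summed column
    charge is one dot product, read with essentially no DC current.
    """
    q_cols = (W * c_unit).T @ v_in    # column charges q = C^T v
    return q_cols / c_unit            # rescale back to weight units

W = np.array([[0.2, 0.8], [0.5, 0.1], [0.9, 0.4]])  # 3 inputs, 2 outputs
print(memcapacitive_mac(W, np.array([1.0, 0.5, 0.25])))  # [0.675 0.95]
```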

Modeling electrical conduction in resistive-switching memory through machine learning


Published in AIP Advances on July 13, 2021. Read the full paper (open access). Abstract: Traditional physics-based models have generally been used to model the resistive-switching behavior of resistive-switching memory (RSM). Recently, vacancy-based conduction-filament (CF) growth models have been used to model device characteristics of a wide range of RSM devices. However, few have focused o... » read more
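
The approach, learning device conduction from data instead of deriving it from filament physics, can be sketched with an off-the-shelf regressor. Below, a small neural network is fit to synthetic current-voltage data; both the data generator and the model choice are stand-ins, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in for measured I-V sweeps of an RSM device: a
# Poole-Frenkel-like conduction curve plus measurement noise.
rng = np.random.default_rng(0)
v = np.linspace(0.05, 1.0, 200)
i = 1e-6 * v * np.exp(2.0 * np.sqrt(v)) * (1 + rng.normal(0, 0.02, v.size))

# Fit log-current so the target stays well scaled across decades.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                     random_state=0)
model.fit(v.reshape(-1, 1), np.log(i))
i_pred = np.exp(model.predict(np.array([[0.5]])))
print(f"predicted current at 0.5 V: {i_pred[0]:.3e} A")
```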

2D materials–based homogeneous transistor-memory architecture for neuromorphic hardware


Abstract "In neuromorphic hardware, peripheral circuits and memories based on heterogeneous devices are generally physically separated. Thus exploring homogeneous devices for these components is an important issue for improving module integration and resistance matching. Inspired by ferroelectric proximity effect on two-dimensional materials, we present a tungsten diselenide-on-LiNbO3 cascaded... » read more

FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator


Abstract: "Recent work demonstrated the promise of using resistive random access memory (ReRAM) as an emerging technology to perform inherently parallel analog domain in-situ matrix-vector multiplication—the intensive and key computation in deep neural networks (DNNs). One key problem is the weights that are signed values. However, in a ReRAM crossbar, weights are stored as conductance of... » read more

Graphene-based PUFs that are reconfigurable and resilient to ML attacks


Researchers at Pennsylvania State University propose using graphene to create physically unclonable functions (PUFs) that are energy efficient, scalable, and secure against AI attacks. Abstract "Graphene has a range of properties that makes it suitable for building devices for the Internet of Things. However, the deployment of such devices will also likely require the development of s... » read more
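
A PUF turns uncontrollable fabrication variation into a device-unique challenge-response function; the headline evaluation metric is uniqueness, i.e. two independently fabricated chips should disagree on roughly half of all challenges. The simulation below is purely illustrative (random draws stand in for graphene-device spread), not the paper's device model.

```python
import numpy as np

def puf_response(variations, challenge):
    """Toy PUF: a challenge picks two devices; the response bit is
    which one happens to conduct more, set by fabrication randomness."""
    a, b = challenge
    return int(variations[a] > variations[b])

rng = np.random.default_rng(0)
n_dev, n_chal = 64, 128
challenges = [tuple(rng.choice(n_dev, 2, replace=False))
              for _ in range(n_chal)]

# Two "chips" with independent process variation.
chip1, chip2 = rng.normal(size=n_dev), rng.normal(size=n_dev)
r1 = np.array([puf_response(chip1, c) for c in challenges])
r2 = np.array([puf_response(chip2, c) for c in challenges])
print("inter-chip Hamming distance:", np.mean(r1 != r2))  # ~0.5
```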

REDUCT: Keep It Close, Keep It Cool – Scaling DNN Inference on Multi-Core CPUs with Near-Cache Compute


Abstract—"Deep Neural Networks (DNN) are used in a variety of applications and services. With the evolving nature of DNNs, the race to build optimal hardware (both in datacenter and edge) continues. General purpose multi-core CPUs offer unique attractive advantages for DNN inference at both datacenter [60] and edge [71]. Most of the CPU pipeline design complexity is targeted towards optimizin... » read more
