
Research Bits: June 14


Photonic Deep Neural Network Chip


Engineers from the University of Pennsylvania built a photonic deep neural network on a 9.3 square millimeter chip they say is faster and more efficient at classifying images, with the ability to process nearly two billion images a second. The chip uses a series of waveguides that form 'neuron layers' mimicking the brain. "Our chip processes information ... » read more

Analog Edge Inference with ReRAM


Abstract "As the demands of big data applications and deep learning continue to rise, the industry is increasingly looking to artificial intelligence (AI) accelerators. Analog in-memory computing (AiMC) with emerging nonvolatile devices enables good hardware solutions, due to its high energy efficiency in accelerating the multiply-and-accumulate (MAC) operation. Herein, an Applied Materials... » read more
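The MAC operation the abstract refers to is what an AiMC crossbar computes physically: weights are stored as device conductances, inputs are applied as voltages, and each column current sums the products by Ohm's and Kirchhoff's laws. A minimal numerical sketch of that idea (illustrative only, not the paper's design):

```python
import numpy as np

# Hypothetical crossbar: weights stored as conductances G, inputs applied
# as voltages V. Each column current I_j = sum_i V_i * G_ij, i.e. one
# multiply-and-accumulate (MAC) per column, computed in a single step.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 rows (inputs) x 3 columns (outputs)
V = rng.uniform(0.0, 0.5, size=4)        # input voltages (activations)

I = V @ G                                # analog MAC: vector of column currents
```

The energy advantage comes from the multiply and the accumulate both happening in the analog domain, in place, instead of shuttling weights between memory and a digital ALU.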

Exploring far-from-equilibrium ultrafast polarization control in ferroelectric oxides with excited-state neural network quantum molecular dynamics


New academic paper out of USC Viterbi School of Engineering: Abstract "Ferroelectric materials exhibit a rich range of complex polar topologies, but their study under far-from-equilibrium optical excitation has been largely unexplored because of the difficulty in modeling the multiple spatiotemporal scales involved quantum-mechanically. To study optical excitation at spatiotemporal scales w... » read more

A Framework For Ultra Low-Power Hardware Accelerators Using NNs For Embedded Time Series Classification


In embedded applications that use neural networks (NNs) for classification tasks, it is important to minimize not only the power consumption of the NN calculation but that of the whole system. Optimization approaches for individual parts exist, such as quantization of the NN or analog calculation of arithmetic operations. However, there is no holistic approach for a complete embedded system design ... » read more

Next Generation Reservoir Computing


Abstract: "Reservoir computing is a best-in-class machine learning algorithm for processing information generated by dynamical systems using observed time-series data. Importantly, it requires very small training data sets, uses linear optimization, and thus requires minimal computing resources. However, the algorithm uses randomly sampled matrices to define the underlying recurrent neural n... » read more
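For context on the randomly sampled recurrent matrix the abstract mentions, here is a minimal echo-state-network sketch of classic reservoir computing: a fixed random reservoir is driven by the input, and only a linear readout is trained. This is an illustrative sketch of the standard algorithm, not the paper's next-generation variant (which replaces the random reservoir with nonlinear functionals of the input):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                           # reservoir size
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state)
W_in = rng.normal(size=N)

u = np.sin(np.linspace(0, 20, 400))              # input time series
target = np.roll(u, -1)                          # task: predict the next sample

x = np.zeros(N)
states = []
for u_t in u:
    x = np.tanh(W @ x + W_in * u_t)              # fixed, untrained reservoir update
    states.append(x)
X = np.array(states)

# Only the linear readout is trained, via ridge regression -- the
# "linear optimization" and small training cost the abstract highlights.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ target)
pred = X @ W_out
```

Because `W` and `W_in` are random and never trained, performance can vary between random draws, which is the limitation the next-generation formulation aims to remove.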

Case Study — Deep Learning For Corner Fill Inspection And Metrology On Integrated Circuits


CyberOptics utilized deep learning to accurately inspect the corner fill on integrated circuits (ICs) produced by a large memory supplier. Traditional methods of inspection showed limitations in their ability to entirely detect the presence and absence of fill, indicating that a more advanced approach was necessary. CyberOptics drew on its large pool of algorithm and neural network expertise to... » read more

Adaptive NN-Based Root Cause Analysis in Volume Diagnosis for Yield Improvement


Abstract "Root Cause Analysis (RCA) is a critical technology for yield improvement in integrated circuit manufacture. Traditional RCA prefers unsupervised algorithms such as Expectation Maximization based on Bayesian models. However, these methods are severely limited by the weak predictive capability of statistical models and can’t effectively transfer the yield learning experience from old... » read more

Absence of Barren Plateaus in Quantum Convolutional Neural Networks


Abstract: Quantum neural networks (QNNs) have generated excitement around the possibility of efficiently analyzing quantum data. But this excitement has been tempered by the existence of exponentially vanishing gradients, known as barren plateau landscapes, for many QNN architectures. Recently, quantum convolutional neural networks (QCNNs) have been proposed, involving a sequence of convol... » read more

Energy-efficient memcapacitor devices for neuromorphic computing


Abstract Data-intensive computing operations, such as training neural networks, are essential for applications in artificial intelligence but are energy intensive. One solution is to develop specialized hardware onto which neural networks can be directly mapped, and arrays of memristive devices can, for example, be trained to enable parallel multiply–accumulate operations. Here we show that ... » read more

Hardware Architecture and Software Stack for PIM Based on Commercial DRAM Technology


Abstract: "Emerging applications such as deep neural network demand high off-chip memory bandwidth. However, under stringent physical constraints of chip packages and system boards, it becomes very expensive to further increase the bandwidth of off-chip memory. Besides, transferring data across the memory hierarchy constitutes a large fraction of total energy consumption of systems, and the ... » read more
