Asynchronously Parallel Optimization Method For Sizing Analog Transistors Using Deep Neural Network Learning


A new technical paper titled "APOSTLE: Asynchronously Parallel Optimization for Sizing Analog Transistors Using DNN Learning" was published by researchers at UT Austin and Analog Devices. Abstract "Analog circuit sizing is a high-cost process in terms of the manual effort invested and the computation time spent. With rapidly developing technology and high market demand, bringing automated s... » read more

Review of Tools & Techniques for DL Edge Inference


A new technical paper titled "Efficient Acceleration of Deep Learning Inference on Resource-Constrained Edge Devices: A Review" was published in "Proceedings of the IEEE" by researchers at University of Missouri and Texas Tech University. Abstract: Successful integration of deep neural networks (DNNs) or deep learning (DL) has resulted in breakthroughs in many areas. However, deploying thes... » read more

Memory and Energy-Efficient Batch Normalization Hardware


A new technical paper titled "LightNorm: Area and Energy-Efficient Batch Normalization Hardware for On-Device DNN Training" was published by researchers at DGIST (Daegu Gyeongbuk Institute of Science and Technology). The work was supported by Samsung Research Funding Incubation Center. Abstract: "When training early-stage deep neural networks (DNNs), generating intermediate features via con... » read more

Multiexpert Adversarial Regularization For Robust And Data-Efficient Deep Supervised Learning


Deep neural networks (DNNs) can achieve high accuracy when there is abundant training data that has the same distribution as the test data. In practical applications, data deficiency is often a concern. For classification tasks, the lack of enough labeled images in the training set often results in overfitting. Another issue is the mismatch between the training and the test domains, which resul... » read more
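
The paper's specific multiexpert scheme is beyond this teaser, but its general ingredient, adversarial regularization, can be shown compactly: perturb each input in the direction that most increases the loss, then penalize the loss on the perturbed copy as well. A minimal numpy sketch for logistic regression with an FGSM-style perturbation (the multiexpert machinery itself is not reproduced here):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adv_regularized_grad(w, X, y, eps=0.1, lam=0.5):
    # Labels y in {-1, +1}; loss is log(1 + exp(-y * w.x)) on clean and
    # adversarially perturbed inputs, weighted by lam.
    def grad_w(Xb):
        s = sigmoid(-y * (Xb @ w))
        return -(s * y) @ Xb / len(y)
    # FGSM: step each input along the sign of the input-gradient of the loss.
    s0 = sigmoid(-y * (X @ w))
    X_adv = X + eps * np.sign(-(s0 * y)[:, None] * w[None, :])
    return grad_w(X) + lam * grad_w(X_adv)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
w = np.zeros(5)
for _ in range(200):
    w -= 0.1 * adv_regularized_grad(w, X, y)
print("train accuracy:", np.mean(np.sign(X @ w) == y))

The adversarial term acts as a data-dependent smoothness penalty, which is why this family of methods helps precisely when labeled data is scarce.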

Using Silicon Photonics To Reduce Latency On Edge Devices


A new technical paper titled "Delocalized photonic deep learning on the internet’s edge" was published by researchers at MIT and Nokia Corporation. “Every time you want to run a neural network, you have to run the program, and how fast you can run the program depends on how fast you can pipe the program in from memory. Our pipe is massive — it corresponds to sending a full feature-leng... » read more

Vulnerability of Neural Networks Deployed As Black Boxes Across Accelerated HW Through Electromagnetic Side Channels


This technical paper titled "Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel" was presented by researchers at Columbia University, Adobe Research and University of Toronto at the 31st USENIX Security Symposium in August 2022. Abstract: "Neural network applications have become popular in both enterprise and personal settings. Network solutions are tune... » read more

Simulation Framework to Evaluate the Feasibility of Large-scale DNNs based on CIM Architecture & Analog NVM


Technical paper titled "Accuracy and Resiliency of Analog Compute-in-Memory Inference Engines" from researchers at UCLA. Abstract "Recently, analog compute-in-memory (CIM) architectures based on emerging analog non-volatile memory (NVM) technologies have been explored for deep neural networks (DNNs) to improve scalability, speed, and energy efficiency. Such architectures, however, leverage ... » read more

Neuromorphic Chips & Power Demands


A research paper titled "A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware" was published by researchers at Graz University of Technology and Intel Labs. Abstract: "Spike-based neuromorphic hardware holds the promise to provide more energy efficient implementations of Deep Neural Networks (DNNs) than standard hardware such as GPUs. But this requires to understand how D... » read more
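
A common building block in this line of work is a leaky integrate-and-fire neuron whose firing threshold adapts after each spike, letting the network retain information over long timescales without explicit LSTM gates. A minimal sketch of such a neuron; the time constants, step sizes, and stimulus are illustrative assumptions, not parameters from the paper:

import numpy as np

def alif_run(I, tau_v=20.0, tau_a=200.0, v_th=1.0, beta=0.3, dt=1.0):
    # Adaptive LIF: each spike raises the effective threshold, and the raised
    # part decays slowly (tau_a >> tau_v), so recent activity is remembered
    # long after the membrane voltage has leaked away.
    v, a, spikes = 0.0, 0.0, []
    for i_t in I:
        v += dt / tau_v * (-v + i_t)
        a *= np.exp(-dt / tau_a)
        s = 1 if v > v_th + beta * a else 0
        if s:
            v, a = 0.0, a + 1.0   # reset membrane, bump adaptation variable
        spikes.append(s)
    return np.array(spikes)

I = np.concatenate([np.full(100, 2.0), np.zeros(50), np.full(100, 2.0)])
s = alif_run(I)
print("spikes during first vs. second stimulus:", s[:100].sum(), s[150:].sum())

The second, identical stimulus evokes fewer spikes because the adaptation variable persists across the gap; that lingering state is what gives spiking networks LSTM-like memory.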

Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices


Abstract: "Recent advances in deep learning have been driven by ever-increasing model sizes, with networks growing to millions or even billions of parameters. Such enormous models call for fast and energy-efficient hardware accelerators. We study the potential of Analog AI accelerators based on Non-Volatile Memory, in particular Phase Change Memory (PCM), for software-equivalent accurate i... » read more
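
A central nonideality for PCM-stored weights is conductance drift, commonly modeled as a power law, G(t) = G(t0) * (t/t0)^(-nu), which must be compensated to hold accuracy over time. A short sketch of drift and a simple global scale-out correction; the drift exponent is an assumed typical value, not a figure from the paper:

import numpy as np

def drift(G0, t, t0=1.0, nu=0.05):
    # PCM conductance drift: G(t) = G(t0) * (t / t0) ** (-nu).
    return G0 * (t / t0) ** (-nu)

rng = np.random.default_rng(2)
G = np.abs(rng.normal(size=(4, 4)))   # conductances are non-negative
for t in [1.0, 3600.0, 86400.0]:      # 1 s, 1 hour, 1 day after programming
    Gt = drift(G, t)
    # Global compensation: one shared scale factor per layer. It is exact here
    # because every cell shares the same nu; real devices vary cell to cell.
    Gc = Gt * (t / 1.0) ** 0.05
    print(f"t={t:>8.0f}s  mean drop {100 * (1 - Gt.mean() / G.mean()):.1f}%  "
          f"after compensation {100 * abs(1 - Gc.mean() / G.mean()):.2f}%")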

FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator


Abstract: "Recent work demonstrated the promise of using resistive random access memory (ReRAM) as an emerging technology to perform inherently parallel analog domain in-situ matrix-vector multiplication—the intensive and key computation in deep neural networks (DNNs). One key problem is the weights that are signed values. However, in a ReRAM crossbar, weights are stored as conductance of... » read more
