Optimizing Projected PCM for Analog Computing-In-Memory Inferencing (IBM)


A new technical paper titled "Optimization of Projected Phase Change Memory for Analog In-Memory Computing Inference" was published by researchers at IBM Research. "A systematic study of the electrical properties-including resistance values, memory window, resistance drift, read noise, and their impact on the accuracy of large neural networks of various types and with tens of millions of wei... » read more

Low-Power Heterogeneous Compute Cluster For TinyML DNN Inference And On-Chip Training


A new technical paper titled "DARKSIDE: A Heterogeneous RISC-V Compute Cluster for Extreme-Edge On-Chip DNN Inference and Training" was published by researchers at University of Bologna and ETH Zurich. Abstract "On-chip deep neural network (DNN) inference and training at the Extreme-Edge (TinyML) impose strict latency, throughput, accuracy, and flexibility requirements. Heterogeneous clus... » read more

Asynchronously Parallel Optimization Method For Sizing Analog Transistors Using Deep Neural Network Learning


A new technical paper titled "APOSTLE: Asynchronously Parallel Optimization for Sizing Analog Transistors Using DNN Learning" was published by researchers at UT Austin and Analog Devices. Abstract "Analog circuit sizing is a high-cost process in terms of the manual effort invested and the computation time spent. With rapidly developing technology and high market demand, bringing automated s... » read more

Review of Tools & Techniques for DL Edge Inference


A new technical paper titled "Efficient Acceleration of Deep Learning Inference on Resource-Constrained Edge Devices: A Review" was published in "Proceedings of the IEEE" by researchers at University of Missouri and Texas Tech University. Abstract: Successful integration of deep neural networks (DNNs) or deep learning (DL) has resulted in breakthroughs in many areas. However, deploying thes... » read more

Multiexpert Adversarial Regularization For Robust And Data-Efficient Deep Supervised Learning


Deep neural networks (DNNs) can achieve high accuracy when abundant training data from the same distribution as the test data is available. In practical applications, however, data deficiency is a common concern. For classification tasks, a shortage of labeled images in the training set often results in overfitting. Another issue is the mismatch between the training and test domains, which results…
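
Adversarial regularization in its simplest single-perturbation form can be sketched as below; this is a generic FGSM-style regularizer for intuition only and does not reproduce the paper's multiexpert scheme.

```python
# Generic adversarial regularization: add a loss term on perturbed inputs
# so the model stays stable under small input shifts (single expert, FGSM).
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(16, 32)                     # toy batch
y = torch.randint(0, 10, (16,))

x.requires_grad_(True)
loss_clean = F.cross_entropy(model(x), y)
grad, = torch.autograd.grad(loss_clean, x)
x_adv = (x + 0.1 * grad.sign()).detach()    # FGSM perturbation, eps = 0.1

# Total loss: clean term plus an adversarial regularization term.
loss = F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
opt.zero_grad()
loss.backward()
opt.step()
```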

Using Silicon Photonics To Reduce Latency On Edge Devices


A new technical paper titled "Delocalized photonic deep learning on the internet’s edge" was published by researchers at MIT and Nokia Corporation. “Every time you want to run a neural network, you have to run the program, and how fast you can run the program depends on how fast you can pipe the program in from memory. Our pipe is massive — it corresponds to sending a full feature-leng... » read more

Algorithm HW Framework That Minimizes Accuracy Degradation, Data Movement, And Energy Consumption Of DNN Accelerators (Georgia Tech)


A new research paper titled "An Algorithm-Hardware Co-design Framework to Overcome Imperfections of Mixed-signal DNN Accelerators" was published by researchers at Georgia Tech. According to the paper's abstract, "In recent years, processing in memory (PIM) based mixed-signal designs have been proposed as energy- and area-efficient solutions with ultra high throughput to accelerate DNN computations…"
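
A common ingredient in such algorithm-hardware co-design flows is noise-aware training: injecting a model of the accelerator's analog non-idealities into the forward pass so the network learns to tolerate them. The sketch below uses a generic multiplicative Gaussian weight-noise model, an assumption for illustration rather than the paper's actual framework.

```python
# Noise-aware training layer: perturb weights during training so the network
# becomes robust to mixed-signal non-idealities at inference time.
import torch

class NoisyLinear(torch.nn.Linear):
    def __init__(self, in_f, out_f, noise_std=0.02):
        super().__init__(in_f, out_f)
        self.noise_std = noise_std

    def forward(self, x):
        if self.training:  # inject multiplicative noise only while training
            w = self.weight * (1 + self.noise_std * torch.randn_like(self.weight))
        else:
            w = self.weight
        return torch.nn.functional.linear(x, w, self.bias)

layer = NoisyLinear(32, 10)
out = layer(torch.randn(4, 32))   # noisy forward pass in training mode
```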

ML Architecture for Solving the Inverse Problem for Matter Wave Lithography: LACENET


A recent technical paper titled "Realistic mask generation for matter-wave lithography via machine learning" was published by researchers at the University of Bergen (Norway). Abstract: "Fast production of large area patterns with nanometre resolution is crucial for the established semiconductor industry and for enabling industrial-scale production of next-generation quantum devices. Metastable…"
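
The paper trains a network to generate masks; as a hedged illustration of the underlying inverse problem, the sketch below instead optimizes a mask by gradient descent through a differentiable toy forward model, with a simple blur standing in for matter-wave propagation (the physics here is invented for the example).

```python
# Toy inverse problem: find a mask whose "printed" pattern matches a target,
# by differentiating through a stand-in forward model (a 5x5 blur).
import torch
import torch.nn.functional as F

target = torch.zeros(1, 1, 32, 32)
target[..., 12:20, 12:20] = 1.0                  # desired printed pattern

kernel = torch.ones(1, 1, 5, 5) / 25.0           # toy "propagation" operator
mask = torch.zeros(1, 1, 32, 32, requires_grad=True)
opt = torch.optim.Adam([mask], lr=0.1)

for step in range(200):
    printed = F.conv2d(torch.sigmoid(mask), kernel, padding=2)
    loss = F.mse_loss(printed, target)
    opt.zero_grad(); loss.backward(); opt.step()
print("final pattern error:", loss.item())
```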

DNN-Opt, A Novel Deep Neural Network (DNN) Based Black-Box Optimization Framework For Analog Sizing


This technical paper titled "DNN-Opt: An RL Inspired Optimization for Analog Circuit Sizing using Deep Neural Networks" is co-authored from researchers at The University of Texas at Austin, Intel, University of Glasgow. The paper was a best paper candidate at DAC 2021. "In this paper, we present DNN-Opt, a novel Deep Neural Network (DNN) based black-box optimization framework for analog sizi... » read more

Gemmini: Open-source, Full-Stack DNN Accelerator Generator (DAC Best Paper)


This technical paper titled "Gemmini: Enabling Systematic Deep-Learning Architecture Evaluation via Full-Stack Integration" was published jointly by researchers at UC Berkeley and a co-author from MIT.  The research was partially funded by DARPA and won DAC 2021 Best Paper. The paper presents Gemmini, "an open-source, full-stack DNN accelerator generator for DNN workloads, enabling end-to-e... » read more
