Improving ML-Based Device Modeling Using Variational Autoencoder Techniques

A technical paper titled “Improving Semiconductor Device Modeling for Electronic Design Automation by Machine Learning Techniques” was published by researchers at Commonwealth Scientific and Industrial Research Organisation (CSIRO), Peking University, National University of Singapore, and University of New South Wales. Abstract: "The semiconductors industry benefits greatly from the integ...

Photonic-Electronic SmartNIC With Fast and Energy-Efficient Photonic Computing Cores (MIT)

A technical paper titled “Lightning: A Reconfigurable Photonic-Electronic SmartNIC for Fast and Energy-Efficient Inference” was published by researchers at Massachusetts Institute of Technology (MIT). Abstract: "The massive growth of machine learning-based applications and the end of Moore's law have created a pressing need to redesign computing platforms. We propose Lightning, the first ...

Low-Power Heterogeneous Compute Cluster For TinyML DNN Inference And On-Chip Training

A new technical paper titled "DARKSIDE: A Heterogeneous RISC-V Compute Cluster for Extreme-Edge On-Chip DNN Inference and Training" was published by researchers at University of Bologna and ETH Zurich. Abstract: "On-chip deep neural network (DNN) inference and training at the Extreme-Edge (TinyML) impose strict latency, throughput, accuracy, and flexibility requirements. Heterogeneous clus...

ISA and Microarchitecture Extensions Over Dense Matrix Engines to Support Flexible Structured Sparsity for CPUs (Georgia Tech, Intel Labs)

A technical paper titled "VEGETA: Vertically-Integrated Extensions for Sparse/Dense GEMM Tile Acceleration on CPUs" was published (preprint) by researchers at Georgia Tech and Intel Labs. Abstract: "Deep Learning (DL) acceleration support in CPUs has recently gained a lot of traction, with several companies (Arm, Intel, IBM) announcing products with specialized matrix engines accessible v...
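
VEGETA targets flexible structured (N:M) sparsity, where each group of M consecutive weights keeps at most N nonzeros so the hardware can skip the rest. A minimal sketch of the common 2:4 case (the helper `prune_2_4` and the sample weights are illustrative, not from the paper):

```python
import numpy as np

def prune_2_4(w):
    """Apply 2:4 structured sparsity: in every group of 4 weights,
    keep the 2 largest-magnitude entries and zero the other 2."""
    w = np.asarray(w, dtype=float)
    out = w.reshape(-1, 4).copy()
    # indices of the 2 smallest-magnitude entries in each group of 4
    drop = np.argsort(np.abs(out), axis=1)[:, :2]
    np.put_along_axis(out, drop, 0.0, axis=1)
    return out.reshape(w.shape)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.3, -0.8])
print(prune_2_4(w))  # each group of 4 keeps its 2 largest-|w| entries
```

A matrix engine aware of this pattern only needs to store and multiply the two surviving values per group plus their positions.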

Multiexpert Adversarial Regularization For Robust And Data-Efficient Deep Supervised Learning

Deep neural networks (DNNs) can achieve high accuracy when there is abundant training data that has the same distribution as the test data. In practical applications, data deficiency is often a concern. For classification tasks, the lack of enough labeled images in the training set often results in overfitting. Another issue is the mismatch between the training and the test domains, which resul...
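
One common way to attack the overfitting problem above is adversarial regularization: augmenting scarce training data with perturbed copies that increase the loss. A minimal sketch using a plain fast-gradient-sign perturbation on a logistic model (not the paper's multiexpert scheme; `fgsm_perturb`, `w`, and `eps` are illustrative):

```python
import numpy as np

def fgsm_perturb(x, y, w, eps=0.1):
    """Fast-gradient-sign perturbation of input x for a logistic model
    p = sigmoid(w . x) with 0/1 label y: nudge x in the direction that
    increases the cross-entropy loss."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    grad_x = (p - y) * w           # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)

# Training on both x and its perturbed copy acts as a regularizer.
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
x_adv = fgsm_perturb(x, 1, w)      # -> [0.4, 0.6]
```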

DNN-Opt, A Novel Deep Neural Network (DNN) Based Black-Box Optimization Framework For Analog Sizing

This technical paper titled "DNN-Opt: An RL Inspired Optimization for Analog Circuit Sizing using Deep Neural Networks" is co-authored by researchers at The University of Texas at Austin, Intel, and the University of Glasgow. The paper was a best paper candidate at DAC 2021. "In this paper, we present DNN-Opt, a novel Deep Neural Network (DNN) based black-box optimization framework for analog sizi...
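
In black-box analog sizing, the optimizer observes only a simulated cost for each candidate sizing, with no gradients. DNN-Opt learns a DNN critic in an RL-inspired loop to propose candidates; the sketch below shows only the generic black-box setting with plain random sampling (the toy cost function and bounds are made up):

```python
import numpy as np

def black_box_optimize(cost, lo, hi, n_iter=200, seed=0):
    """Generic black-box search over a box [lo, hi]^d: sample candidate
    sizings, keep the best one seen.  (DNN-Opt additionally trains a DNN
    critic to propose candidates; here we just sample uniformly.)"""
    rng = np.random.default_rng(seed)
    best_x, best_c = None, np.inf
    for _ in range(n_iter):
        cand = rng.uniform(lo, hi)
        c = cost(cand)
        if c < best_c:
            best_x, best_c = cand, c
    return best_x, best_c

# toy "circuit cost": distance of a 2-parameter sizing from a target
target = np.array([2.0, 0.5])
x, c = black_box_optimize(lambda s: np.sum((s - target) ** 2),
                          lo=np.zeros(2), hi=np.array([4.0, 1.0]))
```

In a real flow, `cost` would be a SPICE-level simulation, which is why sample efficiency (the point of the learned critic) matters so much.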

Gemmini: Open-source, Full-Stack DNN Accelerator Generator (DAC Best Paper)

This technical paper titled "Gemmini: Enabling Systematic Deep-Learning Architecture Evaluation via Full-Stack Integration" was published by researchers at UC Berkeley together with a co-author from MIT. The research was partially funded by DARPA and won the DAC 2021 Best Paper award. The paper presents Gemmini, "an open-source, full-stack DNN accelerator generator for DNN workloads, enabling end-to-e...

OverlapNet: Loop Closing for LiDAR-based SLAM

Abstract: "Simultaneous localization and mapping (SLAM) is a fundamental capability required by most autonomous systems. In this paper, we address the problem of loop closing for SLAM based on 3D laser scans recorded by autonomous cars. Our approach utilizes a deep neural network exploiting different cues generated from LiDAR data for finding loop closures. It estimates an image overlap gene...
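
As a toy illustration of the overlap quantity the network estimates, one can compare two LiDAR range images pixel by pixel (the tolerance and the tiny 2x2 images below are made up; the paper predicts overlap with a DNN rather than computing it this way):

```python
import numpy as np

def range_overlap(r1, r2, tol=1.0):
    """Overlap between two LiDAR range images: fraction of pixels that
    are valid in both scans (range > 0) and whose ranges differ by
    less than tol meters."""
    valid = (r1 > 0) & (r2 > 0)
    close = np.abs(r1 - r2) < tol
    return np.count_nonzero(valid & close) / r1.size

a = np.array([[5.0, 10.0], [0.0, 7.0]])   # 0 = no return
b = np.array([[5.4, 20.0], [3.0, 7.2]])
print(range_overlap(a, b))  # 2 of 4 pixels agree -> 0.5
```

A high overlap between the current scan and an older one is then a candidate loop closure.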

Mapping Transformation Enabled High-Performance and Low-Energy Memristor-Based DNNs

Abstract: "When deep neural network (DNN) is extensively utilized for edge AI (Artificial Intelligence), for example, the Internet of things (IoT) and autonomous vehicles, it makes CMOS (Complementary Metal Oxide Semiconductor)-based conventional computers suffer from overly large computing loads. Memristor-based devices are emerging as an option to conduct computing in memory for DNNs to make...
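
A basic mapping step for memristor crossbars is splitting signed weights across two non-negative conductance arrays, so that the difference of the two column currents reproduces W @ x. A hedged sketch of that standard differential-pair mapping (not necessarily the transformation proposed in the paper; `g_max` and the sample values are made up):

```python
import numpy as np

def to_conductance_pair(w, g_max=1.0):
    """Map signed weights onto two non-negative conductance arrays so
    that the crossbar current difference (G+ - G-) @ x, rescaled,
    reproduces W @ x.  Conductances are scaled into [0, g_max]."""
    scale = g_max / np.max(np.abs(w))
    g = w * scale
    g_pos = np.where(g > 0, g, 0.0)
    g_neg = np.where(g < 0, -g, 0.0)
    return g_pos, g_neg, scale

W = np.array([[0.5, -1.0], [2.0, 0.25]])
x = np.array([1.0, 3.0])
gp, gn, s = to_conductance_pair(W)
y = (gp - gn) @ x / s          # recovers W @ x
```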

NeuroSim Simulator for Compute-in-Memory Hardware Accelerator: Validation and Benchmark

Abstract: "Compute-in-memory (CIM) is an attractive solution to process the extensive workloads of multiply-and-accumulate (MAC) operations in deep neural network (DNN) hardware accelerators. A simulator with options of various mainstream and emerging memory technologies, architectures, and networks can be a great convenience for fast early-stage design space exploration of CIM hardw...
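
The MAC workload that CIM targets is a column-wise dot product whose analog result is digitized by an ADC. A toy model of that single step (NeuroSim models many more nonidealities and device options; `adc_bits` and the uniform full-range quantizer here are simplifying assumptions):

```python
import numpy as np

def cim_mac(weights, x, adc_bits=4):
    """One crossbar MAC step: the analog column sums W @ x are digitized
    by a uniform ADC with 2**adc_bits levels spanning the worst-case
    output range of the array."""
    analog = weights @ x
    full_scale = np.max(np.abs(weights).sum(axis=1)) * np.max(np.abs(x))
    step = 2 * full_scale / (2 ** adc_bits - 1)   # ADC step size
    return np.round(analog / step) * step

W = np.array([[1.0, 2.0], [3.0, -1.0]])
x = np.array([1.0, 1.0])
y = cim_mac(W, x)   # W @ x, rounded to the nearest ADC level
```

Sweeping `adc_bits` in a model like this is the kind of early design-space question such a simulator answers at full-network scale.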
