Research Bits: Jan. 24


Transistor-free compute-in-memory: Researchers from the University of Pennsylvania, Sandia National Laboratories, and Brookhaven National Laboratory propose a transistor-free compute-in-memory (CIM) architecture to overcome memory bottlenecks and reduce power consumption in AI workloads. "Even when used in a compute-in-memory architecture, transistors compromise the access time of data," sai... » read more

Research Bits: Jan. 17


Ionic circuit for neural nets: Researchers at Harvard University and DNA Script developed an ionic circuit comprising hundreds of ionic transistors for neural net computing. While ions in water move slower than electrons in semiconductors, the team noted that the diversity of ionic species with different physical and chemical properties could be harnessed for more diverse information process... » read more

Arbitrary Precision DNN Accelerator Controlled by a RISC-V CPU (Ecole Polytechnique Montreal, IBM, Mila, CMC)


A new technical paper titled "BARVINN: Arbitrary Precision DNN Accelerator Controlled by a RISC-V CPU" was written by researchers at Ecole Polytechnique Montreal, IBM, Mila and CMC Microsystems. It was accepted for publication in the 2023, 28th Asia and South Pacific Design Automation Conference (ASP-DAC 2023) in Japan. Abstract: "We present a DNN accelerator that allows inference at arbitr... » read more

Achieving Greater Accuracy In Real-Time Vision Processing With Transformers


Transformers, first proposed in a Google research paper in 2017, were initially designed for natural language processing (NLP) tasks. Recently, researchers have applied transformers to vision applications with promising results. While vision tasks were previously dominated by convolutional neural networks (CNNs), transformers have proven surprisingly adaptable to vision tasks like image cl... » read more
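As a rough illustration of how a transformer is applied to images, the sketch below splits an image into patches, embeds each patch as a token, and runs one single-head self-attention layer. It is a minimal sketch under those assumptions, not the architecture of any model discussed in the article; real vision transformers add positional encodings, multi-head attention, MLP blocks, and a classification token.

```python
# Minimal sketch of the core vision-transformer idea: image patches become
# tokens that attend to one another. Shapes and names are illustrative only.
import numpy as np

def image_to_patches(img, patch=8):
    """(H, W, C) image -> (num_patches, patch*patch*C) token matrix."""
    h, w, c = img.shape
    rows = [img[i:i + patch, j:j + patch].reshape(-1)
            for i in range(0, h, patch) for j in range(0, w, patch)]
    return np.stack(rows)

def self_attention(tokens, wq, wk, wv):
    """Single-head scaled dot-product attention over the patch tokens."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax
    return weights @ v

img = np.random.rand(32, 32, 3)
tokens = image_to_patches(img) @ np.random.randn(8 * 8 * 3, 64)  # embed
out = self_attention(tokens, *[np.random.randn(64, 64) for _ in range(3)])
print(out.shape)  # (16, 64): one attended embedding per patch
```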

Neural Architecture & Hardware Accelerator Co-Design Framework (Princeton/ Stanford)


A new technical paper titled "CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework" was published by researchers at Princeton University and Stanford University. "Recently, automated co-design of machine learning (ML) models and accelerator architectures has attracted significant attention from both the industry and academia. However, most co-design frameworks either... » read more

Lithography Modeling: Data Augmentation Framework


A new technical paper titled "An Adversarial Active Sampling-based Data Augmentation Framework for Manufacturable Chip Design" was published by researchers at the University of Texas at Austin, Nvidia, and the California Institute of Technology. Abstract: "Lithography modeling is a crucial problem in chip design to ensure a chip design mask is manufacturable. It requires rigorous simulation... » read more

Using BDA To Predict SAQP Pitch Walk


A new technical paper titled "Bayesian dropout approximation in deep learning neural networks: analysis of self-aligned quadruple patterning" was published by researchers at IBM TJ Watson Research Center and Rensselaer Polytechnic Institute. Find the technical paper here. Published November 2022.  Open Access. Scott D. Halle, Derren N. Dunn, Allen H. Gabor, Max O. Bloomfield, and Mark Sh... » read more

New Class of Electrically Driven Optical Nonvolatile Memory


A new technical paper titled "Electrical Programmable Multi-Level Non-volatile Photonic Random-Access Memory" was published by researchers at George Washington University, Optelligence, MIT, and the University of Central Florida. Researchers demonstrate "a multi-state electrically-programmed low-loss non-volatile photonic memory based on a broadband transparent phase change material (Ge2Sb2S... » read more

Training An ML Model On An Intelligent Edge Device Using Less Than 256KB Memory


A new technical paper titled "On-Device Training Under 256KB Memory" was published by researchers at MIT and MIT-IBM Watson AI Lab. “Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a bro... » read more

Transistor-Free Compute-In-Memory Architecture


A new technical paper titled "Reconfigurable Compute-In-Memory on Field-Programmable Ferroelectric Diodes" was recently published by researchers at University of Pennsylvania, Sandia National Labs, and Brookhaven National Lab. The compute-in-memory design is different as it is completely transistor-free. “Even when used in a compute-in-memory architecture, transistors compromise the access... » read more
