Co-Design View of Cross-Bar Based Compute-In-Memory


A new review paper titled "Compute in-Memory with Non-Volatile Elements for Neural Networks: A Review from a Co-Design Perspective" was published by researchers at Argonne National Lab, Purdue University, and Indian Institute of Technology Madras. "With an over-arching co-design viewpoint, this review assesses the use of cross-bar based CIM for neural networks, connecting the material proper...
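As background on the technique the review covers: a resistive crossbar computes a matrix-vector product in a single analog step. The sketch below is a hypothetical pure-Python illustration (not taken from the paper) of that ideal behavior, where cell conductances store the weights and row voltages carry the inputs.

```python
# Ideal crossbar model: conductances g[i][j] encode the weight matrix,
# row voltages v[i] encode the input vector, and each column current is
# I_j = sum_i v[i] * g[i][j] (Ohm's law plus Kirchhoff's current law).

def crossbar_mvm(v, g):
    """Column currents of an ideal crossbar for row voltages v and conductances g."""
    cols = len(g[0])
    return [sum(v[i] * g[i][j] for i in range(len(v))) for j in range(cols)]

# A 3x2 crossbar: 3 input rows, 2 output columns.
g = [[0.5, 1.0],
     [0.2, 0.4],
     [1.0, 0.1]]
v = [1.0, 2.0, 0.5]

print(crossbar_mvm(v, g))  # two column currents, i.e. one matrix-vector product
```

A real device adds non-idealities the review discusses (wire resistance, limited conductance states, read noise); this sketch captures only the ideal multiply-accumulate.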

Energy-Efficient Execution Scheme For Dynamic Neural Networks on Heterogeneous MPSoCs


A technical paper titled "Map-and-Conquer: Energy-Efficient Mapping of Dynamic Neural Nets onto Heterogeneous MPSoCs" was published (preprint) by researchers at LAMIH/UMR CNRS, Universite Polytechnique Hauts-de-France, and UC Irvine. Abstract: "Heterogeneous MPSoCs comprise diverse processing units of varying compute capabilities. To date, the mapping strategies of neural networks (NNs) onto ...

Review of Methods to Design Secure Memristor Computing Systems


A technical paper titled "Review of security techniques for memristor computing systems" was published by researchers at Israel Institute of Technology, Friedrich Schiller University Jena (Germany), and Leibniz Institute of Photonic Technology (IPHT). Abstract: "Neural network (NN) algorithms have become the dominant tool in visual object recognition, natural language processing, and robotic...

Simulating Reality: The Importance Of Synthetic Data In AI/ML Systems For Radar Applications


Artificial intelligence and machine learning (AI/ML) are driving the development of next-generation radar perception. However, these AI/ML-based perception models require enough data to learn patterns and relationships to make accurate predictions on new, unseen data and scenarios. In the field of radar applications, the data used to train these models is often collected from real-world meas...

Research Bits: Jan. 24


Transistor-free compute-in-memory
Researchers from the University of Pennsylvania, Sandia National Laboratories, and Brookhaven National Laboratory propose a transistor-free compute-in-memory (CIM) architecture to overcome memory bottlenecks and reduce power consumption in AI workloads. "Even when used in a compute-in-memory architecture, transistors compromise the access time of data," sai...

Research Bits: Jan. 17


Ionic circuit for neural nets
Researchers at Harvard University and DNA Script developed an ionic circuit comprising hundreds of ionic transistors for neural net computing. While ions in water move slower than electrons in semiconductors, the team noted that the diversity of ionic species with different physical and chemical properties could be harnessed for more diverse information process...

Arbitrary Precision DNN Accelerator Controlled by a RISC-V CPU (Ecole Polytechnique Montreal, IBM, Mila, CMC)


A new technical paper titled "BARVINN: Arbitrary Precision DNN Accelerator Controlled by a RISC-V CPU" was written by researchers at Ecole Polytechnique Montreal, IBM, Mila, and CMC Microsystems. It was accepted for publication at the 28th Asia and South Pacific Design Automation Conference (ASP-DAC 2023) in Japan. Abstract: "We present a DNN accelerator that allows inference at arbitr...

Achieving Greater Accuracy In Real-Time Vision Processing With Transformers


Transformers, first proposed in a Google research paper in 2017, were initially designed for natural language processing (NLP) tasks. Recently, researchers applied transformers to vision applications and got interesting results. While vision tasks had previously been dominated by convolutional neural networks (CNNs), transformers have proven surprisingly adaptable to vision tasks like image cl...
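The key adaptation that lets a transformer process images is treating an image as a sequence of flattened patches, analogous to the word tokens of an NLP model. The following is a minimal, hypothetical illustration (not from the article) of that patching step alone, before any attention layers:

```python
# A vision transformer splits an H x W image into fixed-size patches and
# flattens each patch into a vector; these vectors become the token
# sequence fed to the transformer, just as word embeddings are in NLP.

def image_to_patches(image, patch):
    """Split an H x W image (list of rows) into flattened patch tokens."""
    h, w = len(image), len(image[0])
    tokens = []
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            tokens.append([image[top + r][left + c]
                           for r in range(patch)
                           for c in range(patch)])
    return tokens

# A 4x4 "image" with 2x2 patches yields a sequence of 4 tokens, each length 4.
img = [[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]]
print(image_to_patches(img, 2))
# [[0, 1, 4, 5], [2, 3, 6, 7], [8, 9, 12, 13], [10, 11, 14, 15]]
```

In a full vision transformer, each patch vector would then be linearly projected and given a positional embedding before self-attention is applied; this sketch covers only the tokenization idea.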

Neural Architecture & Hardware Accelerator Co-Design Framework (Princeton/Stanford)


A new technical paper titled "CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework" was published by researchers at Princeton University and Stanford University. "Recently, automated co-design of machine learning (ML) models and accelerator architectures has attracted significant attention from both the industry and academia. However, most co-design frameworks either...

Lithography Modeling: Data Augmentation Framework


A new technical paper titled "An Adversarial Active Sampling-based Data Augmentation Framework for Manufacturable Chip Design" was published by researchers at the University of Texas at Austin, Nvidia, and the California Institute of Technology. Abstract: "Lithography modeling is a crucial problem in chip design to ensure a chip design mask is manufacturable. It requires rigorous simulation...
