In-Memory Computing: Techniques for Error Detection and Correction


A new technical paper titled "Error Detection and Correction Codes for Safe In-Memory Computations" was published by researchers at Robert Bosch, Forschungszentrum Julich, and Newcastle University. Abstract "In-Memory Computing (IMC) introduces a new paradigm of computation that offers high efficiency in terms of latency and power consumption for AI accelerators. However, the non-idealities... » read more

Optimizing Event-Based Neural Network Processing For A Neuromorphic Architecture


A new technical paper titled "Optimizing event-based neural networks on digital neuromorphic architecture: a comprehensive design space exploration" was published by imec, TU Delft and University of Twente. Abstract "Neuromorphic processors promise low-latency and energy-efficient processing by adopting novel brain-inspired design methodologies. Yet, current neuromorphic solutions still str... » read more

Research Bits: Feb. 6


Laser printer for photonic circuits Researchers from the University of Washington and University of Maryland propose a faster, cheaper way to fabricate and reconfigure photonic integrated circuits. The method uses a laser writer to write, erase, and modify circuits in a thin film of phase-change material similar to what is used for recordable CDs and DVDs. The researchers say the method co... » read more

Research Bits: Jan. 23


Memristor-based Bayesian neural network Researchers from CEA-Leti, CEA-List, and CNRS built a complete memristor-based Bayesian neural network implementation for classifying types of arrhythmia recordings with precise aleatoric and epistemic uncertainty. While Bayesian neural networks are useful for sensory processing applications based on a small amount of noisy input data because they ... » read more
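As a rough illustration of the aleatoric/epistemic split mentioned above, the sketch below decomposes predictive entropy over Monte Carlo weight samples. The stochastic classifier is a generic stand-in, not the memristor-based implementation from the paper.

```python
# Minimal sketch of separating aleatoric and epistemic uncertainty in a Bayesian
# neural network via the common entropy decomposition over Monte Carlo samples.
# The sample probabilities below are made up for illustration.
import numpy as np

def predictive_uncertainty(sample_probs):
    """sample_probs: (n_samples, n_classes) softmax outputs, one per weight sample."""
    eps = 1e-12
    mean_p = sample_probs.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))              # predictive entropy
    aleatoric = -np.mean(np.sum(sample_probs * np.log(sample_probs + eps), axis=1))
    epistemic = total - aleatoric                                # mutual information
    return aleatoric, epistemic

# Confident, agreeing samples -> low epistemic uncertainty.
agree = np.tile([0.9, 0.05, 0.05], (20, 1))
# Confident but disagreeing samples -> high epistemic uncertainty.
disagree = np.array([[0.9, 0.05, 0.05], [0.05, 0.9, 0.05]] * 10)
print(predictive_uncertainty(agree))
print(predictive_uncertainty(disagree))
```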

Novel Neuromorphic Artificial Neural Network Circuit Architecture


A technical paper titled “Mosaic: in-memory computing and routing for small-world spike-based neuromorphic systems” was published by researchers at CEA-LETI, Université Grenoble Alpes, University of Zurich and ETH Zurich. Abstract: "The brain’s connectivity is locally dense and globally sparse, forming a small-world graph—a principle prevalent in the evolution of various species, sugg... » read more
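To make the "locally dense, globally sparse" idea concrete, here is a small Watts-Strogatz-style sketch that builds a ring of local links and rewires a few of them into long-range shortcuts. The parameters are illustrative and are not taken from the Mosaic paper.

```python
# Minimal sketch of small-world connectivity: mostly local edges on a ring
# lattice, with each local edge rewired to a random long-range target with
# probability p. Parameter values are arbitrary examples.
import random

def small_world(n=64, k=4, p=0.1, seed=0):
    """Return an edge set linking each node to its k nearest ring neighbours,
    with occasional rewired long-range shortcuts."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            target = (i + j) % n
            if rng.random() < p:                      # long-range shortcut
                target = rng.randrange(n)
                while target == i or (i, target) in edges or (target, i) in edges:
                    target = rng.randrange(n)
            edges.add((i, target))
    return edges

g = small_world()
print(len(g), "edges for 64 nodes")   # mostly local links plus a few shortcuts
```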

Memory Devices-Based Bayesian Neural Networks For Edge AI


A new technical paper titled "Bringing uncertainty quantification to the extreme-edge with memristor-based Bayesian neural networks" was published by researchers at Université Grenoble Alpes, CEA, LETI, and CNRS. Abstract: "Safety-critical sensory applications, like medical diagnosis, demand accurate decisions from limited, noisy data. Bayesian neural networks excel at such tasks, offering... » read more

Neural Network Model Quantization On Mobile


The general definition of quantization states that it is the process of mapping continuous infinite values to a smaller set of discrete finite values. In this blog, we will talk about quantization in the context of neural network (NN) models, as the process of reducing the precision of the weights, biases, and activations. Moving from floating-point representations to low-precision fixed intege... » read more
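As a concrete example of the precision reduction described above, the following sketch performs generic per-tensor affine quantization of float32 weights to int8. It is a simple illustration and is not tied to any particular mobile framework's quantization scheme.

```python
# Minimal sketch of affine (asymmetric) quantization of a float32 tensor to int8,
# mapping continuous values onto a small set of discrete levels via a scale and
# zero-point. Generic per-tensor example only.
import numpy as np

def quantize_int8(x):
    """Map float values to int8; return the quantized tensor plus its parameters."""
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(weights)
print("max abs error:", np.abs(weights - dequantize(q, scale, zp)).max())
```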

Application-Optimized Processors


Executing a neural network on top of an NPU requires an understanding of application requirements, such as latency and throughput, as well as the potential partitioning challenges. Sharad Chole, chief scientist and co-founder of Expedera, talks about fine-grained dependencies, why processing packets out of order can help optimize performance and power, and when to use voltage and frequency scal... » read more
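To show why issuing work out of order helps when dependencies are fine-grained, here is a toy ready-queue scheduler over a hypothetical packet dependency graph. It is purely an illustration of the general idea, not Expedera's scheduler.

```python
# Minimal sketch of out-of-order issue: a ready queue dispatches whichever
# "packet" has its inputs available instead of stalling in program order.
# The dependency graph below is hypothetical.
from collections import deque

# packet -> set of packets it depends on
deps = {"A": set(), "B": {"A"}, "C": set(), "D": {"B", "C"}, "E": set()}

def schedule_out_of_order(deps):
    remaining = {k: set(v) for k, v in deps.items()}
    done, order = set(), []
    ready = deque(sorted(k for k, d in remaining.items() if not d))
    while ready:
        pkt = ready.popleft()           # issue any packet whose inputs are ready
        order.append(pkt)
        done.add(pkt)
        for k, d in remaining.items():
            if k not in done and k not in ready and d <= done:
                ready.append(k)
    return order

print(schedule_out_of_order(deps))      # ['A', 'C', 'E', 'B', 'D']
```

An in-order schedule would stall on B while waiting for A; the ready queue instead keeps the engine busy with C and E.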

Thoughts On AI Consciousness


By Anda Ioana Enescu Buyruk and Catalin Tudor. The rapid advancement of artificial intelligence (AI) has sparked profound discussions regarding the possibility of AI systems achieving consciousness. Such a development carries immense implications, forcing us to redirect our focus from studying the behavior of other organisms to scrutinizing ourselves. This article will delve into the concept ... » read more

CNN Hardware Architecture With Weights Generator Module That Alleviates Impact Of The Memory Wall


A technical paper titled “Mitigating Memory Wall Effects in CNN Engines with On-the-Fly Weights Generation” was published by researchers at Samsung AI Center and University of Cambridge. Abstract: "The unprecedented accuracy of convolutional neural networks (CNNs) across a broad range of AI tasks has led to their widespread deployment in mobile and embedded settings. In a pursuit for high... » read more
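As a generic illustration of trading compute for memory traffic, the sketch below regenerates a layer's weights on the fly from a compact codebook instead of keeping the full tensor resident. The clustering scheme shown is a stand-in assumption, not the method used in the paper.

```python
# Minimal sketch of on-the-fly weight generation: store a small codebook plus
# per-weight indices, and expand the weight tensor just before the layer runs,
# reducing the bytes that must be streamed from memory. Generic example only.
import numpy as np

def compress(weights, n_clusters=16, iters=10):
    """Cluster weights into a codebook plus per-weight indices (1-D k-means)."""
    flat = weights.ravel()
    codebook = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(iters):
        idx = np.abs(flat[:, None] - codebook[None, :]).argmin(axis=1)
        for c in range(n_clusters):
            if np.any(idx == c):
                codebook[c] = flat[idx == c].mean()
    return codebook.astype(np.float32), idx.astype(np.uint8), weights.shape

def generate_on_the_fly(codebook, idx, shape):
    """Rebuild the weight tensor right before the layer executes."""
    return codebook[idx].reshape(shape)

w = np.random.randn(64, 3, 3, 3).astype(np.float32)
cb, idx, shape = compress(w)
w_hat = generate_on_the_fly(cb, idx, shape)
print("reconstruction error:", np.abs(w - w_hat).max())
print("stored bytes:", cb.nbytes + idx.nbytes, "vs", w.nbytes)
```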
