In Situ Backpropagation Strategy That Progressively Updates Neural Network Layers Directly in HW (TU Eindhoven)


A new technical paper titled "Hardware implementation of backpropagation using progressive gradient descent for in situ training of multilayer neural networks" was published by researchers at Eindhoven University of Technology. Abstract "Neural network training can be slow and energy-expensive due to the frequent transfer of weight data between digital memory and processing units. Neuromorp... » read more

Leveraging Machine Learning in Semiconductor Yield Analysis


Manually searching wafer maps for spatial patterns is not only very time-consuming; it is also prone to human oversight and error, and nearly impossible in a large fab processing thousands of wafers a day. We developed a tool that applies automatic spatial pattern detection algorithms using ML, parametrizing pattern recognition and clas... » read more
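The kind of spatial-pattern detection described above can be illustrated with a minimal sketch. The wafer map is a 2D array where 1 marks a failing die, and simple radial fail-density features separate an edge-ring signature from a center cluster. The feature names, thresholds, and labels here are illustrative assumptions, not the tool's actual algorithm.

```python
import numpy as np

def classify_wafer_map(wmap):
    """Toy spatial-pattern classifier: compare fail density near the
    rim vs. near the center of the wafer map (thresholds illustrative)."""
    n = wmap.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    # Normalized distance of each die from the wafer center.
    r = np.hypot(yy - (n - 1) / 2, xx - (n - 1) / 2) / (n / 2)
    edge_rate = wmap[r > 0.8].mean()
    center_rate = wmap[r < 0.4].mean()
    if edge_rate > 2 * max(center_rate, 1e-9):
        return "edge-ring"
    if center_rate > 2 * max(edge_rate, 1e-9):
        return "center-cluster"
    return "random"

# Synthetic maps: failures concentrated at the rim vs. at the center.
n = 21
yy, xx = np.mgrid[0:n, 0:n]
r = np.hypot(yy - (n - 1) / 2, xx - (n - 1) / 2) / (n / 2)
edge_ring_map = ((r > 0.85) & (r <= 1.0)).astype(int)
center_map = (r < 0.3).astype(int)
```

A production tool would of course learn such patterns from labeled wafer maps rather than hand-code thresholds, but the feature idea (radial fail-density profiles) is the same.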

Lower Energy, High Performance LLM on FPGA Without Matrix Multiplication


A new technical paper titled "Scalable MatMul-free Language Modeling" was published by UC Santa Cruz, Soochow University, UC Davis, and LuxiTech. Abstract "Matrix multiplication (MatMul) typically dominates the overall computational cost of large language models (LLMs). This cost only grows as LLMs scale to larger embedding dimensions and context lengths. In this work, we show that MatMul... » read more
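One way to see how a language model can avoid MatMul: if weights are constrained to {-1, 0, +1} (ternary), a dense layer reduces to selection plus addition/subtraction, with no multiplications. The sketch below is a generic illustration of that idea, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def ternary_matmul(x, w_ternary):
    """Compute x @ w using only selection and summation.
    Entries of w_ternary are assumed to be in {-1, 0, +1}."""
    out = np.zeros((x.shape[0], w_ternary.shape[1]))
    for j in range(w_ternary.shape[1]):
        col = w_ternary[:, j]
        # Add inputs where the weight is +1, subtract where it is -1.
        out[:, j] = x[:, col == 1].sum(axis=1) - x[:, col == -1].sum(axis=1)
    return out

x = rng.standard_normal((4, 8))
w = rng.integers(-1, 2, size=(8, 3)).astype(float)
```

On hardware such as an FPGA, the missing multipliers translate directly into lower area and energy per token, which is the motivation the abstract points to.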

MCU Changes At The Edge


Microcontrollers are becoming a key platform for processing machine learning at the edge due to two significant changes. First, they can now include multiple cores, including some for high performance and others for low power, as well as other specialized processing elements such as neural network accelerators. Second, machine learning algorithms have been pruned to the point where inferencing ... » read more

Research Bits: May 28


Nanofluidic memristive neural networks Engineers from EPFL developed a functional nanofluidic memristive device that relies on ions, rather than electrons and holes, to compute and store data. “Memristors have already been used to build electronic neural networks, but our goal is to build a nanofluidic neural network that takes advantage of changes in ion concentrations, similar to living... » read more

High-Level Synthesis Propels Next-Gen AI Accelerators


Everything around you is getting smarter. Artificial intelligence is not just a data center application but will be deployed in all kinds of embedded systems that we interact with daily. We expect to talk to and gesture at them. We expect them to recognize and understand us. And we expect them to operate with just a little bit of common sense. This intelligence is making these systems not just ... » read more

Fundamental Issues In Computer Vision Still Unresolved


Given computer vision’s place as the cornerstone of an increasing number of applications from ADAS to medical diagnosis and robotics, it is critical that its weak points be mitigated, such as the inability to identify corner cases or the effects of training algorithms on shallow datasets. While well-known bloopers are often the result of human decisions, there are also fundamental technical issues that ... » read more

Research Bits: April 30


Sound waves in optical neural networks Researchers from the Max Planck Institute for the Science of Light and Massachusetts Institute of Technology found a way to build reconfigurable recurrent operators based on sound waves for photonic machine learning. They used light to create temporary acoustic waves in an optical fiber, which manipulate subsequent computational steps of an optical rec... » read more

In-Memory Computing: Techniques for Error Detection and Correction


A new technical paper titled "Error Detection and Correction Codes for Safe In-Memory Computations" was published by researchers at Robert Bosch, Forschungszentrum Jülich, and Newcastle University. Abstract "In-Memory Computing (IMC) introduces a new paradigm of computation that offers high efficiency in terms of latency and power consumption for AI accelerators. However, the non-idealities... » read more
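The underlying idea of protecting in-memory computations with error detection and correction can be shown with a classic single-error-correcting code. The Hamming(7,4) sketch below is a textbook illustration of the principle (a single bit flip, e.g. from device non-idealities, is located and corrected); the paper's actual codes for IMC are not reproduced here.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword.
    Parity bits sit at positions 1, 2, and 4 (1-based)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

IMC-specific codes must additionally survive the analog compute step itself, which is where such work departs from the classic memory-only setting.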

Optimizing Event-Based Neural Network Processing For A Neuromorphic Architecture


A new technical paper titled "Optimizing event-based neural networks on digital neuromorphic architecture: a comprehensive design space exploration" was published by imec, TU Delft and University of Twente. Abstract "Neuromorphic processors promise low-latency and energy-efficient processing by adopting novel brain-inspired design methodologies. Yet, current neuromorphic solutions still str... » read more
