Real-Time Low Light Video Enhancement Using Neural Networks On Mobile


Video conferencing is a ubiquitous tool for communication, especially for remote work and social interactions. However, it is not always a straightforward plug-and-play experience, as adjustments may be needed to ensure a good audio and video setup. Lighting is one such factor that can be tricky to get right. A nicely illuminated video feed looks presentable in a meeting, but on the other hand,...

Characteristics and Potential HW Architectures for Neuro-Symbolic AI


A new technical paper titled "Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture" was published by researchers at Georgia Tech, UC Berkeley, and IBM Research. Abstract: "The remarkable advancements in artificial intelligence (AI), primarily driven by deep neural networks, are facing challenges surrounding unsustainable computational trajectories, li...

In Situ Backpropagation Strategy That Progressively Updates Neural Network Layers Directly in HW (TU Eindhoven)


A new technical paper titled "Hardware implementation of backpropagation using progressive gradient descent for in situ training of multilayer neural networks" was published by researchers at Eindhoven University of Technology. Abstract: "Neural network training can be slow and energy-expensive due to the frequent transfer of weight data between digital memory and processing units. Neuromorp...

Leveraging Machine Learning in Semiconductor Yield Analysis


Manually searching wafer maps for spatial patterns is not only very time-consuming, it is also prone to human oversight and error, and nearly impossible in a large fab where thousands of wafers are processed each day. We developed a tool that applies automatic spatial pattern detection algorithms using ML, parametrizing pattern recognition and clas...
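The tool described above is proprietary, but the first step of spatial pattern detection on a wafer map — grouping adjacent failing dice into clusters before any classification — can be sketched in plain Python. This is a toy illustration under assumed conventions (a binary die map where 1 marks a failing die), not the authors' algorithm:

```python
from collections import deque

def defect_clusters(wafer_map):
    """Group adjacent failing dice (value 1) on a binary wafer map
    into spatial clusters via breadth-first flood fill."""
    rows, cols = len(wafer_map), len(wafer_map[0])
    seen = [[False] * cols for _ in range(rows)]
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if wafer_map[r][c] == 1 and not seen[r][c]:
                seen[r][c] = True
                queue, cluster = deque([(r, c)]), []
                while queue:
                    cr, cc = queue.popleft()
                    cluster.append((cr, cc))
                    # Visit the four edge-adjacent neighbors.
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and wafer_map[nr][nc] == 1
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                clusters.append(cluster)
    return clusters
```

Cluster shape and size statistics from a pass like this are the kind of features a downstream classifier could use to label a map as, say, an edge ring versus a scratch.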

Lower Energy, High Performance LLM on FPGA Without Matrix Multiplication


A new technical paper titled "Scalable MatMul-free Language Modeling" was published by UC Santa Cruz, Soochow University, UC Davis, and LuxiTech. Abstract: "Matrix multiplication (MatMul) typically dominates the overall computational cost of large language models (LLMs). This cost only grows as LLMs scale to larger embedding dimensions and context lengths. In this work, we show that MatMul...
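The core idea the paper builds on is that with weights constrained to {-1, 0, +1}, a matrix-vector product needs no hardware multipliers: every "multiply" collapses into a signed add or a skip. A minimal sketch of that accumulation (illustrative only, not the authors' implementation; the function name is hypothetical):

```python
def ternary_matvec(w_rows, x):
    """Matrix-vector product where every weight is -1, 0, or +1,
    so each term is a signed addition rather than a multiplication."""
    out = []
    for row in w_rows:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:       # +1 weight: add the input
                acc += xi
            elif w == -1:    # -1 weight: subtract the input
                acc -= xi
            # 0 weight: skip entirely
        out.append(acc)
    return out
```

On an FPGA the same structure maps to adder trees, which is where the energy savings over MAC arrays come from.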

MCU Changes At The Edge


Microcontrollers are becoming a key platform for processing machine learning at the edge due to two significant changes. First, they can now include multiple cores, including some for high performance and others for low power, as well as other specialized processing elements such as neural network accelerators. Second, machine learning algorithms have been pruned to the point where inferencing ...

Research Bits: May 28


Nanofluidic memristive neural networks
Engineers from EPFL developed a functional nanofluidic memristive device that relies on ions, rather than electrons and holes, to compute and store data. “Memristors have already been used to build electronic neural networks, but our goal is to build a nanofluidic neural network that takes advantage of changes in ion concentrations, similar to living...

High-Level Synthesis Propels Next-Gen AI Accelerators


Everything around you is getting smarter. Artificial intelligence is not just a data center application but will be deployed in all kinds of embedded systems that we interact with daily. We expect to talk to and gesture at them. We expect them to recognize and understand us. And we expect them to operate with just a little bit of common sense. This intelligence is making these systems not just ...

Fundamental Issues In Computer Vision Still Unresolved


Given computer vision’s place as the cornerstone of an increasing number of applications, from ADAS to medical diagnosis and robotics, it is critical to mitigate its weak points, such as the difficulty of identifying corner cases and the risks of algorithms trained on shallow datasets. While well-known bloopers are often the result of human decisions, there are also fundamental technical issues that ...

Research Bits: April 30


Sound waves in optical neural networks
Researchers from the Max Planck Institute for the Science of Light and Massachusetts Institute of Technology found a way to build reconfigurable recurrent operators based on sound waves for photonic machine learning. They used light to create temporary acoustic waves in an optical fiber, which manipulate subsequent computational steps of an optical rec...
