Physics Simulation With Graph Neural Networks Targeting Mobile


By Máté Stodulka and Tomas Zilhao Borges. The demand for immersive, realistic graphics in mobile gaming and AR/VR is pushing the limits of mobile hardware. Achieving lifelike simulations of fluids, cloth, and other materials has historically required intensive mathematical computation. While these traditional methods yield highly accurate results, they have been too resource-heavy to run re... » read more
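
The excerpt stops before the method, but learned simulators in this family typically encode particles as graph nodes, connect nearby particles with edges, and use message passing to predict per-particle accelerations. Below is a minimal NumPy sketch of one such message-passing step; all shapes, radii, and weights are invented for illustration and are not taken from the article.

# Minimal sketch of one GNN message-passing step for particle simulation.
# All weights are random stand-ins, not the model described in the article.
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Tiny two-layer MLP used for the edge and node update functions.
    return np.maximum(x @ w1 + b1, 0) @ w2 + b2

# Toy particle system: N particles with a few features each.
N, F, H = 32, 4, 16
nodes = rng.normal(size=(N, F))

# Connect particles within a radius (toy 2D positions taken from the features).
pos = nodes[:, :2]
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
senders, receivers = np.nonzero((dist < 0.8) & ~np.eye(N, dtype=bool))

# Random stand-in weights for the edge and node MLPs.
we1, be1 = rng.normal(size=(2 * F, H)), np.zeros(H)
we2, be2 = rng.normal(size=(H, H)), np.zeros(H)
wn1, bn1 = rng.normal(size=(F + H, H)), np.zeros(H)
wn2, bn2 = rng.normal(size=(H, 2)), np.zeros(2)   # decode to 2D acceleration

# 1) Edge update: compute a message for each sender/receiver pair.
edge_in = np.concatenate([nodes[senders], nodes[receivers]], axis=-1)
messages = mlp(edge_in, we1, be1, we2, be2)

# 2) Aggregate messages per receiving particle (sum).
agg = np.zeros((N, H))
np.add.at(agg, receivers, messages)

# 3) Node update: predict a per-particle acceleration the integrator would use.
accel = mlp(np.concatenate([nodes, agg], axis=-1), wn1, bn1, wn2, bn2)
print("predicted accelerations:", accel.shape)   # (32, 2)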

The Optical Implementation of Backpropagation (Oxford, Lumai)


A technical paper titled "Training neural networks with end-to-end optical backpropagation" was published by researchers at the University of Oxford and Lumai Ltd. Abstract: "Optics is an exciting route for the next generation of computing hardware for machine learning, promising several orders of magnitude enhancement in both computational speed and energy efficiency. However, reaching the full... » read more
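
For reference, the computation the optical hardware is being asked to carry out is ordinary backpropagation. The sketch below is a plain digital two-layer example of that forward/backward/update loop, not the paper's optical implementation.

# Digital reference for the training loop an optical backpropagation system
# would realize in hardware; a minimal sketch, not the paper's method.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4))            # batch of inputs
y = rng.normal(size=(8, 2))            # regression targets
W1, W2 = rng.normal(size=(4, 6)) * 0.1, rng.normal(size=(6, 2)) * 0.1

for step in range(100):
    # Forward pass (optically: light propagating through the layers).
    h = np.tanh(x @ W1)
    y_hat = h @ W2
    err = y_hat - y

    # Backward pass: the error signal propagates in reverse through the same
    # layers, which is the step the paper performs end-to-end in optics.
    dW2 = h.T @ err
    dh = (err @ W2.T) * (1 - h ** 2)   # tanh derivative
    dW1 = x.T @ dh

    # Gradient descent update.
    W1 -= 0.01 * dW1
    W2 -= 0.01 * dW2

print("final MSE:", float((err ** 2).mean()))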

What Scares Chip Engineers About Generative AI


Experts At The Table: LLMs and other generative AI programs are a long way from being able to design entire chips from scratch on their own, but the emergence of the technology has still raised some genuine concerns. Semiconductor Engineering sat down with a panel of experts, which included Rod Metcalfe, product management group director at Cadence; Syrus Ziai, vice-president of engineering at E... » read more

Real-Time Low Light Video Enhancement Using Neural Networks On Mobile


Video conferencing is a ubiquitous tool for communication, especially for remote work and social interactions. However, it is not always a straightforward plug-and-play experience, as adjustments may be needed to ensure a good audio and video setup. Lighting is one such factor that can be tricky to get right. A nicely illuminated video feed looks presentable in a meeting, but on the other hand,... » read more
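
The excerpt ends before the technique, but the general shape of such a system is a per-frame enhancement model run inside a real-time loop. In the sketch below, a simple gamma/gain curve stands in for the neural network the article describes.

# Shape of a real-time per-frame enhancement loop; the model here is a
# placeholder curve adjustment, not the network described in the article.
import numpy as np

def enhance_frame(frame, gamma=0.6, gain=1.2):
    # Placeholder low-light enhancement: lift shadows with a gamma curve,
    # apply gain, clamp back to the valid range.
    img = frame.astype(np.float32) / 255.0
    out = np.clip((img ** gamma) * gain, 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)

def run_pipeline(frames):
    # Mobile deployments typically process each frame (or a downscaled copy)
    # within a fixed per-frame latency budget to stay real time.
    for frame in frames:
        yield enhance_frame(frame)

# Toy "video": three dark random frames.
frames = [np.random.randint(0, 60, size=(240, 320, 3), dtype=np.uint8) for _ in range(3)]
for out in run_pipeline(frames):
    print("enhanced frame mean brightness:", round(float(out.mean()), 1))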

Characteristics and Potential HW Architectures for Neuro-Symbolic AI


A new technical paper titled "Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture" was published by researchers at Georgia Tech, UC Berkeley, and IBM Research. Abstract: "The remarkable advancements in artificial intelligence (AI), primarily driven by deep neural networks, are facing challenges surrounding unsustainable computational trajectories, li... » read more
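
As a toy illustration of the workload split the paper characterizes, the snippet below pairs a neural perception component (a random linear map standing in for a trained network) with a symbolic component (a digit-sum rule evaluated exactly). Everything in it is invented for illustration, not taken from the paper.

# Neuro-symbolic toy: dense neural inference feeding discrete rule evaluation,
# the two very different compute patterns the paper profiles.
import numpy as np

rng = np.random.default_rng(2)
W_percep = rng.normal(size=(16, 10))   # stand-in for trained network weights

def neural_perception(image):
    # Stand-in for a neural network: map a 16-pixel "image" to digit probabilities.
    logits = image @ W_percep
    e = np.exp(logits - logits.max())
    return e / e.sum()

def symbolic_sum_rule(p_a, p_b, target):
    # Symbolic part: probability that digit_a + digit_b == target, computed by
    # enumerating the discrete rule over the neural outputs.
    return sum(p_a[i] * p_b[j] for i in range(10) for j in range(10) if i + j == target)

p1 = neural_perception(rng.normal(size=16))
p2 = neural_perception(rng.normal(size=16))
print("P(a + b == 7) =", round(float(symbolic_sum_rule(p1, p2, 7)), 4))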

In Situ Backpropagation Strategy That Progressively Updates Neural Network Layers Directly in HW (TU Eindhoven)


A new technical paper titled "Hardware implementation of backpropagation using progressive gradient descent for in situ training of multilayer neural networks" was published by researchers at Eindhoven University of Technology. Abstract: "Neural network training can be slow and energy-expensive due to the frequent transfer of weight data between digital memory and processing units. Neuromorp... » read more
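
The circuit-level details are not in the excerpt; the sketch below shows one plausible software reading of progressive, layer-at-a-time updates, where the forward pass is re-evaluated (as the hardware would be re-measured) before each layer's weights are adjusted. It is an assumption-laden illustration, not the paper's scheme.

# One possible reading of "progressive" in situ updates: layers are adjusted
# one at a time, output layer first, and the network is re-evaluated after
# each adjustment so later steps always see the current device state.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(16, 8))
y = rng.normal(size=(16, 3))
weights = [rng.normal(size=(8, 8)) * 0.1, rng.normal(size=(8, 3)) * 0.1]

def forward(x, weights):
    acts = [x]
    for W in weights[:-1]:
        acts.append(np.tanh(acts[-1] @ W))   # hidden layers
    acts.append(acts[-1] @ weights[-1])      # linear output layer
    return acts

for epoch in range(50):
    for k in reversed(range(len(weights))):
        acts = forward(x, weights)           # "re-measure" the in situ network
        err = acts[-1] - y
        # Propagate the error only as far back as layer k.
        delta = err
        for j in range(len(weights) - 1, k, -1):
            delta = (delta @ weights[j].T) * (1 - acts[j] ** 2)
        weights[k] -= 0.01 * acts[k].T @ delta

print("final MSE:", float(((forward(x, weights)[-1] - y) ** 2).mean()))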

Leveraging Machine Learning in Semiconductor Yield Analysis


Manually searching wafer maps for spatial patterns is not only very time-consuming, it is also prone to human oversight and error, and it is nearly impossible in a large fab where thousands of wafers are processed every day. We developed a tool that applies automatic spatial pattern detection algorithms using ML, parametrizing pattern recognition and clas... » read more
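
As a toy illustration of the idea (not the tool itself), each wafer's binary fail map can be reduced to a few spatial features and classified against known pattern classes. Everything below, from the feature set to the nearest-centroid classifier, is an invented stand-in.

# Toy wafer-map pattern detection: spatial features plus a simple classifier.
import numpy as np

rng = np.random.default_rng(4)
SIZE = 32
yy, xx = np.mgrid[:SIZE, :SIZE]
r = np.hypot(yy - SIZE / 2, xx - SIZE / 2) / (SIZE / 2)
on_wafer = r <= 1.0

def features(fail_map):
    # Simple spatial statistics: overall fail rate, edge density, center density.
    fails = fail_map & on_wafer
    total = fails.sum() / on_wafer.sum()
    edge = fails[(r > 0.8) & on_wafer].mean()
    center = fails[r < 0.4].mean()
    return np.array([total, edge, center])

def make_wafer(kind):
    # Synthetic fail maps: random background plus an optional spatial signature.
    base = rng.random((SIZE, SIZE)) < 0.03
    if kind == "edge_ring":
        base |= (r > 0.85) & (rng.random((SIZE, SIZE)) < 0.6)
    elif kind == "center":
        base |= (r < 0.3) & (rng.random((SIZE, SIZE)) < 0.6)
    return base & on_wafer

# Nearest-centroid "classifier" fitted on a handful of labeled wafers per class.
classes = ["random", "edge_ring", "center"]
centroids = {c: np.mean([features(make_wafer(c)) for _ in range(20)], axis=0) for c in classes}

test = make_wafer("edge_ring")
pred = min(classes, key=lambda c: np.linalg.norm(features(test) - centroids[c]))
print("predicted pattern:", pred)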

Lower Energy, High Performance LLM on FPGA Without Matrix Multiplication


A new technical paper titled "Scalable MatMul-free Language Modeling" was published by UC Santa Cruz, Soochow University, UC Davis, and LuxiTech. Abstract: "Matrix multiplication (MatMul) typically dominates the overall computational cost of large language models (LLMs). This cost only grows as LLMs scale to larger embedding dimensions and context lengths. In this work, we show that MatMul... » read more
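
The building block that removes the multiplications in this line of work is constraining weights to the ternary set {-1, 0, +1}, so a dense layer reduces to additions and subtractions. The sketch below shows only that equivalence; it is not the paper's full architecture, which also reworks the attention mechanism.

# Ternary-weight dense layer: accumulation only, no multiplier array needed.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=16)                          # input activations
W = rng.integers(-1, 2, size=(16, 8))            # ternary weights in {-1, 0, +1}

def ternary_dense(x, W):
    # For each output: add inputs where the weight is +1, subtract where it is
    # -1, and skip where it is 0.
    out = np.zeros(W.shape[1])
    for j in range(W.shape[1]):
        out[j] = x[W[:, j] == 1].sum() - x[W[:, j] == -1].sum()
    return out

# Same result as a conventional matmul with the same ternary weights.
print(np.allclose(ternary_dense(x, W), x @ W))   # True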

MCU Changes At The Edge


Microcontrollers are becoming a key platform for processing machine learning at the edge due to two significant changes. First, they can now include multiple cores, including some for high performance and others for low power, as well as other specialized processing elements such as neural network accelerators. Second, machine learning algorithms have been pruned to the point where inferencing ... » read more
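
A small sketch of the model-side half of that story, pruning plus 8-bit quantization, with all sizes and thresholds chosen here purely for illustration.

# Magnitude pruning and int8 quantization: the typical levers for fitting
# inference into MCU memory and compute budgets.
import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(size=(128, 64)).astype(np.float32)   # a toy layer's weights

# 1) Magnitude pruning: zero out the 80% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.8)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Post-training int8 quantization of the surviving weights.
scale = np.abs(W_pruned).max() / 127.0
W_int8 = np.round(W_pruned / scale).astype(np.int8)

# On the MCU, inference would use W_int8 and rescale the accumulator once.
x = rng.normal(size=128).astype(np.float32)
y_ref = x @ W_pruned
y_q = (x @ W_int8.astype(np.float32)) * scale

print("nonzero weights kept:", int((W_int8 != 0).sum()), "of", W.size)
print("max abs quantization error:", float(np.abs(y_ref - y_q).max()))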

Research Bits: May 28


Nanofluidic memristive neural networks: Engineers from EPFL developed a functional nanofluidic memristive device that relies on ions, rather than electrons and holes, to compute and store data. “Memristors have already been used to build electronic neural networks, but our goal is to build a nanofluidic neural network that takes advantage of changes in ion concentrations, similar to living... » read more
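
For context on how any memristive array computes (the nanofluidic device changes the storage mechanism, not the math): with input voltages on the rows and programmable conductances at the crosspoints, the column currents form a vector-matrix product via Ohm's law and Kirchhoff's current law. The values below are arbitrary illustrations, not measurements from the EPFL device.

# Analog vector-matrix multiply in a memristive crossbar.
import numpy as np

rng = np.random.default_rng(7)
G = rng.uniform(0.1, 1.0, size=(4, 3))   # crosspoint conductances, i.e. the "weights"
V = np.array([0.2, 0.5, 0.1, 0.8])       # input voltages applied to the rows

# Each column current is the sum over rows of V_i * G_ij: one analog MAC per column.
I = V @ G
print("column currents (A):", np.round(I, 3))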
