MEMprop: Gradient-based Learning To Train Fully Memristive SNNs


New technical paper titled "Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays" from IEEE researchers. Abstract: "We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs). Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes. These spikes emitted by memristive dynamics are anal... » read more
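The abstract is cut off here, so for readers unfamiliar with gradient-based SNN training in general, the sketch below shows the common surrogate-gradient pattern in PyTorch. The SpikeFn/lif_forward names, the fast-sigmoid surrogate, and all constants are illustrative assumptions, not the paper's MEMprop formulation.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * x.abs()) ** 2   # smooth stand-in for the spike's derivative

def lif_forward(inputs, w, beta=0.9, thresh=1.0):
    """Unroll a leaky integrate-and-fire layer over time; inputs: [T, batch, in]."""
    v = torch.zeros(inputs.shape[1], w.shape[1])
    spikes = []
    for x_t in inputs:
        v = beta * v + x_t @ w           # leaky integration of synaptic current
        s = SpikeFn.apply(v - thresh)    # non-differentiable spike, surrogate used in backward
        v = v - s * thresh               # soft reset after a spike
        spikes.append(s)
    return torch.stack(spikes)

# toy usage: gradients reach the weights through the surrogate
T, B, N_in, N_out = 20, 4, 8, 2
w = torch.randn(N_in, N_out, requires_grad=True)
out = lif_forward(torch.rand(T, B, N_in), w)
out.sum().backward()
print(w.grad.shape)
```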

Novel Spintronic Neuro-mimetic Device Emulating LIF Neuron Dynamics With High Energy Efficiency & Compact Footprint


New technical paper titled "Noise resilient leaky integrate-and-fire neurons based on multi-domain spintronic devices" from researchers at Purdue University. Abstract: "The capability of emulating neural functionalities efficiently in hardware is crucial for building neuromorphic computing systems. While various types of neuro-mimetic devices have been investigated, it remains challenging to... » read more
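As background on what a leaky integrate-and-fire (LIF) neuron does, here is a minimal discretized LIF model with injected current noise. The time constant, threshold, and noise level are generic illustrative assumptions, not measurements of the spintronic device described in the paper.

```python
import numpy as np

def lif_trace(current, dt=1e-3, tau=20e-3, v_rest=0.0, v_th=1.0, noise_std=0.05, seed=0):
    """Discretized LIF dynamics: tau * dV/dt = -(V - v_rest) + I(t).
    Gaussian current noise mimics a noisy device; a spike resets V to v_rest."""
    rng = np.random.default_rng(seed)
    v, spikes, trace = v_rest, [], []
    for i_t in current:
        i_noisy = i_t + rng.normal(0.0, noise_std)
        v += dt / tau * (-(v - v_rest) + i_noisy)    # leaky integration
        if v >= v_th:                                # threshold crossing -> spike
            spikes.append(True)
            v = v_rest                               # reset after firing
        else:
            spikes.append(False)
        trace.append(v)
    return np.array(trace), np.array(spikes)

trace, spikes = lif_trace(current=np.full(500, 1.2))
print(f"{spikes.sum()} spikes in 500 steps")
```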

Neuromorphic Chips & Power Demands


Research paper titled "A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware," from researchers at Graz University of Technology and Intel Labs. Abstract: "Spike-based neuromorphic hardware holds the promise to provide more energy efficient implementations of Deep Neural Networks (DNNs) than standard hardware such as GPUs. But this requires to understand how D... » read more

Memristive synaptic device based on a natural organic material—honey for spiking neural network in biodegradable neuromorphic systems


New academic paper from Washington State University, supported by a grant from the National Science Foundation. Abstract: "Spiking neural network (SNN) in future neuromorphic architectures requires hardware devices to be not only capable of emulating fundamental functionalities of biological synapse such as spike-timing dependent plasticity (STDP) and spike-rate dependent plasticity (SRDP),... » read more
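For readers unfamiliar with the synaptic behaviors named in the abstract, the sketch below implements the standard pair-based STDP learning window (potentiation when the post-synaptic spike follows the pre-synaptic one, depression otherwise). The amplitudes and time constants are textbook-style assumptions, not characteristics of the honey-based device.

```python
import numpy as np

def stdp_dw(dt_post_minus_pre, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window: potentiate when the post-synaptic spike follows the
    pre-synaptic spike (dt > 0), depress when it precedes it (dt < 0). Times in ms."""
    dt = np.asarray(dt_post_minus_pre, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

# weight change for a few pre/post spike-time differences (ms)
print(stdp_dw([-40, -10, 5, 30]))
```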

Always-On Sub-Microwatt Spiking Neural Network Based on Spike-Driven Clock- and Power-Gating for an Ultra-Low-Power Intelligent Device


Abstract: "This paper presents a novel spiking neural network (SNN) classifier architecture for enabling always-on artificial intelligent (AI) functions, such as keyword spotting (KWS) and visual wake-up, in ultra-low-power internet-of-things (IoT) devices. Such always-on hardware tends to dominate the power efficiency of an IoT device and therefore it is paramount to minimize its power diss... » read more

An Event-Driven and Fully Synthesizable Architecture for Spiking Neural Networks


Abstract:  "The development of brain-inspired neuromorphic computing architectures as a paradigm for Artificial Intelligence (AI) at the edge is a candidate solution that can meet strict energy and cost reduction constraints in the Internet of Things (IoT) application areas. Toward this goal, we present μBrain: the first digital yet fully event-driven without clock architecture, with co-lo... » read more

Benchmarking Highly Parallel Hardware for Spiking Neural Networks in Robotics


Abstract: "Animal brains still outperform even the most performant machines with significantly lower speed. Nonetheless, impressive progress has been made in robotics in the areas of vision, motion- and path planning in the last decades. Brain-inspired Spiking Neural Networks (SNN) and the parallel hardware necessary to exploit their full potential have promising features for robotic applica... » read more

Easier And Faster Ways To Train AI


Training an AI model takes an extraordinary amount of effort and data. Leveraging existing training can save time and money, accelerating the release of new products that use the model. But there are a few ways this can be done, most notably through transfer and incremental learning, and each of them has its applications and tradeoffs. Transfer learning and incremental learning both take pre... » read more
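As a concrete example of the transfer-learning half of that trade-off, the sketch below reuses a pretrained torchvision backbone and trains only a new classification head; the 10-class head, learning rate, and toy batch are assumptions for illustration. Incremental learning, by contrast, keeps updating the same model as new data arrives rather than freezing what was already learned.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning pattern: reuse a pretrained backbone, retrain only a new head.
backbone = models.resnet18(weights="IMAGENET1K_V1")     # ImageNet-pretrained feature extractor
for p in backbone.parameters():
    p.requires_grad = False                             # freeze the reused layers
backbone.fc = nn.Linear(backbone.fc.in_features, 10)    # new task-specific head (10 classes assumed)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(x), y)
loss.backward()                                         # gradients reach only the new head
optimizer.step()
```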

11 Ways To Reduce AI Energy Consumption


As the machine-learning industry evolves, the focus has expanded from merely solving the problem to solving the problem better. “Better” often has meant accuracy or speed, but as data-center energy budgets explode and machine learning moves to the edge, energy consumption has taken its place alongside accuracy and speed as a critical issue. There are a number of approaches to neural netw... » read more
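The article's full list of approaches is truncated above, so the snippet below simply shows one widely used lever, post-training dynamic quantization in PyTorch, as a representative example rather than the article's specific recommendations. Storing weights in int8 cuts memory traffic and arithmetic energy on backends that support it.

```python
import torch
import torch.nn as nn

# Post-training dynamic quantization of the Linear layers in a toy model.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(quantized(x).shape)        # same interface as before, int8 weights under the hood
```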

Making Sense Of New Edge-Inference Architectures


New edge-inference machine-learning architectures have been arriving at an astounding rate over the last year. Making sense of them all is a challenge. To begin with, not all ML architectures are alike. One of the complicating factors in understanding the different machine-learning architectures is the nomenclature used to describe them. You’ll see terms like “sea-of-MACs,” “systolic... » read more
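To make the "sea-of-MACs" and "systolic" vocabulary concrete, the toy model below computes a matrix product the way an output-stationary systolic array would: a grid of processing elements, each performing one multiply-accumulate per cycle on operands that arrive skewed in time. The dataflow choice and sizes are illustrative assumptions, not a description of any particular vendor's architecture.

```python
import numpy as np

def systolic_matmul(a, b):
    """Toy cycle-by-cycle model of an output-stationary systolic array: PE(i, j)
    keeps one accumulator, operands arrive skewed by (i + j) cycles, and every
    active PE performs exactly one multiply-accumulate (MAC) per cycle."""
    m, k = a.shape
    _, n = b.shape
    acc = np.zeros((m, n))
    last_cycle = (m - 1) + (n - 1) + (k - 1)          # cycle at which the final MAC fires
    for t in range(last_cycle + 1):
        for i in range(m):
            for j in range(n):
                step = t - i - j                      # which reduction element reaches PE(i, j) now
                if 0 <= step < k:
                    acc[i, j] += a[i, step] * b[step, j]
    return acc

a, b = np.random.rand(4, 6), np.random.rand(6, 3)
print(np.allclose(systolic_matmul(a, b), a @ b))      # prints True
```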
