What’s Next In Neuromorphic Computing


To integrate devices into functioning systems, it's necessary to consider what those systems are actually supposed to do. Regardless of the application, machine learning tasks involve a training phase and an inference phase. In the training phase, the system is presented with a large dataset and learns how to "correctly" analyze it. In supervised learning, the data... » read more
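The split between the two phases can be sketched in a few lines of Python. The model, synthetic data, and learning rate below are assumptions for illustration only, not anything described in the article; the point is simply that training fits weights to labeled data, while inference applies the fixed weights to new inputs.

```python
# Minimal sketch of supervised training followed by inference.
# Data, model, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Training phase: labeled examples are used to fit the weights.
X = rng.normal(size=(200, 2))               # 200 two-feature samples
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # "correct" labels
w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # logistic prediction
    grad = p - y                            # cross-entropy gradient
    w -= lr * X.T @ grad / len(y)
    b -= lr * grad.mean()

# Inference phase: the learned weights score previously unseen inputs.
X_new = rng.normal(size=(5, 2))
print((1.0 / (1.0 + np.exp(-(X_new @ w + b))) > 0.5).astype(int))
```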

New Nodes, Materials, Memories


Ellie Yieh, vice president and general manager of Advanced Product Technology Development at Applied Materials, and head of the company's Maydan Technology Center, sat down with Semiconductor Engineering to talk about challenges, changes and solutions at advanced nodes and with new applications. What follows are excerpts of that conversation. SE: How far can w... » read more

System Bits: Jan. 30


Lab-in-the-cloud
Although Internet-connected smart devices have penetrated numerous industries and private homes, the technological phenomenon has left the research lab largely untouched, according to MIT researchers. Spreadsheets, individual software programs, and even pens and paper remain standard tools for recording and sharing data in academic and industry labs — until now. TetraScie... » read more

System Bits: Jan. 23


Artificial synapse for “brain-on-a-chip” portable AI devices
In the emerging field of neuromorphic computing, researchers are attempting to design computer chips that work like the human brain. Instead of carrying out computations based on the binary, on/off signaling that digital chips use today, the elements of a brain-on-a-chip would work in an analog fashion, exchanging a gradient of... » read more
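The digital/analog contrast the researchers describe can be roughed out as follows. The conductance values and the on/off threshold are invented for illustration; real devices encode weights as physical conductances rather than array entries.

```python
# Rough sketch: binary (on/off) vs graded (analog) signaling at a synapse.
# The conductances and the 0.5 threshold are illustrative assumptions.
import numpy as np

inputs = np.array([0.2, 0.7, 0.4])         # presynaptic activity levels
conductances = np.array([0.9, 0.3, 0.6])   # analog synaptic weights

digital_out = float((inputs > 0.5).astype(float) @ conductances)  # inputs quantized first
analog_out = float(inputs @ conductances)                         # full gradient of strength

print(digital_out, analog_out)
```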

3D Neuromorphic Architectures


Matrix multiplication is a critical operation in conventional neural networks. Each node of the network receives an input signal, multiplies it by some predetermined weight, and passes the result to the next layer of nodes. While the nature of the signal, the method used to determine the weights, and the desired result will all depend on the specific application, the computational task is simpl... » read more
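As a concrete sketch (with arbitrary, made-up sizes and weights), one fully connected layer reduces to a single matrix multiply followed by an element-wise nonlinearity:

```python
# One dense layer as a matrix multiply: each output node is a weighted
# sum of its inputs. The sizes and weights here are placeholders.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)           # input signals arriving at the layer
W = rng.normal(size=(3, 4))      # predetermined weights, one row per output node
b = np.zeros(3)                  # per-node bias

h = np.maximum(W @ x + b, 0.0)   # weighted sums passed through a ReLU
print(h)                         # result handed to the next layer of nodes
```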

Toward Neuromorphic Designs


Part one of this series considered the mechanisms of learning and memory in biological brains. Each neuron has many fibers, which connect to adjacent neurons at synapses. The concentration of ions such as potassium and calcium inside the cell is different from the concentration outside. The cellular membrane thus serves as a capacitor. When a stimulus is received, the neuron releases neur... » read more
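That capacitor picture underlies the leaky integrate-and-fire model commonly used in neuromorphic work. The sketch below is an illustration under assumed values (the time constant, threshold, and input current are made up), not something taken from the article: the membrane voltage charges up under stimulus, leaks away otherwise, and a spike is emitted when it crosses a threshold.

```python
# Leaky integrate-and-fire sketch: the membrane behaves like a leaky
# capacitor that integrates incoming charge and fires past a threshold.
# Time constant, threshold, and input current are illustrative assumptions.
import numpy as np

dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0
v, spikes = 0.0, []

current = np.concatenate([np.full(50, 0.06), np.zeros(50)])  # stimulus, then silence
for t, i_in in enumerate(current):
    v += dt * (-v / tau + i_in)   # leak plus integration of input
    if v >= v_thresh:             # threshold crossing produces a spike
        spikes.append(t)
        v = v_reset

print(spikes)
```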

Verifying AI, Machine Learning


[getperson id="11306" comment="Raik Brinkmann"], president and CEO of [getentity id="22395" e_name="OneSpin Solutions"], sat down to talk about artificial intelligence, machine learning, and neuromorphic chips. What follows are excerpts of that conversation. SE: What's changing in [getkc id="305" kc_name="machine learning"]? Brinkmann: There’s a real push toward computing at the edge. ... » read more

Terminology Beyond von Neumann


Neural networks. Neuromorphic computing. Non-von Neumann architectures. As I’ve been researching my series on neuromorphic computing, I’ve encountered a lot of new terminology. It hasn’t always been easy to figure out exactly what’s being discussed. This explainer attempts to both clarify the terms used in my own articles and to help others sort through the rapidly growing literature in... » read more

What Happened To ReRAM?


Resistive RAM (ReRAM), one of a handful of next-generation memories under development, is finally gaining traction after years of setbacks. Fujitsu and Panasonic are jointly ramping up a second-generation ReRAM device. In addition, Crossbar is sampling a 40nm ReRAM technology, which is being made on a foundry basis by China’s SMIC. And not to be outdone, TSMC and UMC recently put ReRAM on ... » read more

What’s New At Hot Chips


By Jeff Dorsch & Ed Sperling
Machine learning, artificial intelligence and neuromorphic computing took center stage at Hot Chips 2017 this week, a significant change from years past, when the focus was on architectures that addressed improvements in speed and performance for standard compute problems. What is clear, given the focus of presentations, is that the bleeding edge of comput... » read more
