Manufacturing Bits: April 21

Memristors reemerge; domain walls; hyper-dimensional nets.


Memristors reappear
The University of Massachusetts Amherst has taken a step toward the realization of neuromorphic computing–it has devised bio-voltage memristors based on protein nanowires.

In neuromorphic computing, the idea is to bring the memory closer to the processing tasks to speed up a system. For this, the industry is attempting to replicate the brain in silicon. The goal is to mimic the way information moves from one group of neurons to another using precisely timed pulses.
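
To see what that pulse-based signaling looks like in practice, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard building block in neuromorphic models. The parameter values are illustrative placeholders, not taken from any of the systems described here.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane voltage
# integrates incoming pulses and emits a spike when it crosses a
# threshold. All parameter values are illustrative only.

def simulate_lif(input_spikes, v_rest=0.0, v_thresh=1.0,
                 leak=0.9, weight=0.4):
    """Return the output spike train (1 = spike) for a binary input train."""
    v = v_rest
    output = []
    for s in input_spikes:
        v = leak * v + weight * s      # leaky integration of the pulse
        if v >= v_thresh:              # threshold crossing -> spike
            output.append(1)
            v = v_rest                 # reset after firing
        else:
            output.append(0)
    return output

print(simulate_lif([1, 1, 1, 0, 1, 1, 1, 0]))
# -> [0, 0, 1, 0, 0, 0, 1, 0]: the neuron fires only after enough
#    closely spaced input pulses accumulate.
```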

Neuromorphic computing is not the only way to bring memory closer to the processing tasks in a system; there are also several traditional software and hardware approaches. All of these technologies are trying to handle various workloads using machine learning. A subset of AI, machine learning utilizes a neural network to crunch data and identify patterns. It matches certain patterns and learns which of those attributes are important.

Meanwhile, in 2008, Hewlett-Packard developed one type of device called the memristor. A form of ReRAM, a memristor is a passive two-terminal electronic device that exhibits memristance: if the flow of charge is stopped by turning off the applied voltage, the component remembers its last resistance.
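
That history-dependent resistance can be captured in a few lines of code. The sketch below follows the widely cited linear ion-drift model associated with the HP work; the device constants are illustrative placeholders, not measured values.

```python
import numpy as np

# Linear ion-drift memristor model: the device behaves as a series
# combination of a doped (low-resistance) and an undoped
# (high-resistance) region. The state variable w in [0, 1] is the
# normalized width of the doped region and drifts with the charge
# that flows through the device. All constants are illustrative.

R_ON, R_OFF = 100.0, 16e3   # fully doped / fully undoped resistance (ohms)
MU = 1e4                    # drift coefficient (1/coulomb), illustrative

def step(w, v, dt):
    """Advance the state w by one time step under applied voltage v."""
    r = R_ON * w + R_OFF * (1.0 - w)               # current resistance
    i = v / r                                      # Ohm's law
    w = float(np.clip(w + MU * i * dt, 0.0, 1.0))  # charge-driven drift
    return w, i

# Drive with a sine wave; the I-V curve traces a pinched hysteresis
# loop. When the voltage is removed, w (and thus the resistance) is
# retained: that retention is the "memory" in memristor.
w = 0.1
for t in np.linspace(0.0, 1.0, 1000):
    w, i = step(w, np.sin(2 * np.pi * t), dt=1e-3)
print(f"final state w = {w:.3f}, "
      f"remembered resistance = {R_ON * w + R_OFF * (1 - w):.0f} ohms")
```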

The industry is still working on memristors and related types of memory. But memristors never lived up to their promise and are still on the drawing board; the technology more or less flopped.

In the latest development in the arena, the University of Massachusetts Amherst has demonstrated a diffusive memristor based on protein nanowires. The nanowires were harvested from the bacterium Geobacter sulfurreducens and function at biological voltages of 40mV to 100mV, according to researchers writing in the journal Nature Communications.

Geobacter sulfurreducens is a rod-shaped microbe, a type of bacterium that is able to conduct electricity.

Using this microbe, researchers developed two memristor devices, planar and vertical. In the planar version, two gold electrodes were fabricated and positioned separately on a substrate. A thin protein nanowire structure connected the two electrodes.

In the vertical version, a protein nanowire was sandwiched between two electrodes. In either case, the memristors functioned at biological voltages, according to researchers. “This is the first time that a device can function at the same voltage level as the brain. People probably didn’t even dare to hope that we could create a device that is as power-efficient as the biological counterparts in a brain, but now we have realistic evidence of ultra-low power computing capabilities. It’s a concept breakthrough and we think it’s going to cause a lot of exploration in electronics that work in the biological voltage regime,” said Jun Yao, a researcher at the University of Massachusetts Amherst.

“You can modulate the conductivity, or the plasticity of the nanowire-memristor synapse so it can emulate biological components for brain-inspired computing. Compared to a conventional computer, this device has a learning capability that is not software-based,” Yao said.
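	
To make the plasticity analogy concrete, the toy sketch below treats a memristive device's conductance as a synaptic weight that voltage pulses strengthen or weaken. It is a generic illustration with made-up values, not a model of the Amherst device.

```python
# Toy memristive synapse: the device conductance acts as the synaptic
# weight and is potentiated or depressed by voltage pulses, loosely
# mimicking biological plasticity. All values are illustrative only.

class MemristiveSynapse:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.05):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def apply_pulse(self, polarity):
        """+1 pulse potentiates (raises conductance), -1 depresses it."""
        self.g = min(self.g_max,
                     max(self.g_min, self.g + polarity * self.rate))

    def transmit(self, v):
        """Output current for an input voltage, weighted by conductance."""
        return self.g * v

syn = MemristiveSynapse()
for _ in range(5):
    syn.apply_pulse(+1)   # repeated activity strengthens the connection
print(f"conductance after potentiation: {syn.g:.2f}")
print(f"response to a 50 mV input: {syn.transmit(0.05):.4f}")
```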

Magnetic domain walls
The University of Texas at Austin has developed another approach for neuromorphic computing–magnetic domain wall racetracks.

Researchers developed an array of magnetic nanowires based on magnetic tunnel junction (MTJ) devices. Used in STT-MRAM, an MTJ consists of two ferromagnets separated by a thin insulator. In operation, the magnetization of the ferromagnetic films can be switched by an external magnetic field.
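
The readout is resistive: the junction is low-resistance when the two magnetizations are parallel and high-resistance when they are antiparallel, a difference quantified by the tunnel magnetoresistance (TMR) ratio. Here is a minimal first-order sketch with illustrative numbers:

```python
import math

# First-order MTJ readout model: junction resistance interpolates
# between the parallel (P) and antiparallel (AP) values with the
# angle between the two layers' magnetizations. A common cosine
# approximation is used; the numbers are illustrative only.

R_P, R_AP = 1e3, 2.5e3   # parallel / antiparallel resistance (ohms)

def mtj_resistance(theta):
    """Resistance at relative magnetization angle theta (radians)."""
    return R_P + (R_AP - R_P) * (1.0 - math.cos(theta)) / 2.0

tmr = (R_AP - R_P) / R_P
print(f"TMR ratio: {tmr:.0%}")                              # 150%
print(f"parallel:     {mtj_resistance(0.0):.0f} ohms")      # low state
print(f"antiparallel: {mtj_resistance(math.pi):.0f} ohms")  # high state
```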

An array of MTJs resembles a magnetic domain wall, which can be manipulated in magnetic wires by applying a current in an adjacent metal, according to the Massachusetts Institute of Technology (MIT).

In effect, the magnetic nanowires act as artificial neurons. In the lab, University of Texas researchers demonstrated lateral inhibition behavior in an array of 1,000 MTJ neurons.
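
Lateral inhibition means that neighboring neurons suppress one another, so the strongest response ends up dominating. The toy sketch below shows that behavior in software; it illustrates the concept only and says nothing about the MTJ physics involved.

```python
import numpy as np

# Toy lateral inhibition: each neuron's activity is reduced in
# proportion to the activity of its neighbors, so strong responses
# suppress weak ones (approaching winner-take-all). This sketches
# the behavior only, not the underlying MTJ device physics.

def lateral_inhibition(inputs, inhibit=0.3, steps=50):
    x = np.asarray(inputs, dtype=float)
    a = x.copy()
    for _ in range(steps):
        suppression = inhibit * (a.sum() - a)  # inhibition from all others
        a = np.maximum(0.0, x - suppression)   # activity cannot go negative
    return a

print(lateral_inhibition([0.9, 0.6, 0.5, 0.3]).round(3))
# The weakest response is driven to zero while the strongest dominates.
```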

“Right now, the methods for training your neural networks are very energy-intensive,” said Jean Anne Incorvia, an assistant professor in the Cockrell School’s Department of Electrical and Computer Engineering at the University of Texas at Austin. “What our work can do is help reduce the training effort and energy costs.”

Hyper-dimensional nets
The U.S. Defense Advanced Research Projects Agency (DARPA) recently issued a solicitation seeking submissions of basic or applied research in the emerging field of Hyper-Dimensional Data Enabled Neural Networks (HyDDENN).

Deep neural networks (DNNs) are becoming increasingly complex and rely heavily on multiply-accumulate (MAC) operations. A DNN can require tens of billions of MAC operations to carry out one inference, according to DARPA. “As a result, the accuracy of DNN is fundamentally limited by available MAC resources,” the agency said.
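
The arithmetic behind that claim is straightforward: a dense layer with n_in inputs and n_out outputs needs n_in × n_out MACs, and the counts multiply up layer by layer. A back-of-the-envelope sketch, with arbitrary example layer sizes:

```python
# Back-of-the-envelope MAC count for a fully connected network:
# a dense layer with n_in inputs and n_out outputs needs
# n_in * n_out multiply-accumulate operations per inference.
# The layer sizes below are arbitrary examples.

layers = [784, 1024, 1024, 512, 10]   # input -> hidden layers -> output

macs = sum(n_in * n_out for n_in, n_out in zip(layers, layers[1:]))
print(f"MACs per inference: {macs:,}")   # ~2.4 million for this toy net
```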

“The HyDDENN program seeks new data enabled neural network (NN) architectures to break the reliance on large MAC-based DNNs,” according to the agency. “HyDDENN will explore and develop innovative data representations with shallow NN architectures based on efficient, non-MAC, digital compute primitives to enable highly accurate and energy efficient AI for DoD edge systems.”
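
Hyperdimensional computing is one example of such non-MAC primitives: information is encoded in very wide binary vectors that are combined with XOR and bitwise majority operations and compared by Hamming distance, with no multipliers anywhere. The sketch below is a generic illustration of the idea, not DARPA's or any performer's design.

```python
import numpy as np

# Minimal hyperdimensional (HD) computing sketch: data lives in very
# wide random binary vectors, combined with bitwise primitives (XOR,
# majority vote) and compared by Hamming distance. No
# multiply-accumulate operations are used anywhere.

rng = np.random.default_rng(0)
D = 10_000   # hypervector width

def random_hv():
    return rng.integers(0, 2, D, dtype=np.uint8)

def bundle(vectors):
    """Bitwise majority vote across vectors (superposition)."""
    return (np.sum(vectors, axis=0) * 2 > len(vectors)).astype(np.uint8)

def noisy(hv, p):
    """Flip each bit with probability p (XOR with a random mask)."""
    return hv ^ (rng.random(D) < p).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# Build two class prototypes from noisy training examples, then
# classify a noisy query by nearest Hamming distance.
proto_a, proto_b = random_hv(), random_hv()
class_a = bundle([noisy(proto_a, 0.1) for _ in range(5)])
class_b = bundle([noisy(proto_b, 0.1) for _ in range(5)])

query = noisy(proto_a, 0.25)
label = "A" if hamming(query, class_a) < hamming(query, class_b) else "B"
print("classified as:", label)   # expected: A
```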


