Research Bits: Jan. 24

Transistor-free compute-in-memory; neuromorphic computing with superconductors; stretchy synaptic transistor.

Transistor-free compute-in-memory

Researchers from the University of Pennsylvania, Sandia National Laboratories, and Brookhaven National Laboratory propose a transistor-free compute-in-memory (CIM) architecture to overcome memory bottlenecks and reduce power consumption in AI workloads.

“Even when used in a compute-in-memory architecture, transistors compromise the access time of data,” said Deep Jariwala, assistant professor in the department of electrical and systems engineering at University of Pennsylvania. “They require a lot of wiring in the overall circuitry of a chip and thus use time, space, and energy in excess of what we would want for AI applications. The beauty of our transistor-free design is that it is simple, small, and quick and it requires very little energy.”

The device uses scandium-alloyed aluminum nitride (AlScN), a semiconductor that exhibits ferroelectric switching, whose physics is faster and more energy efficient than that of alternative nonvolatile memory elements.

“One of this material’s key attributes is that it can be deposited at temperatures low enough to be compatible with silicon foundries,” said Troy Olsson, associate professor in the department of electrical and systems engineering at University of Pennsylvania. “Most ferroelectric materials require much higher temperatures. AlScN’s special properties mean our demonstrated memory devices can go on top of the silicon layer in a vertical hetero-integrated stack. Think about the difference between a multistory parking lot with a hundred-car capacity and a hundred individual parking spaces spread out over a single lot. Which is more efficient in terms of space? The same is the case for information and devices in a highly miniaturized chip like ours. This efficiency is as important for applications that require resource constraints, such as mobile or wearable devices, as it is for applications that are extremely energy intensive, such as data centers.”

The team suggests that their CIM ferrodiode design may perform up to 100 times faster than a conventional computing architecture. It supports on-chip storage, parallel search, and matrix multiplication acceleration.
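The matrix-multiplication acceleration rests on a general compute-in-memory idea: each nonvolatile cell stores a weight as a conductance, and applying input voltages to the array produces output currents that, by Kirchhoff's current law, are already the matrix-vector product. A minimal numerical sketch of that principle (the values and array shape are illustrative, not from the paper):

```python
import numpy as np

# Illustrative compute-in-memory sketch: each memory cell stores a weight
# as a conductance G[i][j]; applying input voltages V[j] yields output-line
# currents I[i] = sum_j G[i][j] * V[j], i.e. the matrix-vector product is
# computed where the data is stored, with no transistor-mediated data movement.

G = np.array([[1.0, 0.5],
              [0.2, 0.8]])   # stored weights (conductances, arbitrary units)
V = np.array([0.3, 0.6])     # input voltages applied to the array

I = G @ V                    # currents summed on each output line
print(I)                     # [0.6  0.54]
```

The same array can be reprogrammed (new conductances) or read out directly (storage), which is the sense in which one structure serves several functions.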

“Let’s say that you have an AI application that requires a large memory for storage as well as the capability to do pattern recognition and search. Think self-driving cars or autonomous robots, which need to respond quickly and accurately to dynamic, unpredictable environments,” said Jariwala. “Using conventional architectures, you would need a different area of the chip for each function and you would quickly burn through the availability and space. Our ferrodiode design allows you to do it all in one place by simply changing the way you apply voltages to program it.”

When the team ran a simulation of a machine learning task through their chip, it performed with a comparable degree of accuracy to AI-based software running on a conventional CPU.

“This research is highly significant because it proves that we can rely on memory technology to develop chips that integrate multiple AI data applications in a way that truly challenges conventional computing technologies,” said Xiwen Liu, a Ph.D. candidate at University of Pennsylvania.

“It is important to realize that all of the AI computing that is currently done is software-enabled on a silicon hardware architecture designed decades ago,” added Jariwala. “This is why artificial intelligence as a field has been dominated by computer and software engineers. Fundamentally redesigning hardware for AI is going to be the next big game changer in semiconductors and microelectronics. The direction we are going in now is that of hardware and software co-design.”

Neuromorphic computing with superconductors

Researchers at the National Institute of Standards and Technology (NIST) built a circuit based on superconducting single-photon detectors that behaves similarly to a biological synapse but uses single photons to transmit and receive signals.

“The computation in the NIST circuit occurs where a single-photon detector meets a superconducting circuit element called a Josephson junction. A Josephson junction is a sandwich of superconducting materials separated by a thin insulating film. If the current through the sandwich exceeds a certain threshold value, the Josephson junction begins to produce small voltage pulses called fluxons. Upon detecting a photon, the single-photon detector pushes the Josephson junction over this threshold and fluxons are accumulated as current in a superconducting loop. Researchers can tune the amount of current added to the loop per photon by applying a bias (an external current source powering the circuits) to one of the junctions. This is called the synaptic weight,” NIST explained.

Photograph of a NIST superconducting circuit that behaves like an artificial version of a synapse, a connection between nerve cells (neurons) in the brain. The labels show various components of the circuit and their functions. (Credit: S. Khan and B. Primavera/NIST)

The stored current serves as a form of short-term memory, providing a record of how many times the neuron has recently produced a spike. The duration of this memory is set by the time it takes the electric current to decay in the superconducting loops, which can vary from hundreds of nanoseconds to milliseconds, and possibly longer. As such, the hardware could be matched to problems occurring at different time scales, from high-speed industrial control systems to comparatively slow real-time human interaction.

Longer-term memory can be achieved by setting different weights through changing the bias to the Josephson junctions, which could be used to make the networks programmable.
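The mechanism described above can be caricatured as a leaky integrator: each detected photon deposits a weight-sized packet of current into the loop, and the stored current decays with some time constant. A toy model (illustrative only; `loop_current`, the time constant, and the units are assumptions, not NIST's circuit equations):

```python
import math

def loop_current(photon_times, weight, tau, t):
    """Toy model of the photonic synapse: each photon arriving at time tp
    adds `weight` units of current to the superconducting loop, and each
    contribution decays exponentially with time constant `tau`. The total
    at time t is the circuit's short-term memory of recent spikes."""
    return sum(weight * math.exp(-(t - tp) / tau)
               for tp in photon_times if tp <= t)

# Three photons arriving 1 us apart; tau = 5 us; synaptic weight = 1.0
times = [0.0, 1.0, 2.0]
print(round(loop_current(times, weight=1.0, tau=5.0, t=2.0), 3))  # 2.489
```

Changing `weight` (the bias applied to the junction) reprograms the synapse, while `tau` (the loop decay) sets the memory's time scale, matching the two knobs described in the article.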

“We could use what we’ve demonstrated here to solve computational problems, but the scale would be limited,” said Jeff Shainline of NIST. “Our next goal is to combine this advance in superconducting electronics with semiconductor light sources. That will allow us to achieve communication between many more elements and solve large, consequential problems.”  The team is also exploring techniques to implement synaptic weighting in larger-scale neuromorphic chips.

Stretchy synaptic transistor

Researchers from Pennsylvania State University, University of Houston, Northwestern University, and Flexterra developed a stretchy synaptic transistor that mimics neurons in the brain and could be used for AI processing in wearables or robots.

“Mirroring the human brain, robots and wearable devices using the synaptic transistor can use its artificial neurons to ‘learn’ and adapt their behaviors,” said Cunjiang Yu, associate professor of engineering science and mechanics, of biomedical engineering, and of materials science and engineering at Penn State. “For example, if we burn our hand on a stove, it hurts, and we know to avoid touching it next time. The same results will be possible for devices that use the synaptic transistor, as the artificial intelligence is able to ‘learn’ and adapt to its environment.”

According to Yu, the artificial neurons in the device were designed to perform like neurons in the ventral tegmental area, a tiny segment of the human brain located in the uppermost part of the brain stem. “Unlike all other areas of the brain, neurons in the ventral tegmental area are capable of releasing both excitatory and inhibitory neurotransmitters at the same time,” Yu said. “By designing the synaptic transistor to operate with both synaptic behaviors simultaneously, fewer transistors are needed compared to conventional integrated electronics technology, which simplifies the system architecture and allows the device to conserve energy.”

Excitatory neurotransmitters trigger the activity of other neurons and are associated with enhancing memories, while inhibitory neurotransmitters reduce the activity of other neurons and are associated with weakening memories.
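The device-count saving Yu describes can be sketched abstractly: a conventional synapse element contributes either a positive (excitatory) or negative (inhibitory) term, so covering both roles takes two devices, whereas one dual-mode element carries both weights at once. A hypothetical sketch (the function and weights are illustrative, not the device physics):

```python
import math

def dual_synapse(x, w_exc, w_inh):
    """Hypothetical dual-mode synapse: a single element applies an
    excitatory (+) and an inhibitory (-) weight to the same input,
    where a conventional design would need one device per role."""
    return w_exc * x - w_inh * x

# One element nets out both behaviors on the same input signal.
net = dual_synapse(1.0, w_exc=0.7, w_inh=0.3)
print(round(net, 2))  # 0.4
```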

The researchers used stretchable bilayer semiconductor materials to fabricate the device, allowing it to stretch and twist while in use. “The transistor is mechanically deformable and functionally reconfigurable, yet still retains its functions when stretched extensively,” Yu said. “It can attach to a robot or wearable device to serve as their outermost skin.”


