Manufacturing Bits: Dec. 16

Imec-Leti alliance; analog computing; spiking memories.

Imec-Leti alliance
At the recent IEEE International Electron Devices Meeting (IEDM), Imec and Leti announced plans to collaborate in select areas.

The two R&D organizations plan to collaborate in two areas: artificial intelligence (AI) and quantum computing. Imec and Leti have been separately developing AI technologies based on various next-generation memory architectures, and both have also been working on quantum computing.

By working together, the two organizations aim to accelerate development in these areas.

Both organizations will continue to work on other technologies independently.

Analog computing
Meanwhile, in a paper at IEDM, Imec presented a blueprint for an analog neural network device delivering 10,000 TOPS/W.

Imec described a matrix-vector multiplier technology based on a novel Analog in-Memory Computing (AiMC) architecture. Implemented in 22nm technology, AiMC is at least 10 times more energy-efficient than digital implementations, according to Imec.

The R&D organization also evaluated three possible memory cell options for the architecture: IGZO-based 2T1C DRAM, SOT-MRAM, and phase-change memory (PCM).

The device is targeted at deep learning and related applications. Deep learning, a subset of AI, makes use of neural networks that crunch data, identify patterns, and learn which attributes are important.

Today, many of these functions run in the cloud. The goal is to move some of them to the edge, but that requires a new class of energy-efficient devices.

That’s where AiMC fits in. AiMC is a 10,000 TOPS/W analog matrix-vector multiplier for inference applications. The architecture makes use of 5-bit inputs and a 5-bit ADC. TOPS, or tera operations per second, is a throughput metric; TOPS per watt describes how much of that throughput is delivered per watt of power. At 10,000 TOPS/W, each operation dissipates on the order of 0.1 femtojoules.

In the device, the weight matrix is stored in the compute cells, according to Imec. The number of weight levels depends on the memory element.
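To make that dataflow concrete, here is a minimal behavioral sketch in Python of how such an analog in-memory matrix-vector multiply works: weights snapped to a handful of conductance levels per cell, inputs digitized to 5 bits, and a 5-bit ADC bounding the precision of each column output. The function names, the uniform quantization scheme, and the level counts are illustrative assumptions, not details of Imec's design.

```python
import numpy as np

def quantize(x, n_levels, x_max):
    """Snap x onto n_levels equally spaced values spanning [-x_max, x_max]."""
    step = 2 * x_max / (n_levels - 1)
    codes = np.clip(np.round(x / step), -(n_levels // 2), (n_levels - 1) // 2)
    return codes * step

def aimc_matvec(weights, x, in_levels=2**5, adc_levels=2**5, w_levels=4):
    """Behavioral sketch of an analog in-memory matrix-vector multiply.

    weights : (n_out, n_in) matrix, programmed into the array as
              w_levels discrete conductance states per cell (the level
              count depends on the memory element, per the paper).
    x       : input vector, digitized to 5 bits before being driven
              onto the array.
    """
    w_q = quantize(weights, w_levels, np.abs(weights).max())
    x_q = quantize(x, in_levels, np.abs(x).max())
    y_analog = w_q @ x_q  # stands in for analog accumulation per column
    return quantize(y_analog, adc_levels, np.abs(y_analog).max())  # 5-bit ADC

# Toy usage: 10 outputs, 64 inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))
x = rng.normal(size=64)
print(aimc_matvec(W, x))
```

In a real AiMC array, the multiply-accumulate happens as summed currents or charge along each column; the matrix product above merely stands in for that analog accumulation.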

For the memory element or cell, Imec evaluated several memory types and eliminated two, according to the organization: ReRAM, because its variation is too high when operated at high resistance levels, and STT-MRAM, because its resistance is not high enough.

Other memory types met the requirements, however. “Three memory device concepts are discussed that meet the requirements imposed by the presented blueprint: IGZO-based 2T1C DRAM, SOT-MRAM and projection PCM with separated write path,” said Stefan Cosemans of Imec, who authored the paper with several colleagues.

For IGZO-based 2T1C DRAM, “the extremely low leakage maintains the state for a sufficiently long time, and the lower mobility reduces the cell current to the desired level. This cell can be stacked in a 3D monolithic way in the BEOL, leaving the FEOL available for peripheral circuits, enabling a very small footprint,” Cosemans said in the paper.

For SOT-MRAM, “the low on/off ratio of the cell is a disadvantage but not a showstopper,” Cosemans said. “PCM with projection layer solves part of the resistance drift that plagues normal PCM, at the cost of a strongly reduced on/off window.”

Spiking memories
At IEDM, CEA-Leti described a spiking neural network (SNN) that combines analog neurons and ReRAM-based synapses.

Based on a 130nm CMOS process, the device has demonstrated a classification accuracy of 84% with an energy dissipation of 3.6 pJ per spike.

The device is designed to perform MNIST classification. MNIST (Modified National Institute of Standards and Technology) is a database of handwritten digits used to train image processing and machine learning systems.

Spiking neural networks are composed of neurons that communicate by emitting spikes, discrete events that take place at points in time. Because computation happens only when spikes occur, these networks promise to reduce the required computational power.
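To illustrate the mechanism, here is a minimal software model of an integrate-and-fire neuron of the kind Leti built in analog hardware. The threshold value and reset-to-zero rule are illustrative assumptions; the actual circuit is analog and its parameters are not given here.

```python
class IntegrateAndFireNeuron:
    """Minimal digital sketch of an integrate-and-fire (IF) neuron.

    Weighted input is accumulated on a membrane potential; when the
    potential crosses a threshold, the neuron emits a spike and resets.
    """

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0  # membrane potential

    def step(self, weighted_input):
        """Integrate one timestep of input; return True if the neuron fires."""
        self.potential += weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after the spike
            return True
        return False

# Four timesteps of weighted input; the neuron fires on the third.
neuron = IntegrateAndFireNeuron(threshold=1.0)
print([neuron.step(x) for x in [0.3, 0.4, 0.5, 0.1]])
# -> [False, False, True, False]
```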

Based on an SNN topology, Leti’s device consists of a single-layer, fully connected architecture. In the device, single-level cell ReRAMs, each storing one bit, are used to create the synapses. ReRAMs work by changing the resistance across a dielectric solid-state material.

A so-called “Integrate and Fire (IF)” analog neuron design was also implemented. “The entire network is integrated on-chip,” said Alexandre Valentian of Leti. “No part is emulated or replaced by an external circuit, as in some other projects. It even was used in a live demo where users could draw digits with their finger on a tablet and it is classified after conversion into a spike train.”
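The demo pipeline can be sketched in a few lines: encode pixel intensities into spike trains, feed them through one fully connected layer of binary synapses, and pick the output neuron that fires the most. Everything below, including the rate-coding scheme, the threshold, and the random weights, is an illustrative assumption rather than Leti's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(image, t_steps=100):
    """Convert pixel intensities in [0, 1] to Poisson-like spike trains.

    Rate coding is an assumption here; the paper's encoding may differ.
    Returns a (t_steps, n_pixels) boolean array of input spikes.
    """
    p = image.reshape(-1)                      # flatten to n_pixels
    return rng.random((t_steps, p.size)) < p   # spike with prob = intensity

def classify(image, weights, threshold=20.0):
    """Single-layer, fully connected SNN with binary synapses.

    weights: (n_classes, n_pixels) array of 0/1 values, standing in
    for single-level-cell ReRAM synapses. Each output neuron
    integrates its weighted input spikes; the predicted class is the
    neuron that fires the most spikes over the run.
    """
    spikes_in = rate_encode(image)
    potential = np.zeros(weights.shape[0])
    fire_count = np.zeros(weights.shape[0], dtype=int)
    for t in range(spikes_in.shape[0]):
        potential += weights @ spikes_in[t]    # integrate
        fired = potential >= threshold
        fire_count += fired                    # count output spikes
        potential[fired] = 0.0                 # reset after firing
    return int(np.argmax(fire_count))

# Toy usage: random "image" and random binary weights for 10 classes.
image = rng.random(28 * 28)
weights = (rng.random((10, 28 * 28)) < 0.1).astype(float)
print("predicted class:", classify(image, weights))
```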


