Manufacturing Bits: April 27

Next-gen neuromorphic computing; machine vision; earthquake detection.


Next-gen neuromorphic computing
The European Union (EU) has launched a new project to develop next-generation devices for neuromorphic computing systems.

The project, called MeM-Scales, plans to develop a novel class of algorithms, devices, and circuits that reproduce the multi-timescale processing of biological neural systems. The results will be used to build neuromorphic computing systems that can process real-world sensory signals and data in real time. These systems may reside on the edge of the network.

CEA-Leti is coordinating the program. Other members include Imec, IBM, University of Zurich, CSIS, CNR, SynSense and UOG.

Neuromorphic computing is one of many forms of AI. Today’s most common type of AI is called machine learning. Machine learning uses a neural network, which crunches vast amounts of data and identifies patterns. The network then matches certain patterns and learns which of those attributes are important. Many of today’s machine learning systems run on traditional chip architectures like GPUs, along with conventional memory such as SRAM.
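The "crunch data and identify patterns" loop can be sketched with the simplest possible learner, a single perceptron trained on the AND function. This is a generic illustration of pattern learning, not code from any of the projects covered here; the data and parameters are made up.

```python
import numpy as np

# Toy training data: the AND truth table. The "pattern" to learn is
# that the output is 1 only when both inputs are 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

w = np.zeros(2)   # learnable weights
b = 0.0           # learnable bias
lr = 0.1          # learning rate

for _ in range(20):                          # a few passes over the data
    for xi, target in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        err = target - pred                  # learn from each mistake
        w += lr * err * xi
        b += lr * err

preds = [1.0 if xi @ w + b > 0 else 0.0 for xi in X]
print(preds)  # → [0.0, 0.0, 0.0, 1.0]
```

Real machine-learning systems scale this same idea, error-driven weight updates, to millions of parameters, which is what makes GPUs and fast memory so important.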

The industry also has been working on a non-traditional approach called neuromorphic computing, which is still several years away from being realized. Neuromorphic computing may also use a neural network. The difference is that the industry is attempting to replicate the brain in silicon. The goal is to mimic the way that information is processed using precisely timed pulses.

There are many efforts here. Last year, for example, Leti presented a paper on the development of an integrated spiking neural network (SNN) chip using ReRAM, a next-generation memory type. Others are using analog-based nonvolatile memory approaches. SNNs incorporate the concept of time in a network.
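The timed-pulse idea behind SNNs can be sketched with a leaky integrate-and-fire (LIF) neuron, the basic building block of many spiking networks. The parameter values below are illustrative assumptions, not figures from the Leti paper or the MeM-Scales project.

```python
# A minimal leaky integrate-and-fire (LIF) neuron. The membrane
# potential integrates input current, leaks over time, and emits a
# spike when it crosses a threshold -- information is carried by
# spike timing rather than continuous activations.

def lif_run(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leaky integration
        if v >= v_thresh:
            spikes.append(t)          # a precisely timed pulse
            v = v_reset               # reset after firing
    return spikes

# A constant drive produces a regular spike train; a stronger drive
# would fire earlier and more often -- that timing is the signal.
spike_times = lif_run([0.15] * 50)
print(spike_times)  # → [10, 21, 32, 43]
```

Research efforts such as the ReRAM-based SNN chip mentioned above aim to implement dynamics like these directly in device physics rather than in software.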

The MeM-Scales project resembles the work in SNNs with a twist. It hopes to develop a new class of devices for the edge and other applications.

Today, PCs, smartphones and other systems generate a vast amount of data. A large percentage of the data is processed in servers in large data centers. These systems require fast and power-hungry chips.

For some time, the industry has been working on devices that can offload some of the processing and AI functions from the data center. These devices are used in systems that sit at the edge of the network. Edge devices include chips developed for the automotive market, as well as drones, security cameras, smartphones, smart doorbells and voice assistants, according to The Linley Group.

The MeM-Scales project will enable novel solutions for the edge. In the future, a large percentage of computing will be offloaded from central servers and delegated to small controllers and intelligent sensors on the edge of the network. “These IoT systems must be able to work reliably, without interruptions and with very low energy demands,” according to the group. “The project also will develop edge-computing processing systems for applications that do not require connectivity to the cloud.”

The MeM-Scales project hopes to develop novel memory and device technologies, such as ReRAM, thin-film transistor (TFT) technologies and others. These products will be fabricated to support on-chip learning over multiple timescales.

“The MeM-Scales project aims at lifting neuromorphic computing in analog spiking microprocessors to an entirely new level of performance,” said Elisa Vianello, manager of CEA-Leti’s AI program and the coordinator of the MeM-Scales project. “These cross-disciplinary efforts will lead to the fabrication of an innovative hardware/software platform as a basis for future products combining extreme power efficiency with robust cognitive computing capabilities. This new kind of computing technology will open new perspectives; for instance, for high-dimensional distributed environmental monitoring, implantable medical-diagnostic microchips, wearable electronics and human-computer interfaces.”

Machine vision
The University of California at Los Angeles (UCLA) and the California NanoSystems Institute (CNSI) have developed a new, single-pixel machine vision technology for machine learning applications.

The technology aims to overcome the inefficiencies of traditional machine vision systems.

Machine vision systems are used in a multitude of applications. In operation, these systems use optical components to capture images of various events. Once the images are captured, the data is processed. Using machine learning algorithms, the data is used to classify and identify objects.

This process is sometimes inefficient. In response, UCLA and CNSI devised a new machine vision solution. In the lab, researchers used the technology to classify handwritten digits.

“Unlike conventional optical components used in machine vision systems, we use diffractive layers that are composed of two-dimensional (2D) arrays of passive pixels, where the complex-valued transmission or reflection coefficients of individual pixels are independent learnable parameters that are optimized using a computer through deep learning and error backpropagation,” said Jingxi Li, a researcher at UCLA, in Science Advances, a scientific journal. Others contributed to the work.

“Using a plasmonic nanoantenna-based detector, we experimentally validated this single-pixel machine vision framework at terahertz spectrum to optically classify the images of handwritten digits by detecting the spectral power of the diffracted light at ten distinct wavelengths, each representing one class/digit,” Li said. “This single-pixel machine vision framework can also be extended to other spectral-domain measurement systems to enable new 3D imaging and sensing modalities integrated with diffractive network-based spectral encoding of information.”
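The core idea in the quote, a layer of pixels whose complex transmission coefficients modulate light, can be sketched as a toy forward pass. The array size, the random (untrained) coefficients, the far-field FFT propagation model, and the 10-bin readout are all simplifying assumptions for illustration; they are not the UCLA design.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32  # the diffractive layer is an N x N array of passive pixels

# Each pixel has a complex-valued transmission coefficient
# (amplitude and phase); in the real system these would be the
# learnable parameters optimized via backpropagation.
amplitude = rng.uniform(0.0, 1.0, (N, N))
phase = rng.uniform(0.0, 2.0 * np.pi, (N, N))
transmission = amplitude * np.exp(1j * phase)

# An incoming plane wave is modulated by the layer...
field_in = np.ones((N, N), dtype=complex)
field_out = field_in * transmission

# ...and propagation to the detector plane is approximated by a 2D
# Fourier transform (a crude far-field / Fraunhofer assumption).
detector_plane = np.fft.fftshift(np.fft.fft2(field_out))
power = np.abs(detector_plane) ** 2

# A single-pixel readout integrates power per bin -- here, 10 spatial
# bins standing in for the 10 wavelengths, one per class/digit.
class_scores = [b.sum() for b in np.array_split(power, 10, axis=1)]
predicted_class = int(np.argmax(class_scores))
print(predicted_class)
```

In the actual system, training adjusts the coefficients so that each digit steers its diffracted power into the correct spectral bin; here the coefficients are random, so the "prediction" is meaningless, but the data flow is the same.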

Earthquake detection
Los Alamos National Laboratory has developed SeismoGen, a new machine-learning model that promises to improve earthquake detection.

SeismoGen is capable of generating high-quality synthetic seismic waveforms. The technique could reduce the tedious, labor-intensive manual labeling required for earthquake detection.

“To verify the efficacy of our generative model, we applied it to seismic field data collected in Oklahoma,” said Youzuo Lin, a computational scientist in Los Alamos National Laboratory’s Geophysics group and principal investigator of the project. “Through a sequence of qualitative and quantitative tests and benchmarks, we saw that our model can generate high-quality synthetic waveforms and improve machine learning-based earthquake detection algorithms.”
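The augmentation workflow, using generated waveforms to enlarge a small labeled set, can be sketched as below. SeismoGen itself is a generative model trained on field data; the simple Ricker-wavelet "generator" here is only a stand-in to show the workflow, and the counts and noise level are made up.

```python
import numpy as np

def ricker(t, f=5.0):
    """Ricker (Mexican-hat) wavelet, a common synthetic seismic pulse."""
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synth_event(rng, n=400, dt=0.01, noise=0.05):
    """One synthetic labeled trace: a wavelet buried in noise."""
    t = (np.arange(n) - n // 2) * dt
    return ricker(t) + noise * rng.standard_normal(n)

rng = np.random.default_rng(1)

# Stand-in for a small, hand-labeled field dataset (label 1 = event).
real_waveforms = [synth_event(rng) for _ in range(10)]
labels = [1] * 10

# Augment: add generated positives so a downstream detector sees far
# more labeled examples than humans had to pick by hand.
synthetic = [synth_event(rng) for _ in range(50)]
augmented = real_waveforms + synthetic
augmented_labels = labels + [1] * 50
print(len(augmented))  # → 60 labeled traces instead of 10
```

The benchmark step in the quote then compares a detector trained on the original labels against one trained on the augmented set.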
