Manufacturing Bits: April 23

Sorting nuclei; in-memory FeFETs; neural MCM nets.

Sorting nuclei
CERN and GSI Darmstadt have begun testing the first of two giant magnets that will serve as part of one of the largest and most complex accelerator facilities in the world.

CERN, the European Organization for Nuclear Research, recently obtained two magnets from GSI. The two magnets weigh a total of 27 tons. About 60 more magnets will follow over the next five years.

These magnets are intended for GSI’s new particle separator (Super-FRS), a key component of the Facility for Antiproton and Ion Research (FAIR). Based in Darmstadt, Germany, FAIR involves an underground storage ring accelerator with a circumference of 1,100 meters. The ring also includes experimental stations. Construction began in 2017, with plans to complete the FAIR project by 2025.

The ring can accelerate the ions of all elements in the periodic table to speeds as high as 99% of the speed of light. The magnets, which keep the ions on their paths, are cooled to -269°C by means of liquid helium.
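To put 99% of the speed of light in perspective, the relativistic Lorentz factor at that velocity can be computed directly. This is a minimal illustrative sketch in Python; the figure is a textbook calculation, not a number from GSI.

```python
import math

def lorentz_factor(beta):
    """Lorentz factor gamma = 1 / sqrt(1 - beta^2), where beta = v / c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# At 99% of the speed of light, relativistic effects scale by roughly a factor of 7.
print(round(lorentz_factor(0.99), 2))  # ~7.09
```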

FAIR is divided into four experiment pillars: NUSTAR, CBM, PANDA, APPA.

NUSTAR (Nuclear Structure, Astrophysics and Reactions) aims to study the nuclear reactions that occur inside stars. It will make use of the Super-FRS, which will sort and filter exotic nuclei according to their charges and masses.
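A fragment separator of this kind sorts ions by magnetic rigidity, the ratio of momentum to charge: at a fixed dipole field, nuclei with different mass-to-charge ratios bend onto different paths. The following sketch shows that relationship; the isotopes and velocity used are illustrative assumptions, not values from the article.

```python
import math

# Physical constants (SI units)
C = 299_792_458.0          # speed of light, m/s
AMU = 1.660_539e-27        # atomic mass unit, kg
E_CHARGE = 1.602_177e-19   # elementary charge, C

def magnetic_rigidity(mass_number, charge_state, beta):
    """Magnetic rigidity B*rho = p/q = gamma*m*v/q for an ion, in tesla-meters.

    Ions with different mass/charge ratios have different rigidity at the same
    velocity, so a fixed bending field separates them spatially.
    """
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    momentum = gamma * mass_number * AMU * beta * C   # kg*m/s
    return momentum / (charge_state * E_CHARGE)

# Two hypothetical fragments at the same velocity end up at different rigidities:
print(round(magnetic_rigidity(132, 50, 0.7), 2))
print(round(magnetic_rigidity(100, 50, 0.7), 2))
```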

CBM (Compressed Baryonic Matter) will explore collisions of atomic nuclei at high speeds. PANDA (antiProton ANnihilation at DArmstadt) aims to improve understanding of the mass of matter and antimatter. APPA (Atomic, Plasma Physics and Applications) will explore atoms and macroscopic effects in materials or tissues.

Symposia on VLSI
The 2019 Symposia on VLSI Technology & Circuits is slated for June 9-14, 2019, in Kyoto, Japan.

Several notable papers will be presented at the event. In one, Toshiba will discuss its latest technology for in-memory computing, which has created a buzz in the industry. The idea behind in-memory computing is to bring the memory closer to the processing tasks in order to speed up the system.

In the VLSI paper, Toshiba will disclose ferroelectric tunnel junction (FTJ) memristors with analog switching capabilities. Ferroelectric FETs (FeFETs) are also generating interest in the industry.

Toshiba’s technology, which is arranged in a selector-less crossbar, enables an analog in-memory reinforcement learning (RL) system capable of learning behavior policies via a hardware-friendly algorithm. “The authors will show that commonly undesirable stochastic conductance switching is actually, in moderation, a beneficial property which promotes policy finding via a process akin to random search,” according to an abstract of the paper from Toshiba. “They experimentally demonstrate path-finding based on reinforcement, and solve a standard control problem of balancing a pole on a cart via simulation, outperforming similar deterministic RL systems.”
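The core idea is that device noise can stand in for the random exploration a reinforcement learning system needs anyway. This is not Toshiba's hardware or algorithm; the sketch below is a minimal software analogy, assuming a linear policy on the classic cart-pole benchmark and Gaussian perturbations in place of stochastic conductance switching.

```python
import math, random

# Classic cart-pole dynamics (Barto, Sutton & Anderson) with Euler integration.
GRAVITY, CART_M, POLE_M, POLE_L, FORCE, DT = 9.8, 1.0, 0.1, 0.5, 10.0, 0.02

def step(state, action):
    x, x_dot, th, th_dot = state
    force = FORCE if action == 1 else -FORCE
    cos_t, sin_t = math.cos(th), math.sin(th)
    total_m = CART_M + POLE_M
    temp = (force + POLE_M * POLE_L * th_dot ** 2 * sin_t) / total_m
    th_acc = (GRAVITY * sin_t - cos_t * temp) / (
        POLE_L * (4.0 / 3.0 - POLE_M * cos_t ** 2 / total_m))
    x_acc = temp - POLE_M * POLE_L * th_acc * cos_t / total_m
    return (x + DT * x_dot, x_dot + DT * x_acc,
            th + DT * th_dot, th_dot + DT * th_acc)

def episode(weights, max_steps=500):
    """Run one episode; return how many steps the pole stays balanced."""
    state = tuple(random.uniform(-0.05, 0.05) for _ in range(4))
    for t in range(max_steps):
        # Linear policy: push right if the weighted state sum is positive.
        action = 1 if sum(w * s for w, s in zip(weights, state)) > 0 else 0
        state = step(state, action)
        if abs(state[0]) > 2.4 or abs(state[2]) > 0.21:  # cart or pole out of bounds
            return t
    return max_steps

# Random search: random perturbations of the policy weights (the software stand-in
# for stochastic conductance switching) are kept only when they improve the return.
best_w = [random.uniform(-1, 1) for _ in range(4)]
best_r = episode(best_w)
for _ in range(200):
    trial_w = [w + random.gauss(0, 0.5) for w in best_w]
    r = episode(trial_w)
    if r > best_r:
        best_w, best_r = trial_w, r
print("best balancing time (steps):", best_r)
```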

In another paper, Nvidia will present a scalable deep neural network (DNN) accelerator. The device consists of 36 chips connected in a mesh network on a multi-chip-module (MCM) using ground-referenced signaling.

The MCM enables flexible scaling for inference from mobile to data center domains. “The 16nm prototype achieves 1.29 TOPS/mm2, 0.11pJ/op, 4.01TOPS peak performance for a 1-chip system, and 127.8 peak TOPS and 2615 images/s ResNet50 inference for a 36-chip system,” according to an abstract from the paper.
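As a back-of-the-envelope check on how those figures relate, the quoted energy per operation and peak throughput bound the compute power per chip. The sketch below assumes the two figures apply simultaneously, which the abstract does not state, so treat it as a rough upper-bound estimate.

```python
# Rough, assumption-laden arithmetic from the abstract's headline numbers.
TOPS_PER_CHIP = 4.01   # peak tera-operations per second, 1-chip system
PJ_PER_OP = 0.11       # picojoules per operation
CHIPS = 36

ops_per_s = TOPS_PER_CHIP * 1e12
watts_per_chip = ops_per_s * PJ_PER_OP * 1e-12   # ~0.44 W at peak
print(f"~{watts_per_chip:.2f} W per chip at peak, "
      f"~{watts_per_chip * CHIPS:.1f} W compute bound for the 36-chip MCM")
```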

Separately, TSMC will reveal more details about its chiplet strategy, dubbed System on Integrated Chips (SoIC).

With the chiplet approach, chipmakers have a menu of modular dies, or chiplets, in a library. They then assemble the chiplets in a package and connect them using a die-to-die interconnect scheme. Government agencies, industry groups and individual companies are beginning to rally around various chiplet models, setting the stage for complex chips that are quicker and cheaper to build using standardized interfaces and components.
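To make the "menu of chiplets" idea concrete, here is a purely illustrative data-model sketch of a chiplet library and a package assembly; none of the names or interfaces come from TSMC or any real library.

```python
from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str           # e.g. a CPU, I/O or memory die picked from the library
    process_node: str   # each chiplet can be built on the node that suits it
    interface: str      # die-to-die interconnect it exposes

@dataclass
class Package:
    chiplets: list
    interconnect: str   # die-to-die scheme shared across the package

    def validate(self):
        # All chiplets must speak the package's common die-to-die interface.
        return all(c.interface == self.interconnect for c in self.chiplets)

# Hypothetical assembly: mix process nodes, reuse known-good dies, one interface.
pkg = Package(
    chiplets=[Chiplet("cpu-core", "7nm", "d2d-x"),
              Chiplet("io-hub", "22nm", "d2d-x"),
              Chiplet("sram-cache", "7nm", "d2d-x")],
    interconnect="d2d-x")
print(pkg.validate())  # True
```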

TSMC, meanwhile, will present a dual-chiplet high-performance computing (HPC) processor implemented at 7nm with CoWoS technology. “Each chiplet has 4 ARM Cortex-A72 cores operating at 4GHz at turbo voltage condition, and the on-die inter-core mesh interconnect operates above 4GHz,” according to an abstract from the paper. “The inter-chiplet connection interface, called Low-Voltage-InPackage-INterCONnect (LIPINCON), provides 0.56pJ/bit power efficiency, 1.6Tb/s/mm2 bandwidth density, and 320GB/s bandwidth.”
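A quick, hedged calculation relates two of those quoted figures: the energy per bit times the sustained bit rate gives the interface power. It assumes the 0.56pJ/bit figure applies at the full 320GB/s, which the abstract does not explicitly state.

```python
# Interface power implied by the quoted LIPINCON figures (assumption-laden sketch).
BANDWIDTH_GBPS = 320   # gigabytes per second
PJ_PER_BIT = 0.56

bits_per_s = BANDWIDTH_GBPS * 1e9 * 8
interface_watts = bits_per_s * PJ_PER_BIT * 1e-12
print(f"~{interface_watts:.2f} W for the die-to-die link at full bandwidth")  # ~1.43 W
```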

At the event, Intel and Renesas will present papers on next-generation embedded memory technologies. Recently, Intel introduced an embedded MRAM technology for its 22nm finFET process. At VLSI, Intel will present a paper on embedded RRAM in its 22nm finFET technology, dubbed 22FFL. Intel has demonstrated 10^4-cycle endurance with 10-year retention at 85°C on 7.2Mbit arrays.

In a separate paper, Renesas will put a new twist on its existing embedded memory technology, which is based on NOR flash. Renesas will present a 1.5MB 2T-MONOS eFlash macro fabricated in a 65nm Silicon-on-Thin-BOX (SOTB) technology. The macro achieves 0.22pJ/bit read energy at a 64MHz read access rate, low enough for the memory to be powered by energy harvesting.
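To see why 0.22pJ/bit matters for energy harvesting, a rough read-power estimate follows. The 128-bit read width is a hypothetical assumption for illustration only; the abstract does not state the macro's word width.

```python
# Hypothetical read-power estimate for the eFlash macro.
PJ_PER_BIT = 0.22
READ_MHZ = 64
READ_WIDTH_BITS = 128   # assumed word width, not from the paper

bits_per_s = READ_MHZ * 1e6 * READ_WIDTH_BITS
read_milliwatts = bits_per_s * PJ_PER_BIT * 1e-12 * 1e3
print(f"~{read_milliwatts:.2f} mW during continuous reads")  # ~1.8 mW, within typical harvester budgets
```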


