Polynesia: A Novel Hardware/Software Cooperative Design for In-Memory HTAP Databases


A team of researchers from ETH Zurich, Google, and the University of Illinois Urbana-Champaign recently published a technical paper titled "Polynesia: Enabling High-Performance and Energy-Efficient Hybrid Transactional/Analytical Databases with Hardware/Software Co-Design". Abstract (partial): "We propose Polynesia, a hardware–software co-designed system for in-memory HTAP [hybrid transactional/anal... » read more
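
For readers unfamiliar with the workload class, the toy sketch below (not Polynesia itself; the table and column names are made up for illustration) shows what "hybrid transactional/analytical" means in practice: small, latency-sensitive transactional writes and scan-heavy analytical queries hitting the same in-memory data.

```python
# Minimal sketch of an HTAP-style workload (illustrative only, not the paper's
# system): transactional updates and analytical queries over the same
# in-memory data. Table and column names are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

# Transactional (OLTP) side: small, latency-sensitive writes.
with db:
    db.execute("INSERT INTO orders (region, amount) VALUES (?, ?)", ("EU", 42.0))
    db.execute("INSERT INTO orders (region, amount) VALUES (?, ?)", ("US", 17.5))

# Analytical (OLAP) side: a scan/aggregate over the same live data.
for region, total in db.execute("SELECT region, SUM(amount) FROM orders GROUP BY region"):
    print(region, total)
```

Serving both sides of this workload from one in-memory store without the two interfering is the tension the paper's hardware/software co-design addresses.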

Technical Paper Round-Up: July 5


New technical papers added to Semiconductor Engineering’s library this week. Semiconductor Engineering is in the process of building this library of research papers. Please send suggestions (via the comments section below) for what else you’d like us to incorporate. If you have research papers you are trying to promote, we will review them to see if they are a good fit for... » read more

“All-in-One” 8×8 Array of Low-Power & Bio-inspired Crypto Engines w/IoT Edge Sensors Based on 2D Memtransistors


New technical paper titled "All-in-one, bio-inspired, and low-power crypto engines for near-sensor security based on two-dimensional memtransistors" from researchers at Penn State University. Abstract: "In the emerging era of the internet of things (IoT), ubiquitous sensors continuously collect, consume, store, and communicate a huge volume of information which is becoming increasingly vuln... » read more

OTA On-Chip Computing That Conquers A Bottleneck In Wired NoC Architectures


New research paper titled "Wireless On-Chip Communications for Scalable In-memory Hyperdimensional Computing" from researchers at IBM Research Zurich, Switzerland, and Universitat Politecnica de Catalunya, Barcelona, Spain. Abstract: "Hyperdimensional computing (HDC) is an emerging computing paradigm that represents, manipulates, and communicates data using very long random vectors (aka hyp... » read more
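
As a quick illustration of the HDC idea the abstract refers to (not the paper's wireless NoC design; the dimensionality and operators below are common textbook choices, not taken from the paper), here is a minimal numpy sketch of binding and bundling random hypervectors and recovering a bound value by similarity.

```python
# Hedged sketch of hyperdimensional computing basics: data are encoded as very
# long random vectors and combined with simple element-wise operations.
import numpy as np

D = 10_000                      # hypervector dimensionality (assumed value)
rng = np.random.default_rng(0)

def random_hv():
    return rng.choice([-1, 1], size=D)          # random bipolar hypervector

def bind(a, b):
    return a * b                                # element-wise multiply (binding)

def bundle(hvs):
    return np.sign(np.sum(hvs, axis=0))         # majority vote (superposition)

def similarity(a, b):
    return float(a @ b) / D                     # normalized dot product

# Encode three key-value pairs and superpose them into one record hypervector.
pairs = [(random_hv(), random_hv()) for _ in range(3)]
record = bundle([bind(k, v) for k, v in pairs])

k0, v0 = pairs[0]
print(similarity(bind(record, k0), v0))          # clearly above 0: the bound value
print(similarity(bind(record, k0), random_hv())) # near 0: an unrelated vector
```

The appeal for in-memory hardware is that these operations are element-wise and massively parallel; the paper's contribution concerns how to communicate such long vectors across the chip.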

Simulation Framework to Evaluate the Feasibility of Large-scale DNNs based on CIM Architecture & Analog NVM


Technical paper titled "Accuracy and Resiliency of Analog Compute-in-Memory Inference Engines" from researchers at UCLA. Abstract "Recently, analog compute-in-memory (CIM) architectures based on emerging analog non-volatile memory (NVM) technologies have been explored for deep neural networks (DNNs) to improve scalability, speed, and energy efficiency. Such architectures, however, leverage ... » read more
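
A minimal sketch of the kind of evaluation such a framework automates, under assumptions of my own (a toy linear classifier and a simple multiplicative conductance-noise model, not UCLA's setup): train digitally, perturb the stored weights as an analog NVM array would, and measure how much accuracy survives.

```python
# Hedged sketch: quantify how analog-NVM non-idealities, modelled here as
# multiplicative noise on the weights, degrade inference accuracy.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class data and a simple linear classifier trained digitally.
X = rng.normal(size=(2000, 16))
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(int)

w = np.zeros(16)
for _ in range(200):                        # plain logistic-regression-style updates
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

def accuracy(weights):
    return float(np.mean((X @ weights > 0).astype(int) == y))

# Sweep the device noise level and report the accuracy that survives.
for sigma in [0.0, 0.1, 0.3, 0.5]:
    noisy_runs = [accuracy(w * (1 + rng.normal(0, sigma, size=w.shape)))
                  for _ in range(20)]       # Monte Carlo over device variations
    print(f"sigma={sigma:.1f}  mean accuracy={np.mean(noisy_runs):.3f}")
```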

Analog Edge Inference with ReRAM


Abstract "As the demands of big data applications and deep learning continue to rise, the industry is increasingly looking to artificial intelligence (AI) accelerators. Analog in-memory computing (AiMC) with emerging nonvolatile devices enable good hardware solutions, due to its high energy efficiency in accelerating the multiply-and-accumulation (MAC) operation. Herein, an Applied Materials... » read more

Nonvolatile Capacitive Crossbar Array for In-Memory Computing


Abstract "Conventional resistive crossbar array for in-memory computing suffers from high static current/power, serious IR drop, and sneak paths. In contrast, the “capacitive” crossbar array that harnesses transient current and charge transfer is gaining attention as it 1) only consumes dynamic power, 2) has no DC sneak paths and avoids severe IR drop (thus, selector-free), and 3) can be f... » read more

A crossbar array of magnetoresistive memory devices for in-memory computing


Samsung has demonstrated the world's first in-memory computing technology based on MRAM, described in a paper published in Nature. The work showcases Samsung's effort to merge memory and system semiconductors for next-generation artificial intelligence (AI) chips. Abstract "Implementations of artificial neural networks that borrow analogue techniques could potentially offer low-po... » read more

Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices


Abstract:  "Recent advances in deep learning have been driven by ever-increasing model sizes, with networks growing to millions or even billions of parameters. Such enormous models call for fast and energy-efficient hardware accelerators. We study the potential of Analog AI accelerators based on Non-Volatile Memory, in particular Phase Change Memory (PCM), for software-equivalent accurate i... » read more
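
To see why PCM non-idealities matter for inference accuracy, here is a hedged toy model (not IBM's methodology): weights mapped to conductances pick up programming noise and then drift, the latter often modelled empirically as G(t) = G(t0)·(t/t0)^(−ν). The noise level and drift exponent below are illustrative assumptions.

```python
# Hedged toy model of PCM programming noise and conductance drift acting on a
# transformer-style projection matrix; sigma, nu, and the matrix size are assumptions.
import numpy as np

rng = np.random.default_rng(0)

d = 64
W_q = rng.normal(0, 0.1, size=(d, d))          # an ideal attention projection matrix
x = rng.normal(size=d)                         # one token activation

def pcm_weights(W, t_seconds, sigma=0.03, nu=0.05, t0=1.0):
    W_prog = W * (1 + rng.normal(0, sigma, size=W.shape))   # programming noise
    return W_prog * (t_seconds / t0) ** (-nu)               # conductance drift

for t in [1.0, 3600.0, 86400.0]:               # 1 s, 1 hour, 1 day after programming
    err = np.linalg.norm(pcm_weights(W_q, t) @ x - W_q @ x) / np.linalg.norm(W_q @ x)
    print(f"t={t:>8.0f} s   relative output error = {err:.3f}")
```

In this toy the drift is a single global scale and could be calibrated out; on real arrays, device-to-device spread in the drift and noise is what makes compensation and hardware-aware training necessary.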

Enabling Training of Neural Networks on Noisy Hardware


Abstract:  "Deep neural networks (DNNs) are typically trained using the conventional stochastic gradient descent (SGD) algorithm. However, SGD performs poorly when applied to train networks on non-ideal analog hardware composed of resistive device arrays with non-symmetric conductance modulation characteristics. Recently we proposed a new algorithm, the Tiki-Taka algorithm, that overcomes t... » read more
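
The sketch below is a toy illustration of the underlying problem, not the Tiki-Taka algorithm itself: when a device applies "up" and "down" conductance updates with different effective gains (the gain values here are assumptions), plain SGD settles away from the true minimum.

```python
# Toy illustration of non-symmetric device updates biasing SGD (not Tiki-Taka).
import numpy as np

rng = np.random.default_rng(0)
target = 0.0                       # ideal minimizer of the toy loss 0.5*(w-target)^2
lr = 0.05

def train(up_gain=1.0, down_gain=1.0, steps=20000):
    w, tail = 1.0, []
    for i in range(steps):
        grad = (w - target) + rng.normal(0, 0.5)            # noisy gradient
        step = -lr * grad
        w += step * (up_gain if step > 0 else down_gain)    # device-dependent update
        if i >= steps // 2:
            tail.append(w)
    return float(np.mean(tail))    # average weight after burn-in

print("symmetric device:  w ->", round(train(1.0, 1.0), 3))   # near the true minimum
print("asymmetric device: w ->", round(train(1.0, 0.5), 3))   # weaker down-steps bias w upward
```

Averaging the weight after burn-in makes the systematic bias visible above the SGD noise; mitigating exactly this kind of bias is what the proposed training algorithm targets.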
