SW/HW Codesign For CXL Memory Disaggregation In Billion-Scale Nearest Neighbor Search (KAIST)


A technical paper titled “Bridging Software-Hardware for CXL Memory Disaggregation in Billion-Scale Nearest Neighbor Search” was published by researchers at the Korea Advanced Institute of Science and Technology (KAIST) and Panmnesia. Abstract: "We propose CXL-ANNS, a software-hardware collaborative approach to enable scalable approximate nearest neighbor search (ANNS) services. To this e... » read more
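The excerpt cuts off before the method details, so as background only: the short sketch below (Python/NumPy, with illustrative toy data rather than the paper's setup) shows the exact k-nearest-neighbor search that ANNS indexes approximate. At billion scale this full scan is exactly what a graph-based index such as CXL-ANNS is built to avoid.

import numpy as np

# Illustrative toy dataset: 10,000 base vectors of dimension 128 plus one query.
rng = np.random.default_rng(0)
base = rng.standard_normal((10_000, 128), dtype=np.float32)
query = rng.standard_normal(128, dtype=np.float32)

def knn_exact(query, base, k=10):
    # Exact k-NN by brute force: distance to every base vector, then sort.
    # ANNS indexes (graph-, tree-, or cluster-based) trade a little recall
    # to avoid this full scan, which is impractical at billion scale.
    dists = np.linalg.norm(base - query, axis=1)
    return np.argsort(dists)[:k]

print(knn_exact(query, base))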

Modeling And Analyzing Open-Source SoCs For Low-Power Cyber-Physical Systems


A technical paper titled “TOP: Towards Open & Predictable Heterogeneous SoCs” was published by researchers at University of Bologna, ETH Zurich, and University of California San Diego. Abstract: "Ensuring predictability in modern real-time Systems-on-Chip (SoCs) is an increasingly critical concern for many application domains such as automotive, robotics, and industrial automation. An... » read more

Evaluation Of Cache Replacement Policies Using Various Simulation Approaches


A technical paper titled “Improving the Representativeness of Simulation Intervals for the Cache Memory System” was published by researchers at Complutense University of Madrid, imec, and KU Leuven. Abstract: "Accurate simulation techniques are indispensable to efficiently propose new memory or architectural organizations. As implementing new hardware concepts in real systems is often not... » read more
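As a point of reference for what such a simulation measures (not the paper's methodology), a minimal LRU cache model over one interval of an address trace can be sketched as follows; the line size, capacity, and trace are illustrative assumptions.

from collections import OrderedDict

def simulate_lru(trace, num_lines, line_bytes=64):
    # Minimal LRU cache model: returns the miss rate over an address trace.
    # Interval-based simulation runs a model like this over selected slices
    # of a much longer trace and asks how representative those slices are.
    cache, misses = OrderedDict(), 0
    for addr in trace:
        line = addr // line_bytes
        if line in cache:
            cache.move_to_end(line)        # hit: refresh LRU position
        else:
            misses += 1
            cache[line] = None
            if len(cache) > num_lines:
                cache.popitem(last=False)  # evict the least recently used line
    return misses / len(trace)

# Illustrative trace: a 64 KiB strided sweep repeated twice.
trace = [i * 64 for i in range(1024)] * 2
print(simulate_lru(trace, num_lines=512))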

Enabling Beyond-Bound Decoding For DRAM By Unraveling Reed-Solomon Codes


A technical paper titled “Unraveling codes: fast, robust, beyond-bound error correction for DRAM” was published by researchers at Rambus. Abstract: "Generalized Reed-Solomon (RS) codes are a common choice for efficient, reliable error correction in memory and communications systems. These codes add 2t extra parity symbols to a block of memory, and can efficiently and reliably correct up ... » read more
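The excerpt states the classical guarantee; in symbols (standard Reed-Solomon theory, not specific to this paper), a code with 2t parity symbols corrects any combination of e symbol errors and s erasures satisfying 2e + s <= 2t, and "beyond-bound" decoding uses additional structure to correct some patterns outside that bound. A tiny check of the classical bound, with hypothetical parameters:

def rs_correctable(errors, erasures, parity_symbols):
    # Classical Reed-Solomon guarantee: with 2t parity symbols, a pattern of
    # symbol errors and erasures is correctable when 2*errors + erasures <= 2t.
    # Beyond-bound schemes exploit extra information (e.g. known faulty device
    # locations) to correct some patterns that violate this inequality.
    return 2 * errors + erasures <= parity_symbols

# Hypothetical chipkill-style layout with 2t = 4 parity symbols per codeword:
print(rs_correctable(errors=2, erasures=0, parity_symbols=4))  # True: within bound (t = 2)
print(rs_correctable(errors=3, erasures=0, parity_symbols=4))  # False: beyond the bound
print(rs_correctable(errors=1, erasures=2, parity_symbols=4))  # True: mixed errors and erasures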

Guidelines For A Single-Nanometer Magnetic Tunnel Junction (MTJ)


A technical paper titled “Single-nanometer CoFeB/MgO magnetic tunnel junctions with high-retention and high-speed capabilities” was published by researchers at Tohoku University, Université de Lorraine, and Inamori Research Institute for Science. Abstract: "Making magnetic tunnel junctions (MTJs) smaller while meeting performance requirements is critical for future electronics with spin-... » read more
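For context on the retention requirement (a textbook relation, not a result from the paper), the mean retention time of an MTJ is commonly estimated with the Neel-Brown expression tau = tau0 * exp(delta), where delta is the thermal stability factor E_b/(k_B*T). A rough calculation with assumed values:

import math

def mtj_retention_years(delta, tau0_s=1e-9):
    # Neel-Brown estimate of mean retention time for a single MTJ:
    # tau = tau0 * exp(delta), with delta = E_b / (k_B * T) and an
    # attempt time tau0 of ~1 ns (assumed value, not from the paper).
    return tau0_s * math.exp(delta) / (365 * 24 * 3600)

# Illustrative thermal stability factors:
for delta in (40, 60, 80):
    print(f"delta = {delta}: ~{mtj_retention_years(delta):.1e} years")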

A New Phase-Change Memory For Processing Large Amounts Of Data 


A technical paper titled “Novel nanocomposite-superlattices for low energy and high stability nanoscale phase-change memory” was published by researchers at Stanford University, TSMC, NIST, University of Maryland, Theiss Research, and Tianjin University. Abstract: "Data-centric applications are pushing the limits of energy-efficiency in today’s computing systems, including those based on... » read more

Heterogeneous Integration Of Graphene And Hafnium Oxide Memristors Using Pulsed-Laser Deposition


A technical paper titled “Heterogeneous Integration of Graphene and HfO2 Memristors” was published by researchers at Forschungszentrum Jülich, Jožef Stefan Institute, and Jülich-Aachen Research Alliance (JARA-FIT). Abstract: "The past decade has seen a growing trend toward utilizing (quasi) van der Waals growth for the heterogeneous integration of various materials for advanced electro... » read more

Overview Of Spin-Orbit Torque Vs. Spin-Transfer Torque For MRAM Devices 


A technical paper titled “Perspectives on field-free spin-orbit torque devices for memory and computing applications” was published by researchers at Northwestern University. Abstract: "The emergence of embedded magnetic random-access memory (MRAM) and its integration in mainstream semiconductor manufacturing technology have created an unprecedented opportunity for engineering computing s... » read more

CiM Integration For ML Inference Acceleration


A technical paper titled “WWW: What, When, Where to Compute-in-Memory” was published by researchers at Purdue University. Abstract: "Compute-in-memory (CiM) has emerged as a compelling solution to alleviate high data movement costs in von Neumann machines. CiM can perform massively parallel general matrix multiplication (GEMM) operations in memory, the dominant computation in Machine Lear... » read more
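The excerpt's central claim is that GEMM dominates ML inference and maps naturally onto in-memory arrays; a toy model of that mapping (with an illustrative 128x128 array size and tiling scheme, not the paper's architecture) follows:

import numpy as np

def cim_gemm(a, b, array_rows=128, array_cols=128):
    # Toy mapping of GEMM onto fixed-size compute-in-memory arrays: the weight
    # matrix b is split into array_rows x array_cols tiles (one per CiM array),
    # each tile produces a partial product, and partial sums are accumulated.
    m, k = a.shape
    _, n = b.shape
    out = np.zeros((m, n))
    for r0 in range(0, k, array_rows):           # tiles along the shared dimension
        for c0 in range(0, n, array_cols):       # tiles along the output columns
            tile = b[r0:r0 + array_rows, c0:c0 + array_cols]
            out[:, c0:c0 + array_cols] += a[:, r0:r0 + array_rows] @ tile
    return out

a = np.random.rand(4, 512)                       # illustrative activation matrix
b = np.random.rand(512, 256)                     # illustrative weight matrix
print(np.allclose(cim_gemm(a, b), a @ b))        # True: tiling preserves the result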

Training LLMs With Billions To Trillions Of Parameters On ORNL’s Frontier Supercomputer


A technical paper titled “Optimizing Distributed Training on Frontier for Large Language Models” was published by researchers at Oak Ridge National Laboratory (ORNL) and Université Paris-Saclay. Abstract: "Large language models (LLMs) have demonstrated remarkable success as foundational models, benefiting various downstream applications through fine-tuning. Recent studies on loss scaling ... » read more
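For a rough sense of why sharding model state across Frontier's devices matters (a common rule of thumb, not figures from the paper): mixed-precision Adam training keeps on the order of 16 bytes of model state per parameter, so the per-device footprint depends on how many devices those states are partitioned across.

def model_state_gb(params_billion, shards=1, bytes_per_param=16):
    # Back-of-the-envelope model-state memory for mixed-precision Adam training
    # (~16 bytes/parameter: fp16 weights and gradients plus fp32 optimizer
    # state), divided by the number of devices the states are sharded across.
    return params_billion * 1e9 * bytes_per_param / shards / 2**30

for params in (13, 175, 1000):                   # billions of parameters (illustrative sizes)
    print(f"{params}B params: {model_state_gb(params):,.0f} GB unsharded, "
          f"{model_state_gb(params, shards=1024):.1f} GB sharded over 1024 devices")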
