A Precision-Optimized Fixed-Point Near-Memory Digital Processing Unit for Analog IMC (IBM and ETH Zurich)


A technical paper titled “A Precision-Optimized Fixed-Point Near-Memory Digital Processing Unit for Analog In-Memory Computing” was published by researchers at IBM Research Europe and IIS-ETH Zurich. Abstract: "Analog In-Memory Computing (AIMC) is an emerging technology for fast and energy-efficient Deep Learning (DL) inference. However, a certain amount of digital post-processing is requ... » read more
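The excerpt stops before the paper's DPU design, but the general role of fixed-point near-memory digital processing after an analog MVM is easy to sketch. The NumPy snippet below is a minimal illustration under assumed parameters (the `scale`/`offset` requantization values and bit widths are illustrative, not the paper's architecture): an analog tile returns integer partial sums, and a small digital stage does affine requantization, ReLU, and saturation in fixed point.

```python
import numpy as np

def analog_mvm(weights, activations):
    """Stand-in for an analog in-memory matrix-vector multiply.
    Real AIMC tiles return noisy, ADC-quantized partial sums."""
    return weights @ activations

def fixed_point_postprocess(raw, scale, offset, frac_bits=8, out_bits=8):
    """Affine requantization + ReLU in fixed point: the kind of
    near-memory digital step an AIMC pipeline needs after the ADC."""
    acc = raw * scale + offset            # wide integer accumulate (scale/offset in Qx.frac_bits)
    acc = acc >> frac_bits                # drop fractional bits
    acc = np.maximum(acc, 0)              # ReLU
    hi = (1 << (out_bits - 1)) - 1        # saturate to a signed out_bits result
    return np.clip(acc, -hi - 1, hi).astype(np.int32)

# toy example: 4x8 integer weight tile, 8-element input vector
rng = np.random.default_rng(0)
W = rng.integers(-8, 8, size=(4, 8))
x = rng.integers(-8, 8, size=8)
print(fixed_point_postprocess(analog_mvm(W, x), scale=3, offset=4))
```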

Simulation Of A Kicked Ising Quantum System On The Heavy Hexagon Lattice


A technical paper titled “Efficient Tensor Network Simulation of IBM’s Eagle Kicked Ising Experiment” was published by researchers at the Flatiron Institute and New York University. Abstract: "We report an accurate and efficient classical simulation of a kicked Ising quantum system on the heavy hexagon lattice. A simulation of this system was recently performed on a 127-qubit quantum pr... » read more
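The paper's contribution is an efficient tensor-network simulation on the heavy-hexagon lattice; that method is not reproduced here. As a point of reference, the sketch below simulates the same kind of kicked Ising dynamics (an X-rotation "kick" on every qubit followed by ZZ rotations on lattice edges) with a brute-force dense statevector on a tiny chain, which is only feasible for a handful of qubits. The angles, step count, and chain geometry are illustrative assumptions.

```python
import numpy as np

def z_vals(n, q):
    """Eigenvalues (+1/-1) of Z on qubit q for each basis state (little-endian bits)."""
    return 1 - 2 * ((np.arange(2 ** n) >> q) & 1)

def kicked_ising_step(psi, n, theta_h, theta_j, edges):
    """One Trotter step of the kicked Ising model: RX on every qubit,
    then an RZZ rotation on each lattice edge (diagonal in the Z basis)."""
    rx = np.array([[np.cos(theta_h / 2), -1j * np.sin(theta_h / 2)],
                   [-1j * np.sin(theta_h / 2), np.cos(theta_h / 2)]])
    t = psi.reshape([2] * n)
    for q in range(n):
        t = np.moveaxis(np.tensordot(rx, t, axes=([1], [q])), 0, q)
    psi = t.reshape(-1)
    for a, b in edges:
        psi = psi * np.exp(-1j * (theta_j / 2) * z_vals(n, a) * z_vals(n, b))
    return psi

n = 6
edges = [(i, i + 1) for i in range(n - 1)]   # a small chain, not the 127-qubit heavy-hex lattice
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0                                 # |00...0>
for _ in range(5):
    psi = kicked_ising_step(psi, n, theta_h=np.pi / 4, theta_j=np.pi / 2, edges=edges)
print("<Z_0> after 5 kicks:", float(np.abs(psi) ** 2 @ z_vals(n, 0)))
```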

Generating And Evaluating HW Verification Assertions From Design Specifications Via Multi-LLMs


A technical paper titled “AssertLLM: Generating and Evaluating Hardware Verification Assertions from Design Specifications via Multi-LLMs” was published by researchers at Hong Kong University of Science and Technology. Abstract: "Assertion-based verification (ABV) is a critical method for ensuring design circuits comply with their architectural specifications, which are typically describe... » read more
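AssertLLM's own multi-LLM flow and prompts are not shown in the excerpt; the sketch below only illustrates the basic idea of turning a natural-language spec sentence into a SystemVerilog assertion via an LLM and lightly sanity-checking the result. `query_llm` is a hypothetical stand-in (here it returns a canned response so the sketch runs), and the signal names and checks are illustrative assumptions.

```python
import re

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to one of several LLMs;
    replace with a real API client in practice."""
    return ("assert property (@(posedge clk) disable iff (!rst_n) "
            "(req && gnt) |-> ##[1:4] ack);")

def spec_to_assertion(spec_sentence: str, signals: list[str]) -> str:
    """Build a prompt from a spec sentence plus an extracted signal list,
    then check that the LLM returned something SVA-shaped."""
    prompt = (
        "Translate this hardware specification sentence into a single "
        "SystemVerilog Assertion (SVA). Use only these signals: "
        f"{', '.join(signals)}.\nSpec: {spec_sentence}\nSVA:"
    )
    sva = query_llm(prompt).strip()
    if not re.match(r"^assert\s+property\s*\(", sva):
        raise ValueError(f"LLM output is not an SVA property: {sva!r}")
    unknown = [s for s in re.findall(r"[A-Za-z_]\w*", sva)
               if s not in signals
               and s not in {"assert", "property", "posedge", "disable", "iff"}]
    # unreferenced identifiers are a common LLM failure mode; flag them for review
    return sva if not unknown else sva + f"  // check identifiers: {unknown}"

print(spec_to_assertion(
    "Every granted request must be acknowledged within four cycles.",
    ["clk", "rst_n", "req", "gnt", "ack"]))
```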

LLM Inference on GPUs (Intel)


A technical paper titled “Efficient LLM inference solution on Intel GPU” was published by researchers at Intel Corporation. Abstract: "Transformer based Large Language Models (LLMs) have been widely used in many fields, and the efficiency of LLM inference has become a hot topic in real applications. However, LLMs are usually complicatedly designed in model structure with massive operations and... » read more
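The excerpt ends before Intel's specific layer-simplification and fusion techniques, so the sketch below only illustrates the generic KV-cache mechanism that efficient decoder inference builds on: keys and values for past tokens are computed once and reused, so each new token costs one attention row rather than a recomputation of the whole prefix. The toy "lm head" and dimensions are illustrative assumptions.

```python
import numpy as np

def attend(q, K, V):
    """Single-head scaled dot-product attention for one query vector."""
    scores = K @ q / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

def decode(tokens, embed, wq, wk, wv, steps):
    """Toy decode loop with a KV cache: keys/values for past tokens are
    stored once and reused at every subsequent step."""
    d = embed.shape[1]
    K, V = np.empty((0, d)), np.empty((0, d))
    for t in tokens:                          # prefill: cache K/V for the prompt
        x = embed[t]
        K, V = np.vstack([K, x @ wk]), np.vstack([V, x @ wv])
    out = list(tokens)
    for _ in range(steps):
        q = embed[out[-1]] @ wq
        ctx = attend(q, K, V)                 # O(seq_len), not O(seq_len^2)
        nxt = int(np.argmax(embed @ ctx))     # toy "lm head": nearest embedding
        out.append(nxt)
        x = embed[nxt]
        K, V = np.vstack([K, x @ wk]), np.vstack([V, x @ wv])
    return out

rng = np.random.default_rng(0)
vocab, d = 16, 8
embed = rng.normal(size=(vocab, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
print(decode([1, 2, 3], embed, wq, wk, wv, steps=5))
```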

CiM Integration For ML Inference Acceleration


A technical paper titled “WWW: What, When, Where to Compute-in-Memory” was published by researchers at Purdue University. Abstract: "Compute-in-memory (CiM) has emerged as a compelling solution to alleviate high data movement costs in von Neumann machines. CiM can perform massively parallel general matrix multiplication (GEMM) operations in memory, the dominant computation in Machine Lear... » read more
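The "what/where" trade-off the paper explores can be pictured with a tiled GEMM in which each weight tile could stay resident in a CiM array that produces partial sums, while accumulation across tiles is left to digital logic. The sketch below is only that picture under assumed tile sizes; `cim_tile_mvm` is a functional stand-in, not a crossbar model from the paper.

```python
import numpy as np

def cim_tile_mvm(w_tile, x_tile):
    """Stand-in for one compute-in-memory crossbar: weights stay resident,
    an input vector is applied, and a partial sum comes back."""
    return w_tile @ x_tile

def gemm_on_cim(W, X, rows=64, cols=64):
    """Tile a GEMM so the K dimension is split across CiM arrays and the
    partial sums are accumulated digitally near memory."""
    M, K = W.shape
    _, N = X.shape
    out = np.zeros((M, N))
    for m in range(0, M, rows):
        for k in range(0, K, cols):
            w_tile = W[m:m + rows, k:k + cols]        # mapped onto one array
            for n in range(N):
                out[m:m + rows, n] += cim_tile_mvm(w_tile, X[k:k + cols, n])
    return out

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 256))
X = rng.normal(size=(256, 32))
assert np.allclose(gemm_on_cim(W, X), W @ X)
print("tiled CiM-style GEMM matches dense GEMM")
```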

Training Large Language Models With Billions To Trillions Of Parameters On ORNL's Frontier Supercomputer


A technical paper titled “Optimizing Distributed Training on Frontier for Large Language Models” was published by researchers at Oak Ridge National Laboratory (ORNL) and Université Paris-Saclay. Abstract: "Large language models (LLMs) have demonstrated remarkable success as foundational models, benefiting various downstream applications through fine-tuning. Recent studies on loss scaling ... » read more
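The excerpt stops before the paper's parallelization strategy, but the core sizing arithmetic behind distributed LLM training is easy to sketch: estimate per-GPU memory for weights, gradients, and optimizer state under tensor and pipeline parallelism. The 16-bytes-per-parameter figure assumes mixed-precision Adam and is a common rule of thumb, and the 64 GB-per-GCD figure is the MI250X HBM capacity per compute die; neither number is taken from the paper.

```python
def per_gpu_memory_gb(params_b, tensor_par, pipeline_par, zero_data_par=1,
                      bytes_per_param=16):
    """Rough per-GPU training memory: fp16 weights (2 B) + fp16 grads (2 B) +
    fp32 master weights and Adam moments (12 B) ~= 16 bytes/parameter,
    sharded by tensor * pipeline parallelism (and optionally ZeRO over
    data-parallel ranks). Activations are ignored here."""
    shard = tensor_par * pipeline_par * zero_data_par
    return params_b * 1e9 * bytes_per_param / shard / 1e9

# e.g. a hypothetical 1T-parameter model on Frontier-like nodes:
for tp, pp in [(8, 16), (8, 32), (8, 64)]:
    gb = per_gpu_memory_gb(1000, tp, pp)
    print(f"TP={tp:2d} PP={pp:3d} -> ~{gb:7.1f} GB/GPU (vs 64 GB HBM per MI250X GCD)")
```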

Chiplet Heterogeneity And Advanced Scheduling With Pipelining


A technical paper titled “Inter-Layer Scheduling Space Exploration for Multi-model Inference on Heterogeneous Chiplets” was published by researchers at University of California Irvine. Abstract: "To address increasing compute demand from recent multi-model workloads with heavy models like large language models, we propose to deploy heterogeneous chiplet-based multi-chip module (MCM)-based... » read more
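The inter-layer scheduling space the paper explores can be illustrated with a toy search: assign consecutive layers of a model to heterogeneous chiplets and compare a sequential schedule against a pipelined one whose steady-state rate is set by the slowest stage. The chiplet throughputs and layer costs below are made-up numbers for illustration, not figures from the paper.

```python
from itertools import product

# hypothetical chiplets: (name, relative throughput in work-units/ms)
chiplets = [("big", 4.0), ("medium", 2.0), ("small", 1.0)]
layer_costs = [8.0, 4.0, 2.0]   # work per layer of a 3-stage model (arbitrary units)

best = None
for assign in product(range(len(chiplets)), repeat=len(layer_costs)):
    if len(set(assign)) < len(layer_costs):
        continue                              # one stage per chiplet, no sharing
    lats = [layer_costs[i] / chiplets[c][1] for i, c in enumerate(assign)]
    sequential = sum(lats)                    # no pipelining: one inference at a time
    pipelined = max(lats)                     # pipelined steady state: slowest stage dominates
    key = (pipelined, sequential)
    if best is None or key < best[0]:
        best = (key, assign, lats)

(per_inf, seq), assign, lats = best
names = [chiplets[c][0] for c in assign]
print(f"best mapping: layers -> {names}, stage latencies {lats}")
print(f"sequential latency {seq:.2f} ms, pipelined steady-state {per_inf:.2f} ms/inference")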

Radar-Based SLAM Algorithm (Ulm University)


A technical paper titled “Simultaneous Localization and Mapping (SLAM) for Synthetic Aperture Radar (SAR) Processing in the Field of Autonomous Driving” was published by researchers at Ulm University. Abstract: "Autonomous driving technology has made remarkable progress in recent years, revolutionizing transportation systems and paving the way for safer and more efficient journeys. One of... » read more

Efficient LLM Inference With Limited Memory (Apple)


A technical paper titled “LLM in a flash: Efficient Large Language Model Inference with Limited Memory” was published by researchers at Apple. Abstract: "Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their intensive computational and memory requirements present challenges, especially for device... » read more
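The excerpt ends before the paper's technique (keeping parameters on flash and pulling only what is needed into DRAM); the sketch below captures just that on-demand idea with a memory-mapped weight file and a small DRAM-side LRU cache of FFN rows. The dimensions, cache policy, and the pretend sparsity predictor are illustrative assumptions, and the paper's windowing and row-column bundling are not reproduced.

```python
import numpy as np, tempfile, os
from collections import OrderedDict

d_model, d_ff, n_layers = 64, 256, 8

# write toy FFN weights to "flash" (a file on disk), one matrix per layer
path = os.path.join(tempfile.mkdtemp(), "ffn_weights.bin")
np.random.default_rng(0).normal(size=(n_layers, d_ff, d_model)).astype(np.float32).tofile(path)

# memory-map the file: nothing is resident in DRAM until a slice is touched
flash = np.memmap(path, dtype=np.float32, mode="r", shape=(n_layers, d_ff, d_model))

class RowCache:
    """Tiny DRAM-side LRU cache of FFN rows, loaded from flash on demand."""
    def __init__(self, capacity_rows=512):
        self.cap, self.store = capacity_rows, OrderedDict()
    def rows(self, layer, idx):
        out = np.empty((len(idx), d_model), dtype=np.float32)
        for j, r in enumerate(idx):
            key = (layer, int(r))
            if key not in self.store:                  # miss: read the row from flash
                self.store[key] = np.array(flash[layer, r])
                if len(self.store) > self.cap:
                    self.store.popitem(last=False)     # evict least recently used
            else:
                self.store.move_to_end(key)
            out[j] = self.store[key]
        return out

cache = RowCache()
x = np.random.default_rng(1).normal(size=d_model).astype(np.float32)
active = np.array([1, 7, 42, 200])   # pretend a sparsity predictor picked these FFN rows
h = cache.rows(layer=3, idx=active) @ x      # only 4 rows ever leave "flash"
print(h)
```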

SystemC-based Power Side-Channel Attacks Against AI Accelerators (Univ. of Lübeck)


A new technical paper titled "SystemC Model of Power Side-Channel Attacks Against AI Accelerators: Superstition or not?" was published by researchers at Germany's University of Lubeck. Abstract "As training artificial intelligence (AI) models is a lengthy and hence costly process, leakage of such a model's internal parameters is highly undesirable. In the case of AI accelerators, side-chann... » read more
