Memristor Crossbar Architecture for Encryption, Decryption and More


A new technical paper titled "Tunable stochastic memristors for energy-efficient encryption and computing" was published by researchers at Seoul National University, Sandia National Laboratories, Texas A&M University and Applied Materials. Abstract "Information security and computing, two critical technological challenges for post-digital computation, pose opposing requirement... » read more

In-Memory Computing: Techniques for Error Detection and Correction


A new technical paper titled "Error Detection and Correction Codes for Safe In-Memory Computations" was published by researchers at Robert Bosch, Forschungszentrum Julich, and Newcastle University. Abstract "In-Memory Computing (IMC) introduces a new paradigm of computation that offers high efficiency in terms of latency and power consumption for AI accelerators. However, the non-idealities... » read more

High-NA EUVL: Automated Defect Inspection Based on SEMI-SuperYOLO-NAS


A new technical paper titled "Towards Improved Semiconductor Defect Inspection for high-NA EUVL based on SEMI-SuperYOLO-NAS" was published by researchers at KU Leuven, imec, Ghent University, and SCREEN SPE. Abstract "Due to potential pitch reduction, the semiconductor industry is adopting High-NA EUVL technology. However, its low depth of focus presents challenges for High Volume Manufac... » read more

Optimizing Event-Based Neural Network Processing For A Neuromorphic Architecture


A new technical paper titled "Optimizing event-based neural networks on digital neuromorphic architecture: a comprehensive design space exploration" was published by imec, TU Delft and University of Twente. Abstract "Neuromorphic processors promise low-latency and energy-efficient processing by adopting novel brain-inspired design methodologies. Yet, current neuromorphic solutions still str... » read more

Verifying Hardware CWEs in RTL Designs Generated by GenAI


A new technical paper titled "All Artificial, Less Intelligence: GenAI through the Lens of Formal Verification" was published by researchers at Infineon Technologies. Abstract "Modern hardware designs have grown increasingly efficient and complex. However, they are often susceptible to Common Weakness Enumerations (CWEs). This paper is focused on the formal verification of CWEs in a dataset... » read more

Designing AI Hardware To Deal With The Increasingly Challenging Memory Wall (UC Berkeley)


A new technical paper titled "AI and Memory Wall" was published by researchers at UC Berkeley, ICSI, and LBNL. Abstract "The availability of unprecedented unsupervised training data, along with neural scaling laws, has resulted in an unprecedented surge in model size and compute requirements for serving/training LLMs. However, the main performance bottleneck is increasingly shifting to memo... » read more

HW Implementation of Memristive ANNs


A new technical paper titled "Hardware implementation of memristor-based artificial neural networks" was published by KAUST, Universitat Autònoma de Barcelona, IBM Research, USC, University of Michigan and others. Abstract: "Artificial Intelligence (AI) is currently experiencing a bloom driven by deep learning (DL) techniques, which rely on networks of connected simple computing units oper... » read more

Transformer Model Based Clustering Methodology For Standard Cell Layout Automation (Nvidia)


A new technical paper titled "Novel Transformer Model Based Clustering Method for Standard Cell Design Automation" was published by researchers at Nvidia. Abstract "Standard cells are essential components of modern digital circuit designs. With process technologies advancing beyond 5nm, more routability issues have arisen due to the decreasing number of routing tracks (RTs), increasing numb... » read more

LLMs For EDA, HW Design and Security


A new technical paper titled "Hardware Phi-1.5B: A Large Language Model Encodes Hardware Domain Specific Knowledge" was published by researchers at Kansas State University, University of Science and Technology of China, Michigan Technological University, Washington University in St. Louis and Silicon Assurance. Abstract "In the rapidly evolving semiconductor industry, where research, design... » read more

Efficient Streaming Language Models With Attention Sinks (MIT, Meta, CMU, NVIDIA)


A technical paper titled “Efficient Streaming Language Models with Attention Sinks” was published by researchers at Massachusetts Institute of Technology (MIT), Meta AI, Carnegie Mellon University (CMU), and NVIDIA. Abstract: "Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses tw... » read more
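As a simplified sketch of the cache policy the title refers to (ignoring position re-indexing and the actual key/value tensors), the Python below keeps a few initial "sink" tokens plus a sliding window of recent tokens and evicts everything in between.

```python
# Minimal sketch of an attention-sink KV cache: the first few tokens are pinned,
# the rest live in a sliding window, and the middle of the stream is evicted.
from collections import deque

class SinkCache:
    def __init__(self, n_sink=4, window=8):
        self.n_sink = n_sink
        self.sinks = []                      # first tokens, never evicted
        self.recent = deque(maxlen=window)   # rolling window of recent tokens

    def append(self, kv):
        if len(self.sinks) < self.n_sink:
            self.sinks.append(kv)
        else:
            self.recent.append(kv)           # deque drops the oldest automatically

    def context(self):
        return self.sinks + list(self.recent)

cache = SinkCache(n_sink=4, window=8)
for t in range(100):                         # token ids stand in for (key, value) pairs
    cache.append(t)
print(cache.context())                       # -> [0, 1, 2, 3, 92, ..., 99]
```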
