Hardware-Based Methodology To Protect AI Accelerators


A technical paper titled “A Unified Hardware-based Threat Detector for AI Accelerators” was published by researchers at Nanyang Technological University and Tsinghua University. Abstract: "The proliferation of AI technology gives rise to a variety of security threats, which significantly compromise the confidentiality and integrity of AI models and applications. Existing software-based so... » read more

A Survey Of Recent Advances In Spiking Neural Networks From Algorithms To HW Acceleration


A technical paper titled “Recent Advances in Scalable Energy-Efficient and Trustworthy Spiking Neural networks: from Algorithms to Technology” was published by researchers at Intel Labs, University of California Santa Cruz, University of Wisconsin-Madison, and University of Southern California. Abstract: "Neuromorphic computing and, in particular, spiking neural networks (SNNs) have becom... » read more
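
The survey's scope is broad, but the building block underneath most of it is the spiking neuron itself. As a point of reference only (not taken from the paper), here is a minimal leaky integrate-and-fire (LIF) neuron in Python; all parameter values are illustrative.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit most SNN work
# builds on: the membrane potential leaks toward rest, integrates weighted
# input spikes, and emits a binary spike when it crosses a threshold.
def lif_simulate(input_spikes, weights, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """input_spikes: (T, n_inputs) binary array; weights: (n_inputs,)."""
    T = input_spikes.shape[0]
    v = v_reset
    out_spikes = np.zeros(T)
    for t in range(T):
        # Leak toward the reset level, then integrate the weighted input current.
        v += dt * (-(v - v_reset) / tau) + input_spikes[t] @ weights
        if v >= v_thresh:          # threshold crossing -> emit a spike
            out_spikes[t] = 1.0
            v = v_reset            # hard reset after firing
    return out_spikes

rng = np.random.default_rng(0)
spikes_in = (rng.random((100, 8)) < 0.1).astype(float)   # Poisson-like input spikes
w = rng.normal(0.3, 0.1, size=8)
print(int(lif_simulate(spikes_in, w).sum()), "output spikes")
```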

A Framework For Improving Current Defect Inspection Techniques For Advanced Nodes


A technical paper titled “Improved Defect Detection and Classification Method for Advanced IC Nodes by Using Slicing Aided Hyper Inference with Refinement Strategy” was published by researchers at Ghent University, imec, and SCREEN SPE. Abstract: "In semiconductor manufacturing, lithography has often been the manufacturing step defining the smallest possible pattern dimensions. In recent ... » read more
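
Slicing Aided Hyper Inference, named in the title, boils down to tiling a large image into overlapping crops, running the detector on each crop, shifting the resulting boxes back into full-image coordinates, and merging duplicates. The sketch below illustrates only that generic tiling-and-merge loop; the `detector` callable, tile size, and greedy-NMS merge step are assumptions, and the paper's refinement strategy is not reproduced here.

```python
import numpy as np

def _starts(length, tile, step):
    """Tile start offsets covering the whole axis, including the trailing edge."""
    s = list(range(0, max(length - tile, 0) + 1, step))
    if s[-1] != max(length - tile, 0):
        s.append(max(length - tile, 0))
    return s

def slice_and_detect(image, detector, tile=512, overlap=0.2, iou_merge=0.5):
    """Run `detector` (a user-supplied callable returning [x1, y1, x2, y2, score]
    rows for one crop) over overlapping tiles, shift boxes back to full-image
    coordinates, and merge duplicates with greedy NMS."""
    step = max(int(tile * (1 - overlap)), 1)
    H, W = image.shape[:2]
    boxes = []
    for y0 in _starts(H, tile, step):
        for x0 in _starts(W, tile, step):
            for x1, y1, x2, y2, score in detector(image[y0:y0 + tile, x0:x0 + tile]):
                boxes.append([x1 + x0, y1 + y0, x2 + x0, y2 + y0, score])
    return _nms(np.asarray(boxes, dtype=float), iou_merge) if boxes else np.empty((0, 5))

def _nms(b, thr):
    """Keep the highest-scoring box, drop others overlapping it above `thr` IoU, repeat."""
    area = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    order, keep = b[:, 4].argsort()[::-1], []
    while order.size:
        i, order = order[0], order[1:]
        keep.append(i)
        xx1 = np.maximum(b[i, 0], b[order, 0]); yy1 = np.maximum(b[i, 1], b[order, 1])
        xx2 = np.minimum(b[i, 2], b[order, 2]); yy2 = np.minimum(b[i, 3], b[order, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        order = order[inter / (area[i] + area[order] - inter) <= thr]
    return b[keep]
```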

More Efficient Side-Channel Analysis By Applying Two Deep Feature Loss Functions


A technical paper titled “Beyond the Last Layer: Deep Feature Loss Functions in Side-channel Analysis” was published by researchers at Nanyang Technological University, Radboud University, and Delft University of Technology. Abstract: "This paper provides a novel perspective on improving the efficiency of side-channel analysis by applying two deep feature loss functions: Soft Nearest Neig... » read more
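
One of the two losses named in the abstract, the Soft Nearest Neighbor loss (Frosst et al., 2019), has a standard batch-level definition: for each sample, the negative log-ratio of same-class feature similarity to all-feature similarity under a temperature-scaled Gaussian kernel. Below is a small NumPy implementation of that standard definition, not the paper's training setup; the batch size, feature dimension, and label encoding are placeholders.

```python
import numpy as np

def soft_nearest_neighbor_loss(features, labels, temperature=1.0, eps=1e-12):
    """Soft Nearest Neighbor loss on a batch of intermediate-layer features:
    low values mean same-class features sit close together relative to the
    rest of the batch (high class entanglement in feature space)."""
    # Pairwise squared Euclidean distances turned into Gaussian similarities.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2 / temperature)
    np.fill_diagonal(sim, 0.0)                       # exclude self-similarity
    same = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)
    ratio = (sim * same).sum(1) / (sim.sum(1) + eps)
    return -np.log(ratio + eps).mean()

rng = np.random.default_rng(1)
feats = rng.normal(size=(32, 64))                    # placeholder deep features
lbls = rng.integers(0, 9, size=32)                   # placeholder class labels
print(float(soft_nearest_neighbor_loss(feats, lbls, temperature=10.0)))
```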

Deep Learning Discovers Millions Of New Materials (Google)


A technical paper titled “Scaling deep learning for materials discovery” was published by researchers at Google DeepMind and Google Research. Abstract: "Novel functional materials enable fundamental breakthroughs across technological applications from clean energy to information processing. From microchips to batteries and photovoltaics, discovery of inorganic crystals has been bottleneck... » read more

GAA NSFETs: ML for Device and Circuit Modeling


A new technical paper titled "A Comprehensive Technique Based on Machine Learning for Device and Circuit Modeling of Gate-All-Around Nanosheet Transistors" was published by researchers at National Yang Ming Chiao Tung University. Abstract (excerpt): "Machine learning (ML) is poised to play an important part in advancing the predicting capability in semiconductor device compact modeling domai... » read more
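
The excerpt does not reveal which ML technique the authors use, so the following is only a generic illustration of the surrogate-modeling idea behind ML-based compact modeling: fit a small neural network to I-V samples and then query it like a compact model. The training data here come from a crude placeholder expression rather than TCAD or silicon measurements, and the device parameter names and ranges are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder training set: in practice these would be TCAD or measured I-V
# samples for gate-all-around nanosheet devices; a crude square-law expression
# stands in purely so the example runs end to end.
rng = np.random.default_rng(0)
n = 5000
Lg = rng.uniform(12e-9, 20e-9, n)       # gate length (m)     -- hypothetical range
Wns = rng.uniform(10e-9, 50e-9, n)      # nanosheet width (m) -- hypothetical range
Vgs = rng.uniform(0.0, 0.7, n)
Vds = rng.uniform(0.0, 0.7, n)
Vth = 0.25
Ids = np.where(Vgs > Vth,
               (Wns / Lg) * (Vgs - Vth) ** 2 * np.minimum(Vds, Vgs - Vth),
               1e-12)

X = np.column_stack([Lg, Wns, Vgs, Vds])
y = np.log10(Ids + 1e-15)               # fit log(I) so low-current decades matter too

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0))
surrogate.fit(X, y)

# Query the trained surrogate the way a circuit-level sweep would query a compact model.
print(10 ** surrogate.predict([[16e-9, 30e-9, 0.6, 0.5]]))
```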

Chiplets For Generative AI Workloads: Challenges in both HW and SW


A new technical paper titled "Challenges and Opportunities to Enable Large-Scale Computing via Heterogeneous Chiplets" was published by researchers at University of Pittsburgh, Lightelligence, and Meta. Abstract: "Fast-evolving artificial intelligence (AI) algorithms such as large language models have been driving the ever-increasing computing demands in today's data centers. Heterogeneous c... » read more

Continuous Energy Monte Carlo Particle Transport On AI HW Accelerators


A technical paper titled “Efficient Algorithms for Monte Carlo Particle Transport on AI Accelerator Hardware” was published by researchers at Argonne National Laboratory, University of Chicago, and Cerebras Systems. Abstract: "The recent trend toward deep learning has led to the development of a variety of highly innovative AI accelerator architectures. One such architecture, the Cerebras... » read more
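
For readers unfamiliar with the workload, a Monte Carlo transport code follows particle histories: sample a free-flight distance from an exponential distribution, decide between absorption and scattering at each collision, and tally the outcomes. The toy 1D, mono-energetic slab version below shows only that baseline history loop; it says nothing about the restructuring the paper develops for the Cerebras architecture, and all cross-section values are made up.

```python
import numpy as np

def transport_histories(n_particles, sigma_t=1.0, absorb_frac=0.3, slab_width=5.0, seed=0):
    """Toy mono-energetic 1D slab transport: sample a free flight from an
    exponential, then absorb or isotropically scatter, until the particle
    leaks out of the slab or is absorbed. Returns (leaked, absorbed) counts."""
    rng = np.random.default_rng(seed)
    leaked = absorbed = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                               # birth at left face, moving right
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # distance to next collision
            if x < 0.0 or x > slab_width:              # escaped the slab
                leaked += 1
                break
            if rng.random() < absorb_frac:             # collision outcome: absorption
                absorbed += 1
                break
            mu = rng.uniform(-1.0, 1.0)                # collision outcome: isotropic scatter
    return leaked, absorbed

print(transport_histories(100_000))
```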

LLM Inference On CPUs (Intel)


A technical paper titled “Efficient LLM Inference on CPUs” was published by researchers at Intel. Abstract: "Large language models (LLMs) have demonstrated remarkable performance and tremendous potential across a wide range of tasks. However, deploying these models has been challenging due to the astronomical amount of model parameters, which requires a demand for large memory capacity an... » read more
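
The visible part of the abstract identifies memory capacity as the deployment bottleneck, and weight-only quantization to low-bit integers is the usual way to attack it. The sketch below is a generic symmetric, group-wise 4-bit quantize/dequantize in NumPy, not the paper's flow or runtime; the group size and matrix shape are arbitrary.

```python
import numpy as np

def quantize_int4_groupwise(w, group_size=128):
    """Symmetric group-wise 4-bit weight quantization: each group of `group_size`
    weights shares one fp scale; values are rounded to integers in [-8, 7]."""
    flat = w.reshape(-1, group_size)
    scale = np.maximum(np.abs(flat).max(axis=1, keepdims=True), 1e-12) / 7.0
    q = np.clip(np.round(flat / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale, shape):
    """Recover approximate fp32 weights from the 4-bit codes and per-group scales."""
    return (q.astype(np.float32) * scale).reshape(shape)

w = np.random.default_rng(0).normal(size=(4096, 4096)).astype(np.float32)
q, s = quantize_int4_groupwise(w)
w_hat = dequantize(q, s, w.shape)

# 4-bit codes (two per byte once packed) plus one fp32 scale per group vs. fp32 weights.
packed_bytes = q.size // 2 + s.size * 4
print("compression ~", w.nbytes / packed_bytes, "x; max abs error:", np.abs(w - w_hat).max())
```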

Applications Of Large Language Models For Industrial Chip Design (NVIDIA)


A technical paper titled “ChipNeMo: Domain-Adapted LLMs for Chip Design” was published by researchers at NVIDIA. Abstract: "ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we instead adopt the following domain adaptation techniques: custom tokenizers, domain-ad... » read more
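
The first adaptation technique listed is a custom tokenizer, which keeps chip-design strings (RTL keywords, tool-log patterns) from being shredded into many generic sub-word pieces. As a simple illustration only, and under the assumption of training a BPE vocabulary from scratch with SentencePiece rather than following ChipNeMo's actual procedure, a domain tokenizer could be built like this; the corpus path and vocabulary size are hypothetical.

```python
import sentencepiece as spm

# Train a BPE vocabulary on a (hypothetical) dump of RTL, testbenches, tool
# scripts, and bug reports so domain strings such as "always_ff" become single
# tokens instead of many generic sub-word pieces.
spm.SentencePieceTrainer.train(
    input="chip_design_corpus.txt",      # hypothetical path to the domain text corpus
    model_prefix="chip_tokenizer",
    vocab_size=8000,
    model_type="bpe",
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="chip_tokenizer.model")
print(sp.encode("always_ff @(posedge clk) q <= d;", out_type=str))
```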
