Hardware Acceleration Approach for KAN Via Algorithm-Hardware Co-Design


A new technical paper titled "Hardware Acceleration of Kolmogorov-Arnold Network (KAN) for Lightweight Edge Inference" was published by researchers at Georgia Tech, TSMC and National Tsing Hua University. Abstract "Recently, a novel model named Kolmogorov-Arnold Networks (KAN) has been proposed with the potential to achieve the functionality of traditional deep neural networks (DNNs) using ... » read more

More Efficient Side-Channel Analysis By Applying Two Deep Feature Loss Functions


A technical paper titled “Beyond the Last Layer: Deep Feature Loss Functions in Side-channel Analysis” was published by researchers at Nanyang Technological University, Radboud University, and Delft University of Technology. Abstract: "This paper provides a novel perspective on improving the efficiency of side-channel analysis by applying two deep feature loss functions: Soft Nearest Neig... » read more
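For context, the Soft Nearest Neighbor loss named in the abstract (Frosst et al., 2019) scores how entangled classes are in a feature space: for each sample it compares similarity to same-class neighbors against similarity to all neighbors. A hedged NumPy sketch on raw feature vectors follows; the paper applies such losses at intermediate ("deep") layers of the profiling model, and the temperature and variable names here are illustrative.

```python
# Sketch of the Soft Nearest Neighbor (SNN) loss on a batch of feature
# vectors. Applying it to intermediate-layer features, as the paper's
# title suggests, would mean feeding those features in as `features`.
import numpy as np

def soft_nearest_neighbor_loss(features, labels, temperature=1.0):
    # features: (batch, dim), labels: (batch,)
    d = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d / temperature)
    np.fill_diagonal(sim, 0.0)                  # exclude self-pairs
    same = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)
    num = (sim * same).sum(1)                   # same-class neighbors
    den = sim.sum(1)                            # all neighbors
    eps = 1e-12
    return -np.log((num + eps) / (den + eps)).mean()

feats = np.random.default_rng(0).normal(size=(8, 16))
labs = np.array([0, 1, 0, 1, 0, 1, 0, 1])
print(soft_nearest_neighbor_loss(feats, labs))
```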

A Search Framework That Optimizes Hybrid-Device IMC Architectures For DNNs, Using Chiplets


A technical paper titled “HyDe: A Hybrid PCM/FeFET/SRAM Device-search for Optimizing Area and Energy-efficiencies in Analog IMC Platforms” was published by researchers at Yale University. Abstract: "Today, there are a plethora of In-Memory Computing (IMC) devices (SRAMs, PCMs, and FeFETs) that emulate convolutions on crossbar arrays with high throughput. Each IMC device offers its own pr... » read more
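To make the setting concrete, a per-layer device search of this kind can be caricatured as picking one device per layer to minimize a weighted area/energy cost under an accuracy budget. The greedy sketch below is a loose illustration only; the cost numbers, the penalty model, and the greedy strategy are invented for this example and are not HyDe's actual formulation.

```python
# Hypothetical per-layer IMC device search: assign SRAM, PCM, or FeFET
# to each layer to minimize weighted area/energy cost while staying
# under a cumulative accuracy-penalty budget. All numbers are made up.
DEVICES = {
    # name: (relative area, relative energy, accuracy penalty)
    "SRAM":  (1.00, 1.00, 0.000),
    "PCM":   (0.35, 0.60, 0.004),
    "FeFET": (0.45, 0.40, 0.006),
}

def search(layer_macs, area_w=0.5, energy_w=0.5, acc_budget=0.02):
    choice, penalty = [], 0.0
    for macs in layer_macs:
        best = min(
            DEVICES.items(),
            key=lambda kv: macs * (area_w * kv[1][0] + energy_w * kv[1][1])
                           + (1e9 if penalty + kv[1][2] > acc_budget else 0.0),
        )
        choice.append(best[0])
        penalty += best[1][2]
    return choice, penalty

print(search([4e6, 8e6, 2e6, 1e6]))
# Picks the cheaper non-volatile devices until the accuracy budget is
# spent, then falls back to SRAM for the remaining layers.
```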

Leveraging Large Language Models (LLMs) To Perform SW-HW Co-Design


A technical paper titled “On the Viability of using LLMs for SW/HW Co-Design: An Example in Designing CiM DNN Accelerators” was published by researchers at the University of Notre Dame. Abstract: "Deep Neural Networks (DNNs) have demonstrated impressive performance across a wide range of tasks. However, deploying DNNs on edge devices poses significant challenges due to stringent power and com... » read more

Asynchronously Parallel Optimization Method For Sizing Analog Transistors Using Deep Neural Network Learning


A new technical paper titled "APOSTLE: Asynchronously Parallel Optimization for Sizing Analog Transistors Using DNN Learning" was published by researchers at UT Austin and Analog Devices. Abstract "Analog circuit sizing is a high-cost process in terms of the manual effort invested and the computation time spent. With rapidly developing technology and high market demand, bringing automated s... » read more

Review of Tools & Techniques for DL Edge Inference


A new technical paper titled "Efficient Acceleration of Deep Learning Inference on Resource-Constrained Edge Devices: A Review" was published in "Proceedings of the IEEE" by researchers at University of Missouri and Texas Tech University. Abstract: Successful integration of deep neural networks (DNNs) or deep learning (DL) has resulted in breakthroughs in many areas. However, deploying thes... » read more

Memory and Energy-Efficient Batch Normalization Hardware


A new technical paper titled "LightNorm: Area and Energy-Efficient Batch Normalization Hardware for On-Device DNN Training" was published by researchers at DGIST (Daegu Gyeongbuk Institute of Science and Technology). The work was supported by Samsung Research Funding Incubation Center. Abstract: "When training early-stage deep neural networks (DNNs), generating intermediate features via con... » read more

Multiexpert Adversarial Regularization For Robust And Data-Efficient Deep Supervised Learning


Deep neural networks (DNNs) can achieve high accuracy when there is abundant training data that has the same distribution as the test data. In practical applications, data deficiency is often a concern. For classification tasks, the lack of enough labeled images in the training set often results in overfitting. Another issue is the mismatch between the training and the test domains, which resul... » read more
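As background on the adversarial-regularization family the title refers to: one common recipe trains on worst-case perturbed copies of the inputs alongside the clean ones, which acts as a regularizer when labeled data is scarce. The sketch below shows a single-model FGSM-style version for logistic regression (where the input gradient is analytic); it is a simplification for illustration, not the multiexpert scheme described here.

```python
# FGSM-style adversarial regularization on logistic regression: each
# step trains on both the clean batch and a perturbed copy moved in
# the direction that most increases the loss.
import numpy as np

def fgsm_perturb(x, y, w, b, eps=0.1):
    # Gradient of the logistic loss w.r.t. the input is (p - y) * w.
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y)[:, None] * w
    return x + eps * np.sign(grad_x)        # FGSM step

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5)); y = (X[:, 0] > 0).astype(float)
w, b, lr = np.zeros(5), 0.0, 0.1
for _ in range(200):
    X_adv = fgsm_perturb(X, y, w, b)        # adversarial views
    for data in (X, X_adv):                 # clean + adversarial loss
        p = 1.0 / (1.0 + np.exp(-(data @ w + b)))
        w -= lr * data.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
print("train acc:", (((X @ w + b) > 0) == y).mean())
```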

Using Silicon Photonics To Reduce Latency On Edge Devices


A new technical paper titled "Delocalized photonic deep learning on the internet’s edge" was published by researchers at MIT and Nokia Corporation. “Every time you want to run a neural network, you have to run the program, and how fast you can run the program depends on how fast you can pipe the program in from memory. Our pipe is massive — it corresponds to sending a full feature-leng... » read more

Vulnerability of Neural Networks Deployed As Black Boxes Across Accelerated HW Through Electromagnetic Side Channels


This technical paper titled "Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel" was presented by researchers at Columbia University, Adobe Research, and the University of Toronto at the 31st USENIX Security Symposium in August 2022. Abstract: "Neural network applications have become popular in both enterprise and personal settings. Network solutions are tune... » read more
