Put A Data Center In Your Phone!


Data centers heavily leverage FPGAs for AI acceleration. Why not do the same for low-power edge applications with embedded FPGAs (eFPGAs)? It is common knowledge to anyone connected to the cloud computing industry that data centers rely heavily on FPGAs as programmable accelerators enabling high-performance computing for AI training and inferencing. These heterogeneous computing solution... » read more

FP8: Cross-Industry Hardware Specification For AI Training And Inference (Arm, Intel, Nvidia)


Arm, Intel, and Nvidia have proposed a specification for an 8-bit floating point (FP8) format that could serve as a common interchange format for both AI training and inference, allowing AI models to operate and perform consistently across hardware platforms. Find the technical paper, titled "FP8 Formats For Deep Learning," here. Published Sept. 2022. Abstract: "FP8 is a natural p... » read more
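
The specification defines two encodings, E4M3 and E5M2, named for their exponent and mantissa bit widths. As a rough, illustrative sketch of how such a format is interpreted (assuming IEEE-754-style rules for E5M2; check the paper for the normative definitions), the following Python decodes a single E5M2 byte:

# Toy decoder for an FP8 E5M2 byte: 1 sign, 5 exponent, 2 mantissa bits.
# Assumes IEEE-754-style handling (bias 15, subnormals at exponent 0,
# infinity/NaN at the all-ones exponent). Illustrative only.
def decode_e5m2(byte: int) -> float:
    sign = -1.0 if (byte >> 7) & 0x1 else 1.0
    exp = (byte >> 2) & 0x1F        # 5 exponent bits
    mant = byte & 0x3               # 2 mantissa bits
    if exp == 0x1F:                 # all-ones exponent: infinity or NaN
        return sign * float("inf") if mant == 0 else float("nan")
    if exp == 0:                    # subnormal: no implicit leading 1
        return sign * (mant / 4.0) * 2.0 ** -14
    return sign * (1.0 + mant / 4.0) * 2.0 ** (exp - 15)

print(decode_e5m2(0x3C))  # 0x3C encodes +1.0 under these assumptions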

New Method of Comparing Neural Networks (Los Alamos National Lab)


A new research paper titled "If You’ve Trained One You’ve Trained Them All: Inter-Architecture Similarity Increases With Robustness" from researchers at Los Alamos National Laboratory (LANL) was recently presented at the Conference on Uncertainty in Artificial Intelligence. The team developed a new approach for comparing neural networks and "applied their new metric of network simila... » read more

Convolutional Neural Networks: Co-Design of Hardware Architecture and Compression Algorithm


Researchers at Soongsil University (Korea) published "A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration." Abstract: "Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction... » read more
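
Compression techniques such as pruning and quantization are typically co-designed with the accelerator so that induced sparsity and reduced bit widths translate into real hardware savings. As a generic illustration of one such technique (not a method taken from the paper), here is a minimal magnitude-pruning sketch in NumPy:

import numpy as np

# Generic magnitude pruning: zero out the smallest weights by absolute value.
# Real CNN compression pipelines pair this with retraining and quantization.
def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    threshold = np.quantile(np.abs(weights), sparsity)   # cut-off magnitude
    return weights * (np.abs(weights) >= threshold)

# Hypothetical 3x3 conv filter bank: 16 output channels, 8 input channels
w = np.random.randn(16, 8, 3, 3).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.5)
print("zeroed fraction:", float((w_pruned == 0).mean()))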

Vulnerability of Neural Networks Deployed As Black Boxes Across Accelerated HW Through Electromagnetic Side Channels


This technical paper titled "Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel" was presented by researchers at Columbia University, Adobe Research and University of Toronto at the 31st USENIX Security Symposium in August 2022. Abstract: "Neural network applications have become popular in both enterprise and personal settings. Network solutions are tune... » read more

Training a Quantum Neural Network Requires Only A Small Amount of Data


A new research paper titled "Generalization in quantum machine learning from few training data" was published by researchers at Technical University of Munich, Munich Center for Quantum Science and Technology (MCQST), Caltech, and Los Alamos National Lab. “Many people believe that quantum machine learning will require a lot of data. We have rigorously shown that for many relevant problems,... » read more
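
The headline result, roughly: the generalization error of a quantum machine learning model with T trainable gates trained on N examples scales at worst as sqrt(T/N), and it improves further when only a small subset of the gates changes substantially during optimization. This is a paraphrase; see the paper for the precise statement and its assumptions.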

Biocompatible Bilayer Graphene-Based Artificial Synaptic Transistors (BLAST) Capable of Mimicking Synaptic Behavior


This new technical paper titled "Metaplastic and energy-efficient biocompatible graphene artificial synaptic transistors for enhanced accuracy neuromorphic computing" was published by researchers at The University of Texas at Austin and Sandia National Laboratories. Abstract: "CMOS-based computing systems that employ the von Neumann architecture are relatively limited when it comes to para... » read more

Distilling The Essence Of Four DAC Keynotes


Chip design and verification are facing a growing number of challenges. How they will be solved — particularly with the addition of machine learning — is a major question for the EDA industry, and it was a common theme among four keynote speakers at this month's Design Automation Conference. DAC has returned as a live event, and this year's keynotes involved the leaders of a systems comp... » read more

Improving Yield With Machine Learning


Machine learning is becoming increasingly valuable in semiconductor manufacturing, where it is being used to improve yield and throughput. This is especially important in process control, where data sets are noisy. Neural networks can identify patterns beyond human capability, or perform classification faster than humans can. Consequently, they are being deployed across a variety of manufacturing proce... » read more
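
To make the kind of task concrete (a generic, hypothetical sketch, not a tool described in the article), the snippet below trains a small classifier to flag likely-failing dies from synthetic parametric-test features; a production flow would use real inline metrology and test data and far more careful validation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pass/fail classification from noisy parametric measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))          # 8 synthetic test features per die
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))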

ISA Extension For Low-Precision NN Training On RISC-V Cores


A new technical paper titled "MiniFloat-NN and ExSdotp: An ISA Extension and a Modular Open Hardware Unit for Low-Precision Training on RISC-V cores" comes from researchers at IIS, ETH Zurich; DEI, University of Bologna; and Axelera AI. Abstract: "Low-precision formats have recently driven major breakthroughs in neural network (NN) training and inference by reducing the memory footprint of the N... » read more
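
The "expanding" in a sum-of-dot-products unit refers, roughly, to computing products on narrow operands while accumulating them in a wider format, which limits the rounding and overflow loss of accumulating at input precision. Below is a minimal software analogue of that idea (not the hardware unit or the ISA extension itself, and using float16 as the narrow format since NumPy has no FP8 type):

import numpy as np

# Software analogue of an expanding sum-of-dot-products: narrow inputs,
# wide (float32) accumulation. Illustrative sketch only.
def expanding_dotp(a: np.ndarray, b: np.ndarray) -> np.float32:
    a16, b16 = a.astype(np.float16), b.astype(np.float16)   # narrow operands
    return np.float32(np.sum(a16.astype(np.float32) * b16.astype(np.float32)))

a, b = np.random.randn(1024), np.random.randn(1024)
print(expanding_dotp(a, b), np.dot(a, b))   # compare against full precision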
