Using BDA To Predict SAQP Pitch Walk


A new technical paper titled "Bayesian dropout approximation in deep learning neural networks: analysis of self-aligned quadruple patterning" was published by researchers at IBM TJ Watson Research Center and Rensselaer Polytechnic Institute. Find the technical paper here. Published November 2022.  Open Access. Scott D. Halle, Derren N. Dunn, Allen H. Gabor, Max O. Bloomfield, and Mark Sh... » read more

New Class of Electrically Driven Optical Nonvolatile Memory


A new technical paper titled "Electrical Programmable Multi-Level Non-volatile Photonic Random-Access Memory" was published by researchers at George Washington University, Optelligence, MIT, and the University of Central Florida. Researchers demonstrate "a multi-state electrically-programmed low-loss non-volatile photonic memory based on a broadband transparent phase change material (Ge2Sb2S... » read more

Training An ML Model On An Intelligent Edge Device Using Less Than 256KB Of Memory


A new technical paper titled "On-Device Training Under 256KB Memory" was published by researchers at MIT and MIT-IBM Watson AI Lab. “Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a bro... » read more

Transistor-Free Compute-In-Memory Architecture


A new technical paper titled "Reconfigurable Compute-In-Memory on Field-Programmable Ferroelectric Diodes" was recently published by researchers at University of Pennsylvania, Sandia National Labs, and Brookhaven National Lab. The compute-in-memory design is different as it is completely transistor-free. “Even when used in a compute-in-memory architecture, transistors compromise the access... » read more

Put A Data Center In Your Phone!


Data centers heavily leverage FPGAs for AI acceleration. Why not do the same for low-power edge applications with embedded FPGA (eFPGA)? It’s common knowledge for anyone connected to the cloud computing industry that data centers rely on FPGAs as programmable accelerators, enabling high-performance computing for AI training and inferencing. These heterogeneous computing solution... » read more

FP8: Cross-Industry Hardware Specification For AI Training And Inference (Arm, Intel, Nvidia)


Arm, Intel, and Nvidia proposed a specification for an 8-bit floating point (FP8) format that could provide a common interchange format for both AI training and inference, allowing AI models to operate and perform consistently across hardware platforms. Find the technical paper titled "FP8 Formats For Deep Learning" here. Published September 2022. Abstract: "FP8 is a natural p... » read more
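
The proposal covers two encodings, E4M3 (4 exponent bits, 3 mantissa bits) and E5M2 (5 exponent bits, 2 mantissa bits), trading precision against dynamic range. The sketch below works out the ranges those layouts imply, following the paper's special-value conventions (E5M2 reserves its top exponent code IEEE-style for infinities and NaNs; E4M3 reserves only the all-ones bit pattern for NaN); it is an illustration, not the spec's reference code.

```python
def fp8_limits(exp_bits, man_bits, bias, ieee_like):
    """Largest/smallest magnitudes for an FP8 layout. ieee_like=True
    reserves the top exponent code for inf/NaN (E5M2); False reserves
    only the all-ones bit pattern for NaN (E4M3)."""
    if ieee_like:
        max_exp = ((1 << exp_bits) - 2) - bias
        max_man = (1 << man_bits) - 1
    else:
        max_exp = ((1 << exp_bits) - 1) - bias
        max_man = (1 << man_bits) - 2
    max_val = 2.0**max_exp * (1 + max_man / (1 << man_bits))
    min_normal = 2.0 ** (1 - bias)
    min_subnormal = 2.0 ** (1 - bias - man_bits)
    return max_val, min_normal, min_subnormal

print(fp8_limits(4, 3, bias=7, ieee_like=False))  # E4M3: 448.0, 2**-6, 2**-9
print(fp8_limits(5, 2, bias=15, ieee_like=True))  # E5M2: 57344.0, 2**-14, 2**-16
```

E4M3 keeps an extra mantissa bit for precision, while E5M2 trades it for the wider dynamic range that gradients tend to need.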

New Method of Comparing Neural Networks (Los Alamos National Lab)


A new research paper titled "If You’ve Trained One You’ve Trained Them All: Inter-Architecture Similarity Increases With Robustness" from researchers at Los Alamos National Laboratory (LANL) and was recently presented at the Conference on Uncertainty in Artificial Intelligence. The team developed a new approach for comparing neural networks and "applied their new metric of network simila... » read more

Convolutional Neural Networks: Co-Design of Hardware Architecture and Compression Algorithm


Researchers at Soongsil University (Korea) published "A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration." Abstract: "Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction... » read more
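
A recurring theme in the efficient-CNN literature such surveys cover is restructuring convolutions to cut multiply-accumulate (MAC) counts, the classic case being the depthwise-separable factorization. Here is a quick worked comparison, with the layer shape chosen arbitrarily for illustration.

```python
# MACs for one conv layer of spatial size H x W, C_in -> C_out channels,
# kernel K x K. The shape below is an arbitrary example.
H, W, C_in, C_out, K = 56, 56, 128, 128, 3

standard = H * W * C_in * C_out * K * K   # dense convolution
depthwise = H * W * C_in * K * K          # per-channel K x K filtering
pointwise = H * W * C_in * C_out          # 1x1 channel mixing
separable = depthwise + pointwise

print(f"standard:  {standard / 1e6:.1f}M MACs")
print(f"separable: {separable / 1e6:.1f}M MACs "
      f"({standard / separable:.1f}x fewer)")
# The savings ratio is roughly 1/C_out + 1/K^2 -- about 8.4x here.
```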

Vulnerability of Neural Networks Deployed As Black Boxes Across Accelerated HW Through Electromagnetic Side Channels


This technical paper titled "Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel" was presented by researchers at Columbia University, Adobe Research, and the University of Toronto at the 31st USENIX Security Symposium in August 2022. Abstract: "Neural network applications have become popular in both enterprise and personal settings. Network solutions are tune... » read more
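
The core observation behind such attacks is that each GPU kernel's electromagnetic signature depends on layer hyperparameters, so trace segments can be matched against candidate configurations. The sketch below is purely conceptual and far simpler than the paper's pipeline: it assumes, hypothetically, that a fully connected layer's kernel duration scales with its MAC count, then enumerates layer widths consistent with an observed duration.

```python
# Purely conceptual: if a fully connected layer's kernel time were
# proportional to in_dim * out_dim, an observed segment duration would
# narrow the candidate (in_dim, out_dim) pairs. All numbers invented.
TIME_PER_MAC = 1e-9   # hypothetical seconds per multiply-accumulate
OBSERVED = 2.62e-4    # hypothetical measured segment duration (s)
TOLERANCE = 0.02      # 2% measurement slack

candidates = [
    (i, o)
    for i in (128, 256, 512, 1024)
    for o in (128, 256, 512, 1024)
    if abs(i * o * TIME_PER_MAC - OBSERVED) / OBSERVED < TOLERANCE
]
print(candidates)  # [(256, 1024), (512, 512), (1024, 256)]
```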

Training a Quantum Neural Network Requires Only A Small Amount of Data


A new research paper titled "Generalization in quantum machine learning from few training data" was published by researchers at Technical University of Munich, Munich Center for Quantum Science and Technology (MCQST), Caltech, and Los Alamos National Lab. “Many people believe that quantum machine learning will require a lot of data. We have rigorously shown that for many relevant problems,... » read more
