Training a Quantum Neural Network Requires Only A Small Amount of Data


A new research paper titled "Generalization in quantum machine learning from few training data" was published by researchers at Technical University of Munich, Munich Center for Quantum Science and Technology (MCQST), Caltech, and Los Alamos National Lab. “Many people believe that quantum machine learning will require a lot of data. We have rigorously shown that for many relevant problems,... » read more

Biocompatible Bilayer Graphene-Based Artificial Synaptic Transistors (BLAST) Capable of Mimicking Synaptic Behavior


This new technical paper titled "Metaplastic and energy-efficient biocompatible graphene artificial synaptic transistors for enhanced accuracy neuromorphic computing" was published by researchers at The University of Texas at Austin and Sandia National Laboratories. Abstract "CMOS-based computing systems that employ the von Neumann architecture are relatively limited when it comes to para... » read more

Distilling The Essence Of Four DAC Keynotes


Chip design and verification are facing a growing number of challenges. How they will be solved — particularly with the addition of machine learning — is a major question for the EDA industry, and it was a common theme among four keynote speakers at this month's Design Automation Conference. DAC has returned as a live event, and this year's keynotes involved the leaders of a systems comp... » read more

Improving Yield With Machine Learning


Machine learning is becoming increasingly valuable in semiconductor manufacturing, where it is being used to improve yield and throughput. This is especially important in process control, where data sets are noisy. Neural networks can identify patterns that exceed human capability, or perform classification faster. Consequently, they are being deployed across a variety of manufacturing proce... » read more

ISA Extension For Low-Precision NN Training On RISC-V Cores


New technical paper titled "MiniFloat-NN and ExSdotp: An ISA Extension and a Modular Open Hardware Unit for Low-Precision Training on RISC-V cores" from researchers at IIS, ETH Zurich; DEI, University of Bologna; and Axelera AI. Abstract "Low-precision formats have recently driven major breakthroughs in neural network (NN) training and inference by reducing the memory footprint of the N... » read more
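The core pattern behind expanding dot-product hardware like the paper's ExSdotp unit is multiplying narrow-format operands while accumulating in a wider format, so the running sum is not degraded by the low-precision storage. A minimal sketch of that idea, using NumPy's float16/float32 as stand-ins (the function name and parameters are illustrative, not the paper's ISA):

```python
import numpy as np

# Hedged sketch: low-precision operands, wide accumulator.
# float16 stands in for a MiniFloat-style storage format;
# float32 stands in for the expanded accumulation precision.
def expanding_dotp(a, b, acc=0.0):
    """Multiply float16 operands pairwise, accumulate in float32."""
    a16 = np.asarray(a, dtype=np.float16)
    b16 = np.asarray(b, dtype=np.float16)
    # Widen the operands before multiply-accumulate so the narrow
    # format only limits storage, not the running sum.
    return np.float32(acc) + np.float32(a16).dot(np.float32(b16))

result = expanding_dotp([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

Keeping the accumulator wide is what lets training survive aggressive operand formats: individual products tolerate rounding, but a narrow running sum would swallow small gradient contributions.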

AI At The Edge: Optimizing AI Algorithms Without Sacrificing Accuracy


The ultimate measure of success for AI will be how much it increases productivity in our daily lives. However, the industry has huge challenges in evaluating progress. The vast number of AI applications is in constant churn: finding the right algorithm, optimizing the algorithm, and finding the right tools. In addition, complex hardware engineering is rapidly being updated with many different s... » read more

Deep Reinforcement Learning to Dynamically Configure NoC Resources


New research paper titled "Deep Reinforcement Learning Enabled Self-Configurable Networks-on-Chip for High-Performance and Energy-Efficient Computing Systems" from Md Farhadur Reza at Eastern Illinois University. Find the open access technical paper here. Published June 2022. M. F. Reza, "Deep Reinforcement Learning Enabled Self-Configurable Networks-on-Chip for High-Performance and Energ... » read more

ETH Zurich: PIM (Processing In Memory) Architecture, UPMEM & PrIM Benchmarks


New technical paper titled "Benchmarking a New Paradigm: An Experimental Analysis of a Real Processing-in-Memory Architecture" led by researchers at ETH Zurich. Researchers provide a comprehensive analysis of the first publicly-available real-world PIM architecture, UPMEM, and introduce PrIM (Processing-In-Memory benchmarks), a benchmark suite of 16 workloads from different application domai... » read more

Deep Learning Applications For Material Sciences: Methods, Recent Developments


New technical paper titled "Recent advances and applications of deep learning methods in materials science" from researchers at NIST, UCSD, Lawrence Berkeley National Laboratory, Carnegie Mellon University, Northwestern University, and Columbia University. Abstract "Deep learning (DL) is one of the fastest-growing topics in materials data science, with rapidly emerging applications spanning... » read more

Novel Spintronic Neuro-Mimetic Device Emulating LIF Neuron Dynamics With High Energy Efficiency And Compact Footprint


New technical paper titled "Noise resilient leaky integrate-and-fire neurons based on multi-domain spintronic devices" from researchers at Purdue University. Abstract "The capability of emulating neural functionalities efficiently in hardware is crucial for building neuromorphic computing systems. While various types of neuro-mimetic devices have been investigated, it remains challenging to... » read more
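For readers unfamiliar with the neuron model being emulated, the textbook leaky integrate-and-fire dynamics can be sketched in a few lines. This is the generic LIF equation only, not the paper's spintronic device model, and the parameter values are illustrative assumptions:

```python
# Minimal leaky integrate-and-fire (LIF) sketch: the membrane
# potential leaks toward rest, integrates input, and emits a
# spike with a hard reset when it crosses threshold.
def lif_run(current, dt=1e-3, tau=20e-3, v_rest=0.0,
            v_thresh=1.0, v_reset=0.0):
    """Integrate an input-current sequence; return (trace, spike steps)."""
    v, trace, spikes = v_rest, [], []
    for step, i_in in enumerate(current):
        # Euler step: leak toward v_rest plus input drive.
        v += (-(v - v_rest) + i_in) * dt / tau
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(step)
            v = v_reset            # hard reset after the spike
        trace.append(v)
    return trace, spikes

trace, spikes = lif_run([1.5] * 100)  # constant suprathreshold drive
```

With a constant drive above threshold, the neuron charges, fires, resets, and repeats at a regular rate; the noise-resilience question the paper addresses is how robust that threshold crossing remains when the device implementing it is stochastic.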
