
Neural Architecture & Hardware Accelerator Co-Design Framework (Princeton/Stanford)


A new technical paper titled "CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework" was published by researchers at Princeton University and Stanford University. "Recently, automated co-design of machine learning (ML) models and accelerator architectures has attracted significant attention from both industry and academia. However, most co-design frameworks either... » read more
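
Because the framework couples a model search with a hardware search, a useful mental model is a joint loop that samples both and scores them together. Below is a minimal random-search sketch of that idea in Python; it is not CODEBench's actual algorithm, and the search spaces, accuracy proxy, and latency model are all invented for illustration.

import random

# Hypothetical, tiny search spaces for illustration only.
MODEL_SPACE = [{"layers": l, "width": w} for l in (4, 8, 16) for w in (32, 64, 128)]
HW_SPACE = [{"pe_array": p, "sram_kb": s} for p in (8, 16, 32) for s in (128, 256, 512)]

def accuracy_proxy(model):
    # Stand-in for actually training and evaluating the candidate model.
    return model["layers"] * model["width"]

def latency_proxy(model, hw):
    # Stand-in for an analytical cost model: total work over parallelism.
    work = model["layers"] * model["width"] ** 2
    return work / (hw["pe_array"] * min(hw["sram_kb"], 256))

def codesign_random_search(trials=200, latency_budget=50.0):
    best = None
    for _ in range(trials):
        model = random.choice(MODEL_SPACE)
        hw = random.choice(HW_SPACE)
        if latency_proxy(model, hw) > latency_budget:
            continue  # the joint constraint is what couples the two choices
        score = accuracy_proxy(model)
        if best is None or score > best[0]:
            best = (score, model, hw)
    return best

print(codesign_random_search())

The point of the sketch is only that neither choice can be scored in isolation: the latency constraint depends on both the model and the hardware configuration at once.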

Convolutional Neural Networks: Co-Design of Hardware Architecture and Compression Algorithm


Researchers at Soongsil University (Korea) published "A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration." Abstract: "Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction... » read more
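
The "feature extraction" the abstract refers to is, at bottom, convolution of an input with small kernels. A minimal numpy sketch, using a fixed edge-detection kernel where a CNN would use learned weights:

import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2D convolution (strictly cross-correlation, as in most CNN libraries).
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector; in a real CNN this kernel would be learned from data.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

image = np.zeros((8, 8))
image[:, 4:] = 1.0                    # left half dark, right half bright
feature_map = conv2d(image, sobel_x)  # strong response along the vertical edge
print(feature_map)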

Neuromorphic Computing: Challenges, Opportunities Including Materials, Algorithms, Devices & Ethics


This new research paper titled "2022 roadmap on neuromorphic computing and engineering" comes from numerous researchers at Technical University of Denmark, Instituto de Microelectrónica de Sevilla, CSIC, University of Seville, and many others. Partial Abstract: "The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges... » read more

Deep Learning Applications For Material Sciences: Methods, Recent Developments


A new technical paper titled "Recent advances and applications of deep learning methods in materials science" comes from researchers at NIST, UCSD, Lawrence Berkeley National Laboratory, Carnegie Mellon University, Northwestern University, and Columbia University. Abstract: "Deep learning (DL) is one of the fastest-growing topics in materials data science, with rapidly emerging applications spanning... » read more

New Neural Processors Address Emerging Neural Networks


It’s been ten years since AlexNet, a deep learning convolutional neural network (CNN) model running on GPUs, displaced more traditional vision processing algorithms to win the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). AlexNet and its successors provided significant improvements in object classification accuracy at the cost of intense computational complexity and large data... » read more
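
The computational complexity the article alludes to is easy to quantify: a convolutional layer performs roughly (output pixels) x (filters) x (kernel volume) multiply-accumulates. A quick Python sketch, using approximate shapes for AlexNet's first layer:

def conv_macs(out_h, out_w, out_ch, k_h, k_w, in_ch):
    # Multiply-accumulate count for one convolutional layer.
    return out_h * out_w * out_ch * k_h * k_w * in_ch

# AlexNet conv1 (approximate): 96 filters of 11x11x3 producing a 55x55 map.
macs = conv_macs(55, 55, 96, 11, 11, 3)
print(f"conv1 alone: {macs / 1e6:.0f}M MACs")  # roughly 105M MACs for one layer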

Absence of Barren Plateaus in Quantum Convolutional Neural Networks


Abstract: Quantum neural networks (QNNs) have generated excitement around the possibility of efficiently analyzing quantum data. But this excitement has been tempered by the existence of exponentially vanishing gradients, known as barren plateau landscapes, for many QNN architectures. Recently, quantum convolutional neural networks (QCNNs) have been proposed, involving a sequence of convolutional... » read more
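
Results in this area are usually stated in terms of gradient variance. In LaTeX, the general shape of the two claims (a paraphrase of the standard formulations, not the paper's exact constants) is:

% Barren plateau: for generic deep QNNs on n qubits, the cost gradient
% concentrates exponentially, so random initialization is untrainable.
\mathrm{Var}_{\boldsymbol{\theta}}\!\left[\frac{\partial C}{\partial \theta_k}\right] \in O\!\left(b^{-n}\right), \quad b > 1

% QCNN claim: the variance vanishes at worst polynomially in n,
% so gradients stay large enough to train at practical qubit counts.
\mathrm{Var}_{\boldsymbol{\theta}}\!\left[\frac{\partial C}{\partial \theta_k}\right] \in \Omega\!\left(\frac{1}{\mathrm{poly}(n)}\right)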

Challenges For New AI Processor Architectures


Investment money is flooding into the development of new AI processors for the data center, but the problems here are unique, the results are unpredictable, and the competition has deep pockets and very sticky products. The biggest issue may be insufficient data about the end market. When designing a new AI processor, every design team has to answer one fundamental question: how much flexibility... » read more

There’s More To Machine Learning Than CNNs


Neural networks – and convolutional neural networks (CNNs) in particular – have received an abundance of attention over the last few years, but they're not the only useful machine-learning structures. There are numerous other ways for machines to learn how to solve problems, and there is room for alternative approaches. “Neural networks can do all this really complex... » read more
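
As one concrete example of a non-neural learning structure, here is a k-nearest-neighbors classifier written from scratch in Python; it is chosen purely for brevity and is not necessarily one of the alternatives the article discusses.

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Classify x by majority vote among its k nearest training points.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy 2D dataset: class 0 clusters near the origin, class 1 near (5, 5).
X = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([0.5, 0.5])))  # -> 0
print(knn_predict(X, y, np.array([5.5, 5.5])))  # -> 1

There is no gradient descent and no layered structure here; the "learning" is simply memorizing labeled examples and comparing distances at prediction time.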

Xilinx AI Engines And Their Applications


This white paper explores the architecture, applications, and benefits of using Xilinx's new AI Engine for compute-intensive applications like 5G cellular and machine learning DNN/CNN. 5G requires five to ten times higher compute density compared with prior generations; AI Engines have been optimized for DSP, meeting both the throughput and compute requirements to deliver the high... » read more

Fast, Low-Power Inferencing


Power and performance are often thought of as opposing goals, two sides of the same coin if you will. A system can be run really fast, but it will burn a lot of power. Ease up on the accelerator and power consumption goes down, but so does performance. Optimizing for both power and performance is challenging. Inferencing algorithms for convolutional neural networks (CNNs) are compute-intensive... » read more
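
One widely used lever for this tradeoff in CNN inference is quantization: replacing 32-bit floats with 8-bit integers cuts both memory traffic and per-MAC energy. A minimal symmetric-quantization sketch in Python (a generic scheme, not any particular vendor's):

import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization of float32 weights to int8.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"4x smaller weights, max abs error {err:.4f}")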
