CMOS-Based HW Topology For Single-Cycle In-Memory XOR/XNOR Operations


A technical paper titled “CMOS-based Single-Cycle In-Memory XOR/XNOR” was published by researchers at the University of Tennessee, the University of Virginia, and Oak Ridge National Laboratory (ORNL). Abstract: "Big data applications are on the rise, and so is the number of data centers. The ever-increasing massive data pool needs to be periodically backed up in a secure environment. Moreover, a ... » read more
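
The operation itself is easy to model in software. Below is a toy sketch (illustrative only; the names and structure are mine, not the paper's) of the bulk bitwise XOR/XNOR over two memory rows that an in-memory design would produce in a single cycle, rather than two reads followed by an ALU pass:

```python
# Toy software model of bulk in-memory XOR/XNOR (hypothetical, not the
# paper's circuit): each "row" is a list of words, and the operation is
# applied element-wise, as a compute-in-memory array would do in place.
WORD_BITS = 8
MASK = (1 << WORD_BITS) - 1  # keeps XNOR's complement within the word width

def inmem_xor(row_a, row_b):
    """Bitwise XOR of two equal-length memory rows."""
    return [a ^ b for a, b in zip(row_a, row_b)]

def inmem_xnor(row_a, row_b):
    """Bitwise XNOR: the complement of XOR, masked to WORD_BITS."""
    return [(a ^ b) ^ MASK for a, b in zip(row_a, row_b)]

if __name__ == "__main__":
    a = [0b10110010, 0b11110000]
    b = [0b01010101, 0b11110000]
    print([f"{w:08b}" for w in inmem_xor(a, b)])   # ['11100111', '00000000']
    print([f"{w:08b}" for w in inmem_xnor(a, b)])  # ['00011000', '11111111']
```

XNOR also acts as a bitwise equality test (all ones when the operands match), which is one reason these primitives recur in comparison-heavy workloads.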

Embedded Automotive Platforms: Evaluating Power And Performance Of Image Classification And Object Detection CNNs


A technical paper titled “Performance/power assessment of CNN packages on embedded automotive platforms” was published by researchers at the University of Modena and Reggio Emilia. Abstract: "The rise of power-efficient embedded computers based on highly-parallel accelerators opens a number of opportunities and challenges for researchers and engineers, and paved the way to the era of edge com... » read more
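
For context on what such an assessment measures, here is a minimal latency-benchmarking sketch (hypothetical; `run_inference` is a stand-in for one CNN forward pass and is not from the paper), following the usual warm-up-then-measure pattern:

```python
# Minimal inference-latency benchmark sketch (illustrative only).
import time
import statistics

def run_inference(frame):
    # Hypothetical placeholder standing in for one CNN forward pass.
    return sum(frame) % 256

def benchmark(frames, warmup=10, runs=100):
    for f in frames[:warmup]:
        run_inference(f)  # warm caches and clocks before timing
    latencies_ms = []
    for i in range(runs):
        start = time.perf_counter()
        run_inference(frames[i % len(frames)])
        latencies_ms.append((time.perf_counter() - start) * 1e3)
    return statistics.mean(latencies_ms), statistics.stdev(latencies_ms)

frames = [list(range(1000))] * 32
mean_ms, std_ms = benchmark(frames)
print(f"latency: {mean_ms:.3f} ms +/- {std_ms:.3f} ms")
```

Power is typically sampled concurrently from on-board sensors, with efficiency then reported as inferences per joule.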

CNN Hardware Architecture With Weights Generator Module That Alleviates Impact Of The Memory Wall


A technical paper titled “Mitigating Memory Wall Effects in CNN Engines with On-the-Fly Weights Generation” was published by researchers at the Samsung AI Center and the University of Cambridge. Abstract: "The unprecedented accuracy of convolutional neural networks (CNNs) across a broad range of AI tasks has led to their widespread deployment in mobile and embedded settings. In a pursuit for high... » read more
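
The general idea, independent of the paper's specific design, is to trade arithmetic for off-chip bandwidth: store a compact per-layer seed and expand it into the full weight tensor inside the engine. A toy sketch, with a PRNG standing in for the paper's trained generator module:

```python
# Sketch of on-the-fly weight generation (the PRNG is a stand-in for a
# learned generator; nothing here reproduces the paper's architecture).
import numpy as np

def expand_weights(seed, shape):
    """Deterministically regenerate a weight tensor from a small seed."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape, dtype=np.float32)

def layer(x, seed, out_features, in_features):
    w = expand_weights(seed, (out_features, in_features))  # generated, not fetched
    return np.maximum(x @ w.T, 0.0)  # matmul + ReLU

x = np.ones((1, 64), dtype=np.float32)
y = layer(x, seed=42, out_features=128, in_features=64)
print(y.shape)  # (1, 128)
```

The memory-wall saving comes from replacing a DRAM stream of out_features x in_features words per layer with a single small seed fetch.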

Neural Architecture & Hardware Accelerator Co-Design Framework (Princeton/Stanford)


A new technical paper titled "CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework" was published by researchers at Princeton University and Stanford University. "Recently, automated co-design of machine learning (ML) models and accelerator architectures has attracted significant attention from both the industry and academia. However, most co-design frameworks either... » read more
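
In outline, co-design is a joint search over two configuration spaces at once. The loop below is a generic sketch of that idea (not CODEBench's actual algorithm; the sampling choices and `evaluate` proxies are hypothetical):

```python
# Generic model/accelerator co-design search sketch: sample pairs,
# score accuracy and latency proxies, keep the Pareto-efficient set.
import random

def sample_pair():
    model = {"depth": random.choice([8, 12, 16]),
             "width": random.choice([32, 64, 128])}
    accel = {"pes": random.choice([64, 256, 1024]),
             "sram_kb": random.choice([128, 512])}
    return model, accel

def evaluate(model, accel):
    # Hypothetical analytic proxies; real frameworks use trained
    # predictors or cycle-level simulation here.
    acc = model["depth"] * model["width"] / 2048
    lat = model["depth"] * model["width"] / accel["pes"] ** 0.5
    weight_kb = model["depth"] * model["width"] ** 2 * 4 / 1024  # fp32 params
    if weight_kb > accel["sram_kb"]:
        lat *= 2  # crude penalty for spilling weights off-chip
    return acc, lat

pareto = []  # entries: (accuracy, latency, model, accel)
for _ in range(200):
    m, a = sample_pair()
    acc, lat = evaluate(m, a)
    if not any(p[0] >= acc and p[1] <= lat for p in pareto):
        pareto = [p for p in pareto if not (acc >= p[0] and lat <= p[1])]
        pareto.append((acc, lat, m, a))
print(len(pareto), "Pareto-efficient (model, accelerator) pairs")
```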

Convolutional Neural Networks: Co-Design of Hardware Architecture and Compression Algorithm


Researchers at Soongsil University (Korea) published "A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration." Abstract: "Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction... » read more
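
Two of the compression techniques such a survey covers, magnitude pruning and uniform quantization, fit in a few lines. A toy sketch (not tied to any specific scheme in the paper):

```python
# Toy weight compression: magnitude pruning, then symmetric int8
# quantization (illustrative only).
import numpy as np

def prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_int8(w):
    """Symmetric uniform quantization with a single per-tensor scale."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(prune(w))
print("nonzeros:", np.count_nonzero(q), "of", q.size, "| scale:", scale)
```

Hardware co-design then exploits the resulting sparsity and narrow datatypes, for example by skipping zero operands or packing int8 multiply-accumulates.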

Neuromorphic Computing: Challenges, Opportunities Including Materials, Algorithms, Devices & Ethics


This new research paper titled "2022 roadmap on neuromorphic computing and engineering" is from researchers at the Technical University of Denmark, the Instituto de Microelectrónica de Sevilla (CSIC/University of Seville), and many other institutions. Partial Abstract: "The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the chall... » read more

Deep Learning Applications For Material Sciences: Methods, Recent Developments


A new technical paper titled "Recent advances and applications of deep learning methods in materials science" was published by researchers at NIST, UCSD, Lawrence Berkeley National Laboratory, Carnegie Mellon University, Northwestern University, and Columbia University. Abstract: "Deep learning (DL) is one of the fastest-growing topics in materials data science, with rapidly emerging applications spanning... » read more

New Neural Processors Address Emerging Neural Networks


It’s been ten years since AlexNet, a deep learning convolutional neural network (CNN) model running on GPUs, displaced more traditional vision processing algorithms to win the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). AlexNet and its successors provided significant improvements in object classification accuracy at the cost of intense computational complexity and large da... » read more

Absence of Barren Plateaus in Quantum Convolutional Neural Networks


Abstract: "Quantum neural networks (QNNs) have generated excitement around the possibility of efficiently analyzing quantum data. But this excitement has been tempered by the existence of exponentially vanishing gradients, known as barren plateau landscapes, for many QNN architectures. Recently, quantum convolutional neural networks (QCNNs) have been proposed, involving a sequence of convol... » read more
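
The contrast the abstract describes can be stated as bounds on gradient variance (notation mine, summarizing rather than quoting the paper). For a cost C(θ) on n qubits, a barren plateau means the variance of each partial derivative vanishes exponentially, whereas the QCNN guarantee is a polynomial lower bound, so gradients stay large enough to train:

```latex
% Barren plateau (many QNN ansätze): exponentially vanishing gradient variance.
\mathrm{Var}_{\boldsymbol{\theta}}\!\left[\partial_{\theta_k} C(\boldsymbol{\theta})\right]
  \in O\!\left(b^{-n}\right), \quad b > 1.
% QCNN guarantee (as summarized from the abstract): at worst polynomial decay.
\mathrm{Var}_{\boldsymbol{\theta}}\!\left[\partial_{\theta_k} C(\boldsymbol{\theta})\right]
  \in \Omega\!\left(\tfrac{1}{\mathrm{poly}(n)}\right).
```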

Challenges For New AI Processor Architectures


Investment money is flooding into the development of new AI processors for the data center, but the problems here are unique, the results are unpredictable, and the competition has deep pockets and very sticky products. The biggest issue may be insufficient data about the end market. When designing a new AI processor, every design team has to answer one fundamental question — how much flex... » read more
