Convolutional Neural Networks: Co-Design of Hardware Architecture and Compression Algorithm


Researchers at Soongsil University (Korea) published "A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration." Abstract: "Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction... » read more

HW/SW Co-Design to Configure DNN Models On Energy Harvesting Devices


New technical paper titled "EVE: Environmental Adaptive Neural Network Models for Low-Power Energy Harvesting System" was published by researchers at UT San Antonio, University of Connecticut, and Lehigh University. According to the abstract: "This paper proposes EVE, an automated machine learning (autoML) co-exploration framework to search for desired multi-models with shared weights for... » read more

Multi-Task Network Pruning and Embedded Optimization for Real-time Deployment in ADAS


Abstract: "Camera-based Deep Learning algorithms are increasingly needed for perception in Automated Driving systems. However, constraints from the automotive industry challenge the deployment of CNNs by imposing embedded systems with limited computational resources. In this paper, we propose an approach to embed a multi-task CNN network under such conditions on a commercial prototy... » read more
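The pruning step the paper's title refers to is commonly implemented as magnitude pruning: weights with the smallest absolute values are zeroed so the network fits tighter compute and memory budgets. As a hedged illustration only (this is a generic sketch, not the paper's actual method, and `magnitude_prune` is a hypothetical helper name):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries until roughly `sparsity`
    fraction of the tensor is zero (unstructured magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 50% of a small random 3x3 conv kernel bank
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3, 8, 8))
pruned = magnitude_prune(w, 0.5)
```

Real ADAS deployments typically prune whole channels or filters (structured pruning) so the savings translate directly to speedups on embedded hardware, then fine-tune to recover accuracy.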

AI Inference Acceleration


Geoff Tate, CEO of Flex Logix, talks about considerations in choosing an AI inference accelerator, how that fits in with other processing elements on a chip, what tradeoffs are involved with reducing latency, and what considerations are the most important. » read more

Compiling And Optimizing Neural Nets


Edge inference engines often run a slimmed-down real-time engine that interprets a neural-network model, invoking kernels as it goes. But higher performance can be achieved by pre-compiling the model and running it directly, with no interpretation — as long as the use case permits it. At compile time, optimizations are possible that wouldn’t be available if interpreting. By quantizing au... » read more
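The quantization mentioned above maps floating-point weights and activations to low-precision integers at compile time, shrinking the model and enabling faster integer arithmetic. A minimal sketch of symmetric per-tensor int8 quantization (an illustrative example, not any specific vendor's compiler pass; the function names are hypothetical):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: scale floats into
    [-127, 127] using a single scale derived from max |x|."""
    scale = float(np.max(np.abs(x))) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, within half a quantization step
```

Because the scale is fixed at compile time, the runtime never touches floating point for these tensors, which is exactly the kind of optimization an interpreter invoking generic kernels cannot easily apply.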

Big Changes In AI Design


Semiconductor Engineering sat down to discuss AI and its move to the edge with Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus; Kris Ardis, executive director at Maxim Integrated; Steve Roddy, vice president of Arm's Products Learning Group; and Vinay Mehta, inference technical marketing manager at Flex Logix. What follows are excerpts of that ... » read more

Building AI SoCs


Ron Lowman, strategic marketing manager at Synopsys, looks at where AI is being used and how to develop chips when the algorithms are in a state of almost constant change. That includes what moves to the edge versus the data center, how algorithms are being compressed, and what techniques are being used to speed up these chips and reduce power. https://youtu.be/d32jtdFwpcE » read more