
Convolutional Neural Networks: Co-Design of Hardware Architecture and Compression Algorithm


Researchers at Soongsil University (Korea) published "A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration." Abstract: "Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction... » read more

Audio, Visual Advances Intensify IC Design Tradeoffs


A spike in the number of audio and visual sensors is greatly increasing design complexity in chips and systems, forcing engineers to make tradeoffs that can affect performance, power, and cost. Collectively, these sensors generate so much data that designers must consider where to process different data, how to prioritize it, and how to optimize it for specific applications. The tradeoffs in... » read more

Energy-Efficient Non-Uniform Last-Level Caches for Chip-Multiprocessors Based on Compression


Abstract: "With technology scaling, the size of cache systems in chip-multiprocessors (CMPs) has been dramatically increased to efficiently store and manipulate a large amount of data in future applications and decrease the gap between cores and off-chip memory accesses. For future CMP architectures, 3D stacking of LLCs has recently been introduced as a new methodology to combat the performance ... » read more

Auto Displays: Bigger, Brighter, More Numerous


Displays are rapidly becoming more critical to the central brains in automobiles, accelerating the adoption and evolution of this technology to handle multiple types of audio, visual, and other data traffic coming into and flowing throughout the vehicle. These changes are having a broad impact on the entire design-through-manufacturing flow for display chip architectures. In the past, these ... » read more

New Power, Performance Options At The Edge


Increasing compute intelligence at the edge is forcing chip architects to rethink how computing gets partitioned and prioritized, and what kinds of processing elements and memory configurations work best for a particular application. Sending raw data to the cloud for processing is both time- and resource-intensive, and it's often unnecessary because most of the data collected by a growing nu... » read more

11 Ways To Reduce AI Energy Consumption


As the machine-learning industry evolves, the focus has expanded from merely solving the problem to solving the problem better. “Better” often has meant accuracy or speed, but as data-center energy budgets explode and machine learning moves to the edge, energy consumption has taken its place alongside accuracy and speed as a critical issue. There are a number of approaches to neural netw... » read more

MIPI DSI-2 With VESA DSC Drives Performance For Next-Generation Displays


The Mobile Industry Processor Interface (MIPI) Alliance was formed in 2003 to address the fragmentation in the essential video interface technologies for cameras and displays in phones. Over the years, the alliance has significantly expanded its scope to publish specifications covering physical layer, multimedia, chip-to-chip, control/data, debug/trace, and software. With its broader mission... » read more

Power/Performance Bits: Dec. 31


Three-valued memory

Scientists at the Tokyo Institute of Technology and the University of Tokyo developed a new three-valued memory device inspired by solid-state lithium-ion batteries, which could potentially serve as low-power RAM. The new device consists of a stack of three solid layers made of lithium, lithium phosphate, and gold. This stack is essentially a miniature low-capacity... » read more

Where Is The Edge?


Mike Fitton, senior director of strategic planning at Achronix, talks about what the edge will look like, how that fits in with the cloud, what the requirements are both for processing and for storage, and how this concept will evolve. » read more

Machine Learning Inferencing At The Edge


Ian Bratt, fellow in Arm's machine learning group, talks about why machine learning inferencing at the edge is so difficult, what the tradeoffs are, how to optimize and accelerate data movement, and how it differs from developing other types of processors. » read more
