Energy-Efficient DRAM↔PIM Transfers for PIM Systems (KAIST)


A new technical paper titled "PIM-MMU: A Memory Management Unit for Accelerating Data Transfers in Commercial PIM Systems" was published by researchers at KAIST. Abstract "Processing-in-memory (PIM) has emerged as a promising solution for accelerating memory-intensive workloads, as it provides high memory bandwidth to the processing units. This approach has drawn attention not only from the... » read more
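
The paper targets the cost of moving data between host-managed DRAM and PIM-enabled memory regions. As a rough, back-of-envelope illustration (not the paper's model), the Python sketch below applies Amdahl's law to show how much a faster DRAM↔PIM copy path can improve end-to-end runtime; the copy fractions and acceleration factors are hypothetical placeholders.

    # Back-of-envelope Amdahl's-law estimate of how much a faster
    # DRAM<->PIM copy path speeds up a PIM-offloaded workload.
    # All numbers are illustrative placeholders, not values from the paper.

    def overall_speedup(copy_fraction, copy_speedup):
        """copy_fraction: share of runtime spent on DRAM<->PIM transfers.
        copy_speedup: how much faster the transfer path becomes."""
        return 1.0 / ((1.0 - copy_fraction) + copy_fraction / copy_speedup)

    for frac in (0.2, 0.4, 0.6):        # hypothetical transfer-bound fractions
        for accel in (2, 4, 8):         # hypothetical copy-engine acceleration
            print(f"copy fraction {frac:.0%}, copies {accel}x faster -> "
                  f"{overall_speedup(frac, accel):.2f}x overall")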

Computational SRAM (C-SRAM) Solution Combining In- and Near-Memory Computing Approaches


New academic paper titled "Towards a Truly Integrated Vector Processing Unit for Memory-bound Applications Based on a Cost-competitive Computational SRAM Design Solution", from researchers at Univ. Grenoble Alpes, CEA-LIST. Abstract "This article presents a Computational SRAM (C-SRAM) solution combining In- and Near-Memory Computing approaches. It allows performing arithmetic, logic, and co... » read more
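
The abstract describes C-SRAM as a vector unit that executes arithmetic and logic operations directly inside the memory macro. The NumPy sketch below is a purely functional illustration of the kind of wide element-wise operations such a unit would perform in place; the vector width and data types are assumptions for illustration, and no timing or circuit behavior is modeled.

    import numpy as np

    # Functional sketch of wide element-wise operations a computational-SRAM
    # vector unit might execute in place. Width and types are illustrative
    # assumptions; this models behavior only, not the circuit.

    VECTOR_WIDTH = 128                   # hypothetical row width in elements

    a = np.arange(VECTOR_WIDTH, dtype=np.uint16)
    b = np.full(VECTOR_WIDTH, 3, dtype=np.uint16)

    add_result = a + b                   # arithmetic op on a full row at once
    and_result = a & b                   # bitwise logic op on a full row at once
    cmp_mask   = a > b                   # comparison producing a per-element mask

    print(add_result[:4], and_result[:4], cmp_mask[:4])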

Making Sense Of New Edge-Inference Architectures


New edge-inference machine-learning architectures have been arriving at an astounding rate over the last year. Making sense of them all is a challenge. To begin with, not all ML architectures are alike. One of the complicating factors in understanding the different machine-learning architectures is the nomenclature used to describe them. You’ll see terms like “sea-of-MACs,” “systolic... » read more

Data Overload In The Data Center


Dealing with increasing volumes of data inside data centers requires an understanding of architectures, the flow of data between memory and processors, bandwidth, cache coherency, and new memory types and interfaces. Gary Ruggles, senior product marketing manager at Synopsys, talks about how these systems are being revamped to improve performance and reduce power. » read more

New Architectures, Much Faster Chips


The chip industry is making progress in multiple physical dimensions and with multiple architectural approaches, setting the stage for huge performance increases based on more modular and heterogeneous designs, new advanced packaging options, and continued scaling of digital logic for at least a couple more process nodes. A number of these changes have been discussed in recent conferences. I... » read more

Memory Access In AI Systems


Memory access is a key consideration in AI system design. Ron Lowman, strategic marketing manager for IP at Synopsys, talks about how memory affects overall power consumption, why the partitioning of on-chip and off-chip memory is so critical to performance and power, and how this changes from the cloud to the edge. » read more
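
A quick way to see why that partitioning dominates power is to weight each class of access by its energy cost. In the sketch below, the per-access energies are order-of-magnitude placeholders (on-chip SRAM accesses are commonly cited as far cheaper than off-chip DRAM accesses), and the access counts are hypothetical.

    # Rough energy model for one layer's memory traffic, showing why the
    # on-chip vs. off-chip split dominates power. Per-access energies are
    # order-of-magnitude placeholders, not vendor numbers.

    ENERGY_PJ = {"on_chip_sram": 10.0, "off_chip_dram": 1000.0}  # pJ per access

    def traffic_energy_uj(total_accesses, on_chip_fraction):
        on_chip  = total_accesses * on_chip_fraction
        off_chip = total_accesses * (1.0 - on_chip_fraction)
        pj = (on_chip  * ENERGY_PJ["on_chip_sram"] +
              off_chip * ENERGY_PJ["off_chip_dram"])
        return pj / 1e6                  # picojoules -> microjoules

    accesses = 10_000_000                # hypothetical accesses for one layer
    for frac in (0.5, 0.9, 0.99):
        print(f"{frac:.0%} of accesses served on-chip -> "
              f"{traffic_energy_uj(accesses, frac):,.1f} uJ")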

Scaling Up Compute-In-Memory Accelerators


Researchers are zeroing in on new architectures to boost performance by limiting the movement of data in a device, but this is proving to be much harder than it appears. The argument for memory-based computation is familiar by now. Many important computational workloads involve repetitive operations on large datasets. Moving data from memory to the processing unit and back — the so-called ... » read more
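
The excerpt's argument hinges on how much traffic a conventional load-compute-store path generates compared with computing where the data lives. As a minimal illustration, the sketch below counts the bytes a vector reduction moves across the memory interface in each case; the operand size and element width are assumptions.

    # Minimal data-movement comparison for a vector reduction, contrasting a
    # conventional processor-side path with an in-memory reduction that
    # returns only the result. Sizes are illustrative assumptions.

    ELEMENTS   = 1 << 20                 # hypothetical 1M-element operands
    ELEM_BYTES = 4                       # 32-bit elements

    # Conventional path: both operand vectors cross the memory interface.
    bytes_moved_cpu = 2 * ELEMENTS * ELEM_BYTES

    # In-/near-memory path: only the scalar result comes back.
    bytes_moved_pim = ELEM_BYTES

    print(f"processor-side reduction moves {bytes_moved_cpu / 2**20:.1f} MiB")
    print(f"in-memory reduction returns    {bytes_moved_pim} bytes")
    print(f"traffic reduction              ~{bytes_moved_cpu / bytes_moved_pim:,.0f}x")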

What Engineers Are Reading And Watching


By Brian Bailey and Ed Sperling. An important indicator of where the chip industry is heading is what engineers are reading and what videos they are watching. While some subjects remain on top, such as the level of interest in the latest manufacturing technologies, other areas come and go. The stories with the biggest traffic numbers are almost identical to last year's. Readers want to know wh... » read more

Memory Subsystems In Edge Inferencing Chips


Geoff Tate, CEO of Flex Logix, talks about key issues in the memory subsystem of an inferencing chip, how factors like heat can affect performance, and where these kinds of chips will be used. » read more

Enabling Practical Processing in and near Memory for Data-Intensive Computing


Source: ETH Zurich and Carnegie Mellon University. Talk at DAC 2019. Technical Paper link » read more
