Compute-In-Memory Accelerators Upend Network Design Tradeoffs


An explosion in the amount of data, coupled with the performance and power cost of moving that data, is rekindling interest in in-memory processing as an alternative to shuttling data back and forth between memory and the processor. Compute-in-memory (CIM) arrays can be based on conventional memory elements such as DRAM and NAND flash, as well as emerging non-volatile memories…
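
The core mechanism behind a CIM array is that the memory cells themselves perform a matrix-vector multiply: stored cell conductances act as weights, applied wordline voltages act as inputs, and each bitline sums the resulting currents. A minimal numerical sketch of that behavior follows; the array size and value ranges are arbitrary assumptions for illustration, not figures from the article.

```python
import numpy as np

# Illustrative model of an analog compute-in-memory (CIM) crossbar.
# Weights are stored as cell conductances G (siemens), inputs arrive as
# wordline voltages V (volts), and each bitline sums its cells' currents
# per Kirchhoff's current law, so I = G^T @ V is computed in place.

rng = np.random.default_rng(0)

rows, cols = 64, 16                        # wordlines x bitlines
G = rng.uniform(1e-6, 1e-4, (rows, cols))  # cell conductances (S), assumed range
V = rng.uniform(0.0, 0.2, rows)            # input voltages (V), assumed range

# Each bitline current is the dot product of its column's conductances
# with the input voltage vector -- the multiply-accumulate happens
# inside the array, with no weight movement at all.
I_bitlines = G.T @ V                       # amperes

print(I_bitlines[:4])
```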

An Increasingly Complicated Relationship With Memory


The relationship between a processor and its memory used to be quite simple, but modern SoCs contain multiple heterogeneous processors and accelerators, each needing a different means of accessing memory for maximum efficiency. Compromises are being made to preserve the unified programming model of the past, but pressure for fundamental change is mounting. It does…

Utilizing Computational Memory


For systems to become faster and consume less power, they must stop wasting energy moving data around and start adding processing near memory. This approach has been proven, and products designed to fill a number of roles are entering the marketplace. Processing near memory, also known as computational memory, has been hiding in the shadows for more than a decade. Ever since the…
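
Why moving compute next to memory saves power comes down to simple arithmetic: off-chip data movement costs far more energy per word than the operation itself. The sketch below works that comparison for a reduction; the per-operation energy numbers are illustrative placeholders in a plausible order of magnitude, not measurements from any product mentioned above.

```python
# Back-of-the-envelope energy comparison for summing N values:
# conventional (move every word to the CPU) vs. near-memory (add beside
# the array, move only the result). All energy figures are assumptions.

E_DRAM_ACCESS_PJ = 100.0   # assumed energy to move one 64-bit word off-chip (pJ)
E_ALU_OP_PJ = 1.0          # assumed energy for one 64-bit add (pJ)

N = 1_000_000  # elements to reduce

# Conventional: every operand crosses the memory interface.
conventional_pj = N * (E_DRAM_ACCESS_PJ + E_ALU_OP_PJ)

# Near-memory: the adds happen next to the array; one result moves.
near_memory_pj = N * E_ALU_OP_PJ + E_DRAM_ACCESS_PJ

print(f"conventional: {conventional_pj / 1e6:.1f} uJ")
print(f"near-memory:  {near_memory_pj / 1e6:.1f} uJ")
print(f"ratio:        {conventional_pj / near_memory_pj:.0f}x")
```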

Pushing Memory Harder


In an optimized system, no component waits for another while there is useful work to be done. Unfortunately, this is not the case with the processor/memory interface. Put simply, memory cannot keep up. Accessing memory is slow, and it can consume a significant fraction of the power budget. The general consensus is that this problem is not going away anytime soon, despite effort…
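
"Memory cannot keep up" can be made concrete with a roofline-style check: compare a machine's compute-to-bandwidth ratio against a kernel's arithmetic intensity. The hardware figures below are assumptions roughly in the range of a modern server part, chosen only to illustrate the reasoning.

```python
# Roofline-style check of whether a kernel is memory-bound or
# compute-bound. PEAK_GFLOPS and PEAK_BW_GBS are assumed values.

PEAK_GFLOPS = 2000.0   # assumed peak compute throughput (GFLOP/s)
PEAK_BW_GBS = 100.0    # assumed sustained memory bandwidth (GB/s)

machine_balance = PEAK_GFLOPS / PEAK_BW_GBS  # FLOPs needed per byte moved

def attainable_gflops(arithmetic_intensity: float) -> float:
    """Attainable throughput for a kernel performing
    `arithmetic_intensity` FLOPs per byte of memory traffic."""
    return min(PEAK_GFLOPS, PEAK_BW_GBS * arithmetic_intensity)

# A dot product streams two 8-byte operands per multiply-add (2 FLOPs),
# so its intensity is 2/16 = 0.125 FLOP/byte -- far below the balance.
dot_ai = 2 / 16
print(f"machine balance: {machine_balance:.0f} FLOP/byte")
print(f"dot product attains {attainable_gflops(dot_ai):.1f} of "
      f"{PEAK_GFLOPS:.0f} GFLOP/s -> memory-bound")
```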

In-Memory And Near-Memory Compute


Steven Woo, Rambus fellow and distinguished inventor, talks about the power required to store data and to move it out of memory to where processing is done. Reducing that power can involve changes to memory, but it also can mean rethinking compute architectures from the ground up to achieve up to 1 million times better performance in highly specialized systems.

Power/Performance Bits: Nov. 20


In-memory compute accelerator
Engineers at Princeton University built a programmable chip that features an in-memory computing accelerator. Targeted at deep learning inference, the chip aims to reduce the bottleneck between memory and compute in traditional architectures. The team's key to performing compute in memory was using capacitors rather than transistors. The capacitors were paired…
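
The teaser cuts off before explaining the capacitor scheme, but the general style of charge-domain multiply-accumulate it alludes to can be sketched generically: unit capacitors are conditionally charged (the multiply), then shorted together so charge sharing performs the accumulate. This is an illustrative model of that technique under stated assumptions, not a description of the Princeton chip's actual circuit.

```python
import numpy as np

# Generic switched-capacitor multiply-accumulate model. Each unit
# capacitor charges to VDD if its 1-bit input AND 1-bit weight are both
# 1 (the AND acts as the multiply), otherwise stays at 0 V. Shorting
# all equal-sized capacitors shares the charge, and the settled voltage
# is proportional to the dot product.

rng = np.random.default_rng(1)
VDD = 1.0
n = 64

x = rng.integers(0, 2, n)   # binary inputs (assumed 1-bit for simplicity)
w = rng.integers(0, 2, n)   # binary weights stored alongside the cells

cap_voltages = VDD * (x & w)          # per-capacitor voltage after multiply
shared_voltage = cap_voltages.mean()  # equal caps: charge sharing averages

dot = int(np.dot(x, w))
assert np.isclose(shared_voltage, VDD * dot / n)
print(f"dot product {dot} read out as {shared_voltage:.3f} V")
```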
