Dealing With Sub-Threshold Variation


Chipmakers are pushing into sub-threshold operation in an effort to prolong battery life and reduce energy costs, adding a whole new set of challenges for design teams. While process and environmental variation long have been concerns for advanced silicon process nodes, most designs operate in the standard “super-threshold” regime. Sub-threshold designs, in contrast, have unique variatio... » read more

Slower Metal Bogs Down SoC Performance


Metal interconnect delays are rising, offsetting some of the gains from faster transistors at each successive process node. Older architectures were born in a time when compute time was the limiter. But with interconnects increasingly viewed as the limiter on advanced nodes, there’s an opportunity to rethink how we build systems-on-chip (SoCs). “Interconnect delay is a fundamental tr... » read more

DRAM, 3D NAND Face New Challenges


It’s been a topsy-turvy period for the memory market, and it's not over. So far in 2020, demand has been slightly better than expected for the two main memory types — 3D NAND and DRAM. But now there is some uncertainty in the market amid a slowdown, inventory issues and an ongoing trade war. In addition, the 3D NAND market is moving toward a new technology generation, but some are enc... » read more

What Happened To Execute-in-Place?


Executing code directly from non-volatile memory, where it is stored, greatly simplifies compute architectures — especially for simple embedded devices like microcontrollers (MCUs). However, the divergence of memory and logic processes has made that nearly impossible today. The term “execute-in-place,” or “XIP,” originated with the embedded NOR memory in MCUs that made XIP viable. ... » read more
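
To make the idea concrete, here is a minimal sketch of the difference between executing in place from memory-mapped flash and copying a routine to RAM first on an MCU. It is an illustration only: the ".ramfunc" section name and the linker-provided symbols are assumptions that depend on the toolchain's linker script, not details from the article.

```c
/* Sketch of execute-in-place (XIP) vs. copy-to-RAM on an MCU (GCC-style).
 * Section names and linker symbols below are hypothetical. */

#include <stdint.h>
#include <string.h>

/* Default case: this function stays in flash and is fetched directly from
 * the memory-mapped NOR device -- execute-in-place. */
int xip_add(int a, int b)
{
    return a + b;
}

/* Alternative: place a hot routine in a RAM section so it runs from SRAM
 * after startup copies it, trading SRAM capacity for faster fetches. */
__attribute__((section(".ramfunc")))
int ram_add(int a, int b)
{
    return a + b;
}

/* Hypothetical linker-script symbols bounding the .ramfunc load/run regions. */
extern uint8_t __ramfunc_load_start[], __ramfunc_start[], __ramfunc_end[];

void copy_ramfuncs_at_startup(void)
{
    /* Without XIP, startup code must copy code images like this before use. */
    memcpy(__ramfunc_start, __ramfunc_load_start,
           (size_t)(__ramfunc_end - __ramfunc_start));
}
```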

Memory Access In AI Systems


Memory access is a key consideration in AI system design. Ron Lowman, strategic marketing manager for IP at Synopsys, talks about how memory affects overall power consumption, why partitioning of on-chip and off-chip is so critical to performance and power, and how this changes from the cloud to the edge. » read more

Moving Data And Computing Closer Together


The speed of processors has increased to the point where they often are no longer the performance bottleneck for many systems. It's now about data access. Moving data around costs both time and power, and developers are looking for ways to reduce the distances that data has to move. That means bringing data and computing closer together. “Hard drives didn't have enough data flow to cr... » read more

Making Sense Of PUFs


As security becomes a principal design consideration, physically unclonable functions (PUFs) are seeing renewed interest as new players emerge onto the market. PUFs can play a central role in hardware roots of trust (HRoTs), but the messaging in the market can make it confusing to understand the different types of PUF as well as their pros and cons. PUFs leverage some uncertain aspect of som... » read more
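
As a rough illustration of how a PUF response is typically used, the sketch below checks a freshly measured, noisy response against an enrolled reference using a Hamming-distance threshold. This is a generic, assumed flow rather than any vendor's scheme; real hardware roots of trust add helper data and fuzzy extractors or error correction, and the sizes and threshold here are made up.

```c
/* Illustrative check of a noisy PUF response against an enrolled reference:
 * count differing bits and accept if within an assumed noise margin. */

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define PUF_RESPONSE_BYTES 32   /* 256-bit response, assumed size   */
#define MAX_NOISY_BITS     12   /* illustrative noise tolerance     */

static unsigned hamming_distance(const uint8_t *a, const uint8_t *b, size_t n)
{
    unsigned dist = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t diff = a[i] ^ b[i];
        while (diff) {          /* count set bits in the XOR */
            dist += diff & 1u;
            diff >>= 1;
        }
    }
    return dist;
}

/* Returns true if the fresh response is close enough to the enrolled one
 * to be treated as coming from the same physical device. */
bool puf_matches(const uint8_t enrolled[PUF_RESPONSE_BYTES],
                 const uint8_t fresh[PUF_RESPONSE_BYTES])
{
    return hamming_distance(enrolled, fresh, PUF_RESPONSE_BYTES)
           <= MAX_NOISY_BITS;
}
```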

NVM Reliability Challenges And Tradeoffs


This second of two parts looks at different memories and possible solutions. Part one can be found here. While various NVM technologies, such as PCRAM, MRAM, ReRAM and NRAM share similar high-level traits, their physical renderings are quite different. That provides each with its own set of challenges and solutions. PCRAM has had a fraught history. Initially released by Samsung, Micron, a... » read more

Memory Issues For AI Edge Chips


Several companies are developing or ramping up AI chips for systems on the network edge, but vendors face a variety of challenges around process nodes and memory choices that can vary greatly from one application to the next. The network edge involves a class of products ranging from cars and drones to security cameras, smart speakers and even enterprise servers. All of these applications in... » read more

Scaling Up Compute-In-Memory Accelerators


Researchers are zeroing in on new architectures to boost performance by limiting the movement of data in a device, but this is proving to be much harder than it appears. The argument for memory-based computation is familiar by now. Many important computational workloads involve repetitive operations on large datasets. Moving data from memory to the processing unit and back — the so-called ... » read more
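
A back-of-envelope sketch shows why data movement dominates the matrix-vector multiplies these accelerators target: each weight is fetched once and used in a single multiply-accumulate, so bytes moved grow in step with the operation count. The layer size and 1-byte weight below are illustrative assumptions, not figures from the article.

```c
/* Rough arithmetic-intensity estimate for a matrix-vector multiply,
 * the kind of workload compute-in-memory aims at. */

#include <stdio.h>

int main(void)
{
    const double rows = 4096.0, cols = 4096.0;   /* assumed layer size */
    const double bytes_per_weight = 1.0;         /* e.g. INT8 weights  */

    double ops         = 2.0 * rows * cols;      /* one MAC = 2 ops    */
    double bytes_moved = rows * cols * bytes_per_weight;

    /* Roughly 2 ops per byte fetched: far too low to keep a wide ALU array
     * busy, so the multiply is memory-bound. Keeping weights stationary
     * inside the memory array removes most of this traffic. */
    printf("ops = %.0f, bytes moved = %.0f, ops/byte = %.1f\n",
           ops, bytes_moved, ops / bytes_moved);
    return 0;
}
```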
