Slower Metal Bogs Down SoC Performance


Metal interconnect delays are rising, offsetting some of the gains from faster transistors at each successive process node. Older architectures were conceived at a time when compute was the limiting factor. But with interconnect increasingly viewed as the bottleneck at advanced nodes, there’s an opportunity to rethink how we build systems-on-chips (SoCs). “Interconnect delay is a fundamental tr... » read more
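As a rough illustration of the physics behind that claim (a back-of-the-envelope sketch with made-up dimensions, not figures from the article): wire resistance per unit length scales inversely with the wire's cross-section, while capacitance per unit length stays roughly constant, so RC delay per unit length climbs as metal pitches shrink.

/* Back-of-the-envelope RC wire-delay scaling. All numbers are illustrative
 * placeholders, not data from the article. */
#include <stdio.h>

int main(void)
{
    double rho = 1.9e-8;        /* effective copper resistivity, ohm*m (illustrative) */
    double cap_per_m = 2.0e-10; /* wire capacitance per meter, F/m (roughly constant) */

    /* Hypothetical wire cross-sections at an "older" and a "newer" node. */
    double w_old = 40e-9, h_old = 80e-9;   /* width, height in meters */
    double w_new = 20e-9, h_new = 40e-9;

    double r_old = rho / (w_old * h_old);  /* resistance per meter of wire */
    double r_new = rho / (w_new * h_new);

    /* Delay of a wire of length L scales as ~0.5 * R' * C' * L^2, so the
     * R'*C' product is the figure of merit. Halving both wire dimensions
     * quadruples R', even though the transistors driving the wire got faster. */
    printf("R'C', older node: %.3e s/m^2\n", r_old * cap_per_m);
    printf("R'C', newer node: %.3e s/m^2\n", r_new * cap_per_m);
    return 0;
}

In practice the effective resistivity also rises at narrow widths because of barrier layers and surface scattering, which makes the trend worse than this simple geometric scaling suggests.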

Cerfe Labs: Spin-On Memory


Arm has spun off one of its more intriguing semiconductor research projects, a new non-volatile memory type called correlated electron materials RAM (CeRAM) that holds the potential to substantially reduce the cost of memory in everything from edge devices to high-performance computing. Headed by two former Arm Research insiders — Eric Hennenhoefer, who will serve as CEO, and Greg Yeric, wh... » read more

Have Processor Counts Stalled?


Survey data suggests that additional microprocessor cores are not being added to SoCs, but you have to dig into the numbers to find out what is really going on. The reasons are complicated. They include everything from software programming models to market shifts and new use cases. So while the survey numbers appear to be flat, market and technology dynamics could have a big impact in resh... » read more

Productivity Keeping Pace With Complexity


Designs have become larger and more complex, and yet design time has shortened while team sizes remain essentially flat. Does this show that productivity is keeping pace with complexity for everyone? The answer appears to be yes, at least for now, for a multitude of reasons. Design and IP reuse is increasing, with more and larger IP blocks and subsystems being deployed. In addition, the tools are improving, and mo... » read more

DRAM, 3D NAND Face New Challenges


It’s been a topsy-turvy period for the memory market, and it's not over. So far in 2020, demand has been slightly better than expected for the two main memory types — 3D NAND and DRAM. But now there is some uncertainty in the market amid a slowdown, inventory issues and an ongoing trade war. In addition, the 3D NAND market is moving toward a new technology generation, but some are enc... » read more

From Data Center To End Device: AI/ML Inferencing With GDDR6


Created to support 3D gaming on consoles and PCs, GDDR packs performance that makes it an ideal solution for AI/ML inferencing. As inferencing migrates from the heart of the data center to the network edge, and ultimately to a broad range of AI-powered IoT devices, GDDR memory’s combination of high bandwidth, low latency, power efficiency and suitability for high-volume applications will be i... » read more

New Architectures, Much Faster Chips


The chip industry is making progress in multiple physical dimensions and with multiple architectural approaches, setting the stage for huge performance increases based on more modular and heterogeneous designs, new advanced packaging options, and continued scaling of digital logic for at least a couple more process nodes. A number of these changes have been discussed in recent conferences. I... » read more

What Happened To Execute-in-Place?


Executing code directly from non-volatile memory, where it is stored, greatly simplifies compute architectures — especially for simple embedded devices like microcontrollers (MCUs). However, the divergence of memory and logic processes has made that nearly impossible today. The term “execute-in-place,” or “XIP,” originated with the embedded NOR memory in MCUs that made XIP viable. ... » read more
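A minimal sketch of what the distinction looks like in firmware, assuming a GCC-style toolchain; the ".ramfunc" section name and the functions are illustrative, not taken from the article:

/* Illustrative only: contrasts execute-in-place with copy-to-RAM execution
 * on a hypothetical MCU built with a GCC-style toolchain. */
#include <stdio.h>

/* With XIP, ordinary functions execute straight out of the on-chip NOR
 * flash where the linker placed them; no startup copy is required. */
static int toggle_from_flash(int state)
{
    return !state;   /* fetched directly from non-volatile memory */
}

/* Without XIP-capable memory, hot code is typically relocated to SRAM by
 * startup code. ".ramfunc" is a common, toolchain-specific section name. */
__attribute__((section(".ramfunc")))
static int toggle_from_ram(int state)
{
    return !state;   /* fetched from SRAM after the startup copy */
}

int main(void)
{
    printf("%d %d\n", toggle_from_flash(0), toggle_from_ram(0));
    return 0;
}

The functional result is identical; what changes is where the instruction fetches land, and how much SRAM and startup work the system has to budget when XIP is no longer an option.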

The Challenge Of Keeping AI Systems Current


Semiconductor Engineering sat down to discuss AI and its move to the edge with Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus; Kris Ardis, executive director at Maxim Integrated; Steve Roddy, vice president of Arm's Products Learning Group; and Vinay Mehta, inference technical marketing manager at Flex Logix. What follows are excerpts of that ... » read more

Scaling AI/ML Training Performance With HBM2E Memory


In my April SemiEngineering Low Power-High Performance blog, I wrote: “Today, AI/ML neural network training models can exceed 10 billion parameters, soon it will be over 100 billion.” “Soon” didn’t take long to arrive. At the end of May, OpenAI unveiled a new 175-billion parameter GPT-3 language model. This represented a more than 100X jump over the size of GPT-2’s 1.5 billion param... » read more
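The more-than-100X figure follows directly from the two parameter counts quoted above; a trivial check:

/* Sanity check on the parameter-count jump cited above. */
#include <stdio.h>

int main(void)
{
    double gpt2_params = 1.5e9;   /* GPT-2: 1.5 billion parameters */
    double gpt3_params = 175e9;   /* GPT-3: 175 billion parameters */

    /* 175e9 / 1.5e9 is roughly 117, i.e. more than a 100X jump. */
    printf("GPT-3 / GPT-2 parameter ratio: %.0fX\n", gpt3_params / gpt2_params);
    return 0;
}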
