New Architectures, Much Faster Chips


The chip industry is making progress in multiple physical dimensions and with multiple architectural approaches, setting the stage for huge performance increases based on more modular and heterogeneous designs, new advanced packaging options, and continued scaling of digital logic for at least a couple more process nodes. A number of these changes have been discussed at recent conferences. I... » read more

What Happened To Execute-in-Place?


Executing code directly from non-volatile memory, where it is stored, greatly simplifies compute architectures — especially for simple embedded devices like microcontrollers (MCUs). However, the divergence of memory and logic processes has made that nearly impossible today. The term “execute-in-place,” or “XIP,” originated with the embedded NOR memory in MCUs, which made XIP viable. ... » read more
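To make the idea concrete, here is a minimal, hypothetical XIP sketch for an MCU whose NOR flash is memory-mapped into the CPU's address space. The base address and entry-point offset are illustrative assumptions, not details from the article.

```c
/* Minimal execute-in-place (XIP) sketch for a hypothetical MCU with
 * memory-mapped NOR flash. FLASH_BASE and APP_OFFSET are assumed values
 * for illustration; real addresses depend on the part and its memory map. */
#include <stdint.h>

#define FLASH_BASE  0x08000000u   /* assumed base of memory-mapped NOR flash */
#define APP_OFFSET  0x00000200u   /* assumed offset of the application entry point */

typedef void (*entry_fn_t)(void);

void boot_application(void)
{
    /* With XIP, the CPU fetches instructions straight out of flash, so booting
     * is simply a jump into the flash address space -- no copy to RAM first. */
    entry_fn_t app_entry = (entry_fn_t)(uintptr_t)(FLASH_BASE + APP_OFFSET);
    app_entry();
}
```

Without XIP, the same boot path would first have to copy the image from flash into RAM and jump to the RAM copy, which is part of why the split between memory and logic processes complicates simple embedded designs.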

The Challenge Of Keeping AI Systems Current


Semiconductor Engineering sat down to discuss AI and its move to the edge with Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus; Kris Ardis, executive director at Maxim Integrated; Steve Roddy, vice president of products in Arm's Machine Learning Group; and Vinay Mehta, inference technical marketing manager at Flex Logix. What follows are excerpts of that ... » read more

Scaling AI/ML Training Performance With HBM2E Memory


In my April SemiEngineering Low Power-High Performance blog, I wrote: “Today, AI/ML neural network training models can exceed 10 billion parameters, soon it will be over 100 billion.” “Soon” didn’t take long to arrive. At the end of May, OpenAI unveiled a new 175-billion-parameter GPT-3 language model. This represented a more than 100X jump over the size of GPT-2’s 1.5 billion param... » read more

Memory Access In AI Systems


Memory access is a key consideration in AI system design. Ron Lowman, strategic marketing manager for IP at Synopsys, talks about how memory affects overall power consumption, why partitioning between on-chip and off-chip memory is so critical to performance and power, and how this changes from the cloud to the edge. » read more
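As a rough illustration of why that partitioning matters, the sketch below totals access energy for an assumed mix of on-chip SRAM and off-chip DRAM accesses. The per-access energy figures are ballpark assumptions chosen for illustration, not numbers from the video.

```c
/* Back-of-the-envelope comparison of on-chip vs. off-chip memory access energy.
 * The picojoule-per-access values are assumed, order-of-magnitude figures;
 * real numbers vary widely with process node, array size, and interface. */
#include <stdio.h>

#define SRAM_PJ_PER_ACCESS    5.0   /* assumed: small on-chip SRAM, one 32-bit word */
#define DRAM_PJ_PER_ACCESS  640.0   /* assumed: off-chip DRAM, one 32-bit word */

int main(void)
{
    double accesses = 1e9;  /* one billion word accesses, e.g. one inference batch */

    double on_chip_mj  = accesses * SRAM_PJ_PER_ACCESS * 1e-9;   /* pJ -> mJ */
    double off_chip_mj = accesses * DRAM_PJ_PER_ACCESS * 1e-9;   /* pJ -> mJ */

    printf("On-chip SRAM:  %.1f mJ\n", on_chip_mj);
    printf("Off-chip DRAM: %.1f mJ\n", off_chip_mj);
    printf("Off-chip / on-chip ratio: %.0fx\n", off_chip_mj / on_chip_mj);
    return 0;
}
```

Under these assumed figures the off-chip path costs roughly two orders of magnitude more energy per access, which is why keeping data on-chip wherever possible dominates the power picture from cloud to edge.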

Semiconductor Memory Evolution And Current Challenges


The very first all-electronic memory was the Williams-Kilburn tube, developed in 1947 at Manchester University. It used a cathode ray tube to store bits as dots on the screen’s surface. The evolution of computer memory since that time has included numerous magnetic memory systems, such as magnetic drum memory, magnetic core memory, magnetic tape drive, and magnetic bubble memory. Since the 19... » read more

Smaller Nodes, Much Bigger Problems


João Geada, chief technologist at Ansys, sat down with Semiconductor Engineering to talk about device scaling, advanced packaging, increasing complexity and the growing role of AI. What follows are excerpts of that conversation. SE: We've been pushing along Moore's Law for roughly a half-century. What sorts of problems are you seeing now that you didn't see a couple nodes ago? Geada: The... » read more

An Expanding Application Space For GDDR6 Memory


The origins of graphics double data rate (GDDR) memory can be traced to the rise of 3D gaming on PCs and consoles. The first GPUs used single data rate (SDR) and double data rate (DDR) DRAM, the same memory used for CPU main memory. The quest for higher frame rates at higher resolutions drove the need for a memory solution tailored to graphics workloads. The commercial success of gaming PCs and con... » read more

Moving Data And Computing Closer Together


The speed of processors has increased to the point where they are often no longer the performance bottleneck. It's now about data access. Moving data around costs both time and power, and developers are looking for ways to reduce the distances that data has to move. That means bringing processing and memory closer to each other. “Hard drives didn't have enough data flow to cr... » read more

Power Impact At The Physical Layer Causes Downstream Effects


Data movement is rapidly emerging as one of the top design challenges, and it is being complicated by new chip architectures and physical effects caused by increasing density at advanced nodes and in multi-chip systems. Until the introduction of the latest revs of high-bandwidth memory, as well as GDDR6, memory was considered the next big bottleneck. But other compute bottlenecks have been e... » read more
