Pushing Memory Harder


In an optimized system, no component is waiting for another while there is useful work to be done. Unfortunately, this is not the case with the processor/memory interface. Put simply, memory cannot keep up. Accessing memory is slow, and it can consume a significant fraction of the power budget. And the general consensus is that this problem is not going away anytime soon, despite effort... » read more
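
To make the imbalance concrete, here is a minimal roofline-style sketch in Python. The peak-compute and memory-bandwidth figures and the dot-product workload are illustrative assumptions, not measurements of any particular part:

# Back-of-the-envelope roofline check: is a kernel limited by compute or by
# memory bandwidth? All hardware numbers here are illustrative assumptions.

PEAK_FLOPS = 10e12   # assumed peak compute: 10 TFLOP/s
PEAK_BW = 100e9      # assumed peak DRAM bandwidth: 100 GB/s

def attainable_flops(flops, bytes_moved):
    """Roofline: performance is capped by the lower of the compute roof
    and bandwidth times arithmetic intensity (FLOPs per byte moved)."""
    intensity = flops / bytes_moved
    return min(PEAK_FLOPS, PEAK_BW * intensity)

# Example: a dot product of two 1M-element float32 vectors.
n = 1_000_000
flops = 2 * n             # one multiply plus one add per element
bytes_moved = 2 * n * 4   # read both vectors once; ignore the output
perf = attainable_flops(flops, bytes_moved)
print(f"arithmetic intensity: {flops / bytes_moved:.2f} FLOP/byte")
print(f"attainable: {perf / 1e9:.0f} GFLOP/s (compute roof: {PEAK_FLOPS / 1e12:.0f} TFLOP/s)")

At 0.25 FLOP/byte the kernel is capped at roughly 25 GFLOP/s of an assumed 10 TFLOP/s roof: the memory system, not the ALUs, sets the pace, which is the memory wall in miniature.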

Memory Subsystems In Edge Inferencing Chips


Geoff Tate, CEO of Flex Logix, talks about key issues in the memory subsystem of an inferencing chip, how factors such as heat can affect performance, and where these kinds of chips will be used. » read more

Why DRAM Won’t Go Away


Semiconductor Engineering sat down to talk about DRAM's future with Frank Ferro, senior director of product management at Rambus; Marc Greenberg, group director for product marketing at Cadence; Graham Allan, senior product marketing manager for DDR PHYs at Synopsys; and Tien Shiah, senior manager for memory marketing at Samsung Electronics. What follows are excerpts of that conversation. Part ... » read more

Machine Learning Inferencing At The Edge


Ian Bratt, fellow in Arm's machine learning group, talks about why machine learning inferencing at the edge is so difficult, what the tradeoffs are, how to optimize data movement, how to accelerate that movement, and how it differs from developing other types of processors. » read more

The Next New Memories


Several next-generation memory types are ramping up after years of R&D, but there are still more new memories in the research pipeline. Today, memories such as MRAM, phase-change memory (PCM) and ReRAM are shipping to one degree or another. Some of the next new memories are extensions of these technologies. Others are based on entirely new technologies or involve ar... » read more

Using Memory Differently To Boost Speed


Boosting memory performance to handle a rising flood of data is driving chipmakers to explore new memory types and different ways of using existing memory, but it is also creating complex new challenges. For most of the semiconductor design industry, memory has been a non-issue for the past couple of decades. The main concerns were price and size, but memory makers have been more than a... » read more

Will In-Memory Processing Work?


The cost associated with moving data in and out of memory is becoming prohibitive, both in terms of performance and power, and it is made worse by poor data locality in algorithms, which limits the effectiveness of caches. The result is the first serious assault on the von Neumann architecture, which made the computer simple, scalable and modular. It separated the notion of a computatio... » read more
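
As a rough illustration of why data movement dominates, here is a small Python sketch that budgets the energy of a dot product when both operands stream from DRAM. The per-operation energies are order-of-magnitude assumptions of the kind often quoted in the literature, not vendor data, and the function name is hypothetical:

# Rough energy budget for a dot product under the von Neumann split of
# compute and storage. Per-operation energies are illustrative assumptions.

E_MAC_PJ = 4.0      # assumed energy per 32-bit multiply-accumulate, pJ
E_DRAM_PJ = 640.0   # assumed energy per 32-bit DRAM access, pJ

def dot_product_energy(n):
    """Energy (pJ) to compute sum(a[i] * b[i]) with operands in DRAM."""
    compute = n * E_MAC_PJ
    movement = 2 * n * E_DRAM_PJ   # fetch one word of a and one of b per MAC
    return compute, movement

compute, movement = dot_product_energy(1_000_000)
print(f"compute:  {compute / 1e6:.2f} uJ")
print(f"movement: {movement / 1e6:.2f} uJ  ({movement / compute:.0f}x compute)")

Under these assumptions, moving the operands costs some 320x more energy than the arithmetic itself, and that imbalance is what motivates computing in or near the memory.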

Inferencing Efficiency


Geoff Tate, CEO of Flex Logix, talks with Semiconductor Engineering about how to measure efficiency in inferencing chips, how to achieve the most throughput for the lowest cost, and what the benchmarks really show. » read more

Memory Options And Tradeoffs


Steven Woo, Rambus fellow and distinguished inventor, talks with Semiconductor Engineering about different memory options, why some are better than others for certain tasks, and what the tradeoffs are between the different memory types and architectures. » read more

Do Large Batches Always Improve Neural Network Throughput?


Common benchmarks like ResNet-50 generally show much higher throughput at large batch sizes than at batch size = 1. For example, the Nvidia Tesla T4 has 4x the throughput at batch=32 compared with batch=1. Of course, larger batch sizes come with a tradeoff: latency increases, which may be undesirable in real-time applications. Why do larger batches increase throughput... » read more
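
A toy model shows where that throughput gain comes from. Assume each batch pays a fixed overhead (kernel launches, reloading weights from DRAM) plus a per-sample compute cost; both constants below are illustrative assumptions, not measurements of the T4:

# Toy model of why batching raises throughput at the cost of latency.

FIXED_MS = 6.0        # assumed per-batch overhead (launch, weight reload), ms
PER_SAMPLE_MS = 0.5   # assumed compute time per image, ms

def batch_stats(batch):
    """Return (throughput in images/s, latency in ms) for one batch size."""
    latency = FIXED_MS + PER_SAMPLE_MS * batch   # time until the batch completes
    throughput = batch / latency * 1000.0
    return throughput, latency

for b in (1, 4, 32):
    tput, lat = batch_stats(b)
    print(f"batch={b:2d}: {tput:7.1f} images/s, latency {lat:5.1f} ms")

With these numbers, throughput climbs from about 154 images/s at batch=1 to about 1,455 images/s at batch=32 (roughly 9x) because the fixed overhead amortizes across the batch, while per-request latency more than triples, from 6.5 ms to 22 ms.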
