Memory System Benchmarking, Simulation, And Application Profiling Via A Memory Stress Framework


A technical paper titled “A Mess of Memory System Benchmarking, Simulation and Application Profiling” was published by researchers at Barcelona Supercomputing Center, Universitat Politècnica de Catalunya, and Micron Technology (Italy). Abstract: "The Memory stress (Mess) framework provides a unified view of the memory system benchmarking, simulation and application profiling. The Mess benc... » read more
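
A minimal illustration of the kind of measurement such a benchmarking framework automates is a timed streaming pass over a buffer far larger than the caches, yielding a sustained-bandwidth figure. The sketch below is not the Mess code; the 256 MiB working set, single thread, and read-only access pattern are illustrative assumptions.

    /* Illustrative streaming-read bandwidth probe (not the Mess framework).
     * Streams a buffer much larger than cache and reports GB/s. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define N (256UL * 1024 * 1024 / sizeof(double))  /* 256 MiB working set */

    int main(void) {
        double *a = malloc(N * sizeof(double));
        if (!a) return 1;
        memset(a, 1, N * sizeof(double));             /* touch every page */

        struct timespec t0, t1;
        volatile double sum = 0.0;                    /* defeat dead-code elimination */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < N; i++)
            sum += a[i];                              /* stream through memory */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("sustained read bandwidth: %.2f GB/s (sum=%g)\n",
               N * sizeof(double) / secs / 1e9, sum);
        free(a);
        return 0;
    }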

HBM2E Raises The Bar For Memory Bandwidth


AI/ML training capabilities are growing at a rate of 10X per year, driving rapid improvements in every aspect of computing hardware and software. HBM2E memory is the ideal solution for the high bandwidth requirements of AI/ML training, but it entails additional design considerations given its 2.5D architecture. Designers can realize the full benefits of HBM2E memory with the silicon-proven memory s... » read more

HBM Takes On A Much Bigger Role


High-bandwidth memory is getting faster and showing up in more designs, but this stacked DRAM technology may play a much bigger role as a gateway for both chiplet-based SoCs and true 3D designs. HBM increasingly is being viewed as a way of pushing heterogeneous distributed processing to a completely different level. Once viewed as an expensive technology that could only be utilized in the hig... » read more

HBM2E Raises The Bar For AI/ML Training


The largest AI/ML neural network training models now exceed an enormous 100 billion parameters. With growth over the last decade running at a 10X annual pace, we’re headed toward trillion-parameter models in the not-too-distant future. Given the tremendous value that can be derived from AI/ML (it is mission critical to five of the six top market-cap companies in the world), there has been ... » read more

Pushing The Envelope With HBM2E Memory


In September, Rambus announced the achievement of 4 gigabits per second (Gbps) operation with our HBM2E memory interface. This milestone was demonstrated in silicon and required mastering substantial signal integrity and power integrity (SI/PI) challenges. The 4 Gbps mark represents a 25% increase over the previous maximum data rate of 3.2 Gbps for HBM2E. To date, the industry’s faste... » read more
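
The significance of the per-pin rate is easiest to see against HBM’s 1024-bit stack interface: scaling the pin rate scales the whole stack’s bandwidth. The short calculation below is back-of-the-envelope arithmetic from the quoted 4 Gbps figure and the standard interface width, not an additional vendor claim.

    /* Back-of-the-envelope HBM2E stack bandwidth: per-pin data rate
     * times the standard 1024-bit HBM interface, divided by 8 bits/byte. */
    #include <stdio.h>

    int main(void) {
        double gbps_per_pin = 4.0;     /* demonstrated HBM2E pin rate */
        int    bus_bits     = 1024;    /* interface width per HBM stack */
        printf("%.0f GB/s per stack at %.1f Gbps/pin\n",
               gbps_per_pin * bus_bits / 8.0, gbps_per_pin);  /* prints 512 GB/s */
        return 0;
    }

The same arithmetic at the prior 3.2 Gbps rate gives 409.6 GB/s per stack, which is where the 25% figure comes from.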

Scaling AI/ML Training Performance With HBM2E Memory


In my April SemiEngineering Low Power-High Performance blog, I wrote: “Today, AI/ML neural network training models can exceed 10 billion parameters, soon it will be over 100 billion.” “Soon” didn’t take long to arrive. At the end of May, OpenAI unveiled a new 175-billion parameter GPT-3 language model. This represented a more than 100X jump over the size of GPT-2’s 1.5 billion param... » read more

Memory Access In AI Systems


Memory access is a key consideration in AI system design. Ron Lowman, strategic marketing manager for IP at Synopsys, talks about how memory affects overall power consumption, why partitioning of on-chip and off-chip is so critical to performance and power, and how this changes from the cloud to the edge. » read more
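
One way to make the partitioning point concrete is to compare per-access energy for on-chip SRAM versus off-chip DRAM, which commonly differs by roughly two orders of magnitude. The figures below are illustrative ballpark assumptions, not Synopsys numbers.

    /* Illustrative memory-access power for a 1 GB/s stream of 64-bit words.
     * Per-access energies are ballpark assumptions, not vendor data. */
    #include <stdio.h>

    int main(void) {
        double sram_pj = 10.0;           /* on-chip SRAM, ~10 pJ per 64-bit access */
        double dram_pj = 1300.0;         /* off-chip DRAM, ~1.3 nJ per 64-bit access */
        double acc_per_s = 1e9 / 8.0;    /* 1 GB/s expressed as 64-bit accesses */

        printf("on-chip : %.2f mW\n", sram_pj * acc_per_s * 1e-9);
        printf("off-chip: %.2f mW\n", dram_pj * acc_per_s * 1e-9);
        return 0;
    }

On these assumed numbers, the same traffic costs roughly 130X more energy once it leaves the chip, which is the intuition behind careful on-chip/off-chip partitioning from cloud to edge.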

High-Speed SerDes At 7/5nm


Manmeet Walia, senior product marketing manager at Synopsys, talks with Semiconductor Engineering about how to optimize PHYs for integration on all four corners of an SoC, as well as the PPA implications of moving large amounts of data across and around a chip. » read more

Ensuring HBM Reliability


Igor Elkanovich, CTO of GUC, and Evelyn Landman, CTO of proteanTecs, talk with Semiconductor Engineering about difficulties that crop up in advanced packaging, what’s redundant and what is not when using high-bandwidth memory, and how continuous in-circuit monitoring can identify potential problems before they happen. » read more

2.5D Architecture Answers AI Training’s Call for “All of the Above”


The impact of AI/ML grows daily, reaching every industry and touching the lives of everyone. In marketing, healthcare, retail, transportation, manufacturing and more, AI/ML is a catalyst for great change. This rapid advance is powerfully illustrated by the growth in AI/ML training capabilities, which since 2012 have grown by a factor of 10X every year. Today, AI/ML neural network training mod... » read more
