GDDR6 Memory On The Leading Edge


With the accelerating growth in data traffic, it is unsurprising that the number of hyperscale data centers keeps rocketing skyward. According to analysts at the Synergy Research Group, in nine months (Q2’20 to Q1’21), 84 new hyperscale data centers came online, bringing the total worldwide to 625. Hyperscaler capex set a record $150B over the last four quarters, eclipsing the $121B spent in ... » read more

Accelerating AI/ML Inferencing With GDDR6 DRAM


The origins of graphics double data rate (GDDR) memory can be traced to the rise of 3D gaming on PCs and consoles. The first graphics processing units (GPUs) packed single data rate (SDR) and double data rate (DDR) DRAM – the same solution used for CPU main memory. As gaming evolved, the demand for higher frame rates at ever higher resolutions drove the need for a graphics-workload specific me... » read more

GDDR6 Memory For Life On The Edge


With the torrid growth in data traffic, it is unsurprising that the number of hyperscale data centers has grown apace. According to analysts at the Synergy Research Group, in July of this year there were 541 hyperscale data centers worldwide. That represents a doubling in the number since 2015. Even more striking, there are an additional 176 in the pipeline, so the breakneck growth in hyperscal... » read more

From Data Center To End Device: AI/ML Inferencing With GDDR6


Created to support 3D gaming on consoles and PCs, GDDR packs performance that makes it an ideal solution for AI/ML inferencing. As inferencing migrates from the heart of the data center to the network edge, and ultimately to a broad range of AI-powered IoT devices, GDDR memory’s combination of high bandwidth, low latency, power efficiency and suitability for high-volume applications will be i... » read more

Scaling AI/ML Training Performance With HBM2E Memory


In my April SemiEngineering Low Power-High Performance blog, I wrote: “Today, AI/ML neural network training models can exceed 10 billion parameters, soon it will be over 100 billion.” “Soon” didn’t take long to arrive. At the end of May, OpenAI unveiled a new 175-billion parameter GPT-3 language model. This represented a more than 100X jump over the size of GPT-2’s 1.5 billion param... » read more

An Expanding Application Space For GDDR6 Memory


The origins of graphics double data rate (GDDR) memory can be traced to the rise of 3D gaming on PCs and consoles. The first GPUs used single data rate (SDR) and double data rate (DDR) DRAM, the same memory used for CPU main memory. The quest for higher frame rates at higher resolutions drove the need for a graphics-workload specific memory solution. The commercial success of gaming PCs and con... » read more

Power Impact At The Physical Layer Causes Downstream Effects


Data movement is rapidly emerging as one of the top design challenges, and it is being complicated by new chip architectures and physical effects caused by increasing density at advanced nodes and in multi-chip systems. Until the introduction of the latest revs of high-bandwidth memory, as well as GDDR6, memory was considered the next big bottleneck. But other compute bottlenecks have been e... » read more

AI Requires Tailored DRAM Solutions


For over 30 years, DRAM has continuously adapted to the needs of each new wave of hardware spanning PCs, game consoles, mobile phones and cloud servers. Each generation of hardware required DRAM to hit new benchmarks in bandwidth, latency, power or capacity. Looking ahead, the 2020s will be the decade of artificial intelligence/machine learning (AI/ML) touching every industry and applicatio... » read more

HBM2E and GDDR6: Memory Solutions for AI


Artificial Intelligence/Machine Learning (AI/ML) growth proceeds at a lightning pace. In the past eight years, AI training capabilities have jumped by a factor of 300,000, driving rapid improvements in every aspect of computing hardware and software. Meanwhile, AI inference is being deployed across the network edge and in a broad spectrum of IoT devices, including automotive/ADAS. Training and... » read more

Changes In AI SoCs


Kurt Shuler, vice president of marketing at ArterisIP, talks about the tradeoffs in AI SoCs, which range from power and performance to flexibility, depending on whether processing elements are highly specific or more general, and the need for more modeling of both hardware and software together. » read more
