Author's Latest Posts

Generative AI Training With HBM3 Memory

One of the biggest, most talked-about application drivers of hardware requirements today is the rise of Large Language Models (LLMs) and the generative AI they make possible. The most well-known example of generative AI right now is, of course, ChatGPT. ChatGPT’s large language model, GPT-3, utilizes 175 billion parameters. The fourth-generation GPT-4 will reportedly boost the number of... » read more

GDDR6 Delivers The Performance For AI/ML Inference

AI/ML is evolving at a lightning pace. Not a week goes by right now without some new and exciting developments in the field, and applications like ChatGPT have brought generative AI capabilities firmly to the forefront of public attention. AI/ML is really two applications: training and inference. Each relies on memory performance, and each has a unique set of requirements that drive the choi... » read more

Boosting Data Center Memory Performance In The Zettabyte Era With HBM3

We are living in the Zettabyte era, a term coined by Cisco. Most of the world’s data has been created over the past few years, and the pace is not set to slow any time soon. Data has become not just big, but enormous! In fact, according to the IDC Global Datasphere 2022-2026 Forecast, the amount of data generated over the next 5 years will be at least 2x the amount of data generated over ... » read more

GDDR6 Memory Enables High-Performance AI/ML Inference

A rapid rise in the size and sophistication of inference models has necessitated increasingly powerful hardware deployed at the network edge and in endpoint devices. To keep these inference processors and accelerators fed with data requires a state-of-the-art memory that delivers extremely high bandwidth. This blog will explore how GDDR6 supports the memory and performance requirements of artif... » read more

High-Performance SerDes Enable The 5G Wireless Edge

Investment at the core of the global internet is red hot. The number of hyperscale data centers jumped to 700 worldwide at the end of 2021, and with more than 300 in the pipeline, should rise to over 1,000 by 2024. In the span of five years, the total number of hyperscale data centers will have doubled. And as the raw number shoots up, more powerful compute and networking hardware is rapidly being deployed, ... » read more

It’s Official: HBM3 Dons The Crown Of Bandwidth King

With the publishing of the HBM3 update to the High Bandwidth Memory (HBM) standard, a new king of bandwidth is crowned. The torrid performance demands of advanced workloads, with AI/ML training leading the pack, drive the need for ever faster delivery of bits. Memory bandwidth is a critical enabler of computing performance, thus the need for the accelerated evolution of the standard with HBM3 r... » read more

On the Road To Higher Memory Bandwidth

In the decade since HBM was first announced, we’ve seen two-and-a-half generations of the standard come to market. HBM’s “wide and slow” architecture debuted first at a data rate of 1 gigabit per second (Gbps) running over a 1024-bit wide interface. The product of that data rate and that interface width provided a bandwidth of 128 gigabytes per second (GB/s). In 2016, HBM2 doubled the s... » read more
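The bandwidth figure in the excerpt above follows directly from the architecture: data rate per pin multiplied by interface width, divided by 8 bits per byte. A minimal sketch of that arithmetic, using the HBM1 figures quoted in the post and the HBM2 doubling it mentions (the function name is illustrative, not from the post):

```python
def hbm_bandwidth_gb_s(data_rate_gbps: float, width_bits: int = 1024) -> float:
    """Peak bandwidth of one HBM stack in gigabytes per second.

    data_rate_gbps: per-pin data rate in gigabits per second.
    width_bits: interface width; HBM uses a 1024-bit wide interface.
    """
    return data_rate_gbps * width_bits / 8  # 8 bits per byte

# HBM1: 1 Gbps x 1024 bits -> 128 GB/s, matching the figure in the post.
print(hbm_bandwidth_gb_s(1.0))
# HBM2 doubled the per-pin data rate over the same 1024-bit interface.
print(hbm_bandwidth_gb_s(2.0))
```

This "wide and slow" product is why HBM scales bandwidth so effectively: even modest per-pin data rates yield large aggregate bandwidth across 1024 signal pins.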

GDDR6 Memory On The Leading Edge

With the accelerating growth in data traffic, it is unsurprising that the number of hyperscale data centers keeps rocketing skyward. According to analysts at the Synergy Research Group, in nine months (Q2’20 to Q1’21), 84 new hyperscale data centers came online, bringing the total worldwide to 625. Hyperscaler capex set a record $150B over the last four quarters, eclipsing the $121B spent in ... » read more

Accelerating AI/ML Inferencing With GDDR6 DRAM

The origins of graphics double data rate (GDDR) memory can be traced to the rise of 3D gaming on PCs and consoles. The first graphics processing units (GPU) packed single data rate (SDR) and double data rate (DDR) DRAM – the same solution used for CPU main memory. As gaming evolved, the demand for higher frame rates at ever higher resolutions drove the need for a graphics-workload specific me... » read more

HBM2E Raises The Bar For AI/ML Training

The largest AI/ML neural network training models now exceed an enormous 100 billion parameters. With the rate of growth over the last decade on a 10X annual pace, we’re headed to trillion-parameter models in the not-too-distant future. Given the tremendous value that can be derived from AI/ML (it is mission critical to five of the six top market cap companies in the world), there has been ... » read more
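A quick back-of-envelope check on the excerpt's extrapolation: at the quoted 10X-per-year pace, 100-billion-parameter models reach the trillion-parameter mark in about a year. The figures below are the ones quoted in the post; the calculation itself is illustrative.

```python
import math

current_params = 100e9  # largest training models per the excerpt
annual_growth = 10      # 10X per year over the last decade
target_params = 1e12    # one trillion parameters

# Years needed: solve current * growth**t = target for t.
years_to_target = math.log10(target_params / current_params) / math.log10(annual_growth)
print(f"Years to trillion-parameter models at 10X/yr: {years_to_target:.1f}")
```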
