Author's Latest Posts


GDDR7: The Ideal Memory Solution In AI Inference


The generative AI market is experiencing rapid growth, driven by the increasing parameter size of Large Language Models (LLMs). This growth is pushing the boundaries of performance requirements for training hardware within data centers. For an in-depth look at this, consider the insights provided in "HBM3E: All About Bandwidth". Once trained, these models are deployed across a diverse range of... » read more

HBM3E: All About Bandwidth


The rapid rise in size and sophistication of AI/ML training models requires increasingly powerful hardware deployed in the data center and at the network edge. This growth in complexity and data stresses the existing infrastructure, driving the need for new and innovative processor architectures and associated memory subsystems. For example, even GPT-3 at 175 billion parameters is stressing the... » read more

Generative AI Training With HBM3 Memory


One of the biggest, most talked-about application drivers of hardware requirements today is the rise of Large Language Models (LLMs) and the generative AI they make possible. The most well-known example of generative AI right now is, of course, ChatGPT. ChatGPT’s underlying large language model, GPT-3, utilizes 175 billion parameters. The fourth-generation GPT-4 will reportedly boost the number of... » read more

GDDR6 Delivers The Performance For AI/ML Inference


AI/ML is evolving at a lightning pace. Not a week goes by without new and exciting developments in the field, and applications like ChatGPT have brought generative AI capabilities firmly to the forefront of public attention. AI/ML is really two applications: training and inference. Each relies on memory performance, and each has a unique set of requirements that drive the choi... » read more

Boosting Data Center Memory Performance In The Zettabyte Era With HBM3


We are living in the Zettabyte era, a term first coined by Cisco. Most of the world’s data has been created over the past few years, and the pace of creation is not set to slow down any time soon. Data has become not just big, but enormous! In fact, according to the IDC Global Datasphere 2022-2026 Forecast, the amount of data generated over the next 5 years will be at least 2x the amount of data generated over ... » read more

GDDR6 Memory Enables High-Performance AI/ML Inference


A rapid rise in the size and sophistication of inference models has necessitated increasingly powerful hardware deployed at the network edge and in endpoint devices. Keeping these inference processors and accelerators fed with data requires state-of-the-art memory that delivers extremely high bandwidth. This blog will explore how GDDR6 supports the memory and performance requirements of artif... » read more

High-Performance SerDes Enable The 5G Wireless Edge


Investment at the core of the global internet is red hot. The number of hyperscale data centers jumped to 700 worldwide at the end of 2021, and with more than 300 in the pipeline, should rise to over 1,000 by 2024. In the span of five years, the total number of hyperscale data centers will have doubled. And as the raw number shoots up, more powerful compute and networking hardware is rapidly being deployed, ... » read more

It’s Official: HBM3 Dons The Crown Of Bandwidth King


With the publishing of the HBM3 update to the High Bandwidth Memory (HBM) standard, a new king of bandwidth is crowned. The torrid performance demands of advanced workloads, with AI/ML training leading the pack, drive the need for ever faster delivery of bits. Memory bandwidth is a critical enabler of computing performance, thus the need for the accelerated evolution of the standard with HBM3 r... » read more

On the Road To Higher Memory Bandwidth


In the decade since HBM was first announced, we’ve seen two-and-a-half generations of the standard come to market. HBM’s “wide and slow” architecture debuted at a data rate of 1 gigabit per second (Gbps) per pin running over a 1024-bit wide interface. The product of that data rate and that interface width provided a bandwidth of 128 gigabytes per second (GB/s). In 2016, HBM2 doubled the s... » read more
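As a quick back-of-the-envelope check, the bandwidth figure above follows directly from multiplying the per-pin data rate by the interface width and converting bits to bytes. The minimal Python sketch below illustrates the arithmetic; the function name is illustrative, and the 2 Gbps HBM2 rate is an assumption based on the "doubling" mentioned in the excerpt rather than a figure quoted there.

```python
# Sketch of the peak-bandwidth arithmetic: per-pin data rate x interface width,
# converted from gigabits to gigabytes (8 bits per byte).

def peak_bandwidth_gb_per_s(data_rate_gbps_per_pin, interface_width_bits):
    """Peak bandwidth in gigabytes per second (GB/s)."""
    return data_rate_gbps_per_pin * interface_width_bits / 8

# First-generation HBM: 1 Gbps per pin over a 1024-bit interface
print(peak_bandwidth_gb_per_s(1.0, 1024))  # 128.0 GB/s

# HBM2 (2016): assumed 2 Gbps per pin over the same 1024-bit interface
print(peak_bandwidth_gb_per_s(2.0, 1024))  # 256.0 GB/s
```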

GDDR6 Memory On The Leading Edge


With the accelerating growth in data traffic, it is unsurprising that the number of hyperscale data centers keeps rocketing skyward. According to analysts at the Synergy Research Group, in nine months (Q2’20 to Q1’21), 84 new hyperscale data centers came online, bringing the total worldwide to 625. Hyperscaler capex set a record $150B over the last four quarters, eclipsing the $121B spent in ... » read more
