Author's Latest Posts


GDDR7 Memory Supercharges AI Inference


GDDR7 is the state-of-the-art graphics memory solution, with a performance roadmap of up to 48 gigatransfers per second (GT/s) and memory throughput of 192 GB/s per GDDR7 memory device. The next generation of GPUs and accelerators for AI inference will use GDDR7 memory to provide the memory bandwidth needed for these demanding workloads. AI comprises two applications: training and inference. With tr... » read more
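
As a quick sanity check on the headline numbers above, the 192 GB/s per-device figure follows from the 48 GT/s data rate multiplied by the width of the device interface. A minimal sketch, assuming a 32-bit-wide GDDR7 device interface (how the figure is typically derived; the width is not stated in the excerpt):

```python
# Back-of-the-envelope check of the per-device GDDR7 bandwidth figure.
# Assumes a 32-bit device interface; illustrative sketch, not a spec quote.

def device_bandwidth_gbps(data_rate_gtps: float, interface_width_bits: int) -> float:
    """Peak bandwidth in GB/s = (GT/s * bits per transfer) / 8 bits per byte."""
    return data_rate_gtps * interface_width_bits / 8

print(device_bandwidth_gbps(48, 32))  # -> 192.0 GB/s per GDDR7 device
```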

Memory Implications Of Gen AI In Gaming


The global gaming market across hardware, software and services is on track to exceed annual revenues of $500B in 2025 [1]. That’s bigger by an order of magnitude than the combination of movies and music. On the cutting edge of that enormous market is open world gaming, where the driving goal is to give players the freedom to do anything they can imagine in a coherent and immersive environment. ... » read more

DDR5 PMICs Enable Smarter, Power-Efficient Memory Modules


Power management has received increasing focus in microelectronic systems as the needs for greater power density, efficiency and precision have grown apace. One of the important ongoing trends in service of these needs has been the move toward localizing power delivery. To optimize system power, it’s best to deliver as high a voltage as possible to the endpoint where the power is consumed. Then a... » read more
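
The rationale for delivering as high a voltage as possible to the point of consumption is that, for a fixed load power, a higher delivery voltage means lower current, and resistive loss in the delivery path scales with the square of that current. A rough sketch, using made-up load and path-resistance values:

```python
# Illustrative sketch: higher delivery voltage -> lower current -> quadratically
# lower I^2*R loss in the power-delivery path. The 60 W load and 10 milliohm
# path resistance are invented numbers for illustration only.

def distribution_loss_w(load_power_w: float, delivery_voltage_v: float,
                        path_resistance_ohm: float) -> float:
    current_a = load_power_w / delivery_voltage_v   # I = P / V
    return current_a ** 2 * path_resistance_ohm     # loss = I^2 * R

for volts in (1.1, 5.0, 12.0):
    print(f"{volts:5.1f} V -> {distribution_loss_w(60, volts, 0.01):.3f} W lost")
# Performing the final step-down close to the load is the motivation for
# putting a PMIC on the DDR5 module itself.
```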

Scaling Server Memory Performance To Meet The Demands Of AI


AI, whether we’re talking about the number of parameters used in training or the size of large language models (LLMs), continues to grow at a breathtaking rate. For over a decade, we’ve witnessed 10X-per-year scaling. It’s a growth rate that puts pressure on every aspect of the computing stack: processing, memory, networking, you name it. The platform vendors are responding to the in... » read more

The Power Of HBM3 Memory For AI Training Hardware


AI training data sets are constantly growing, driving the need for hardware accelerators capable of handling terabyte-scale bandwidth. Among the array of memory technologies available, High Bandwidth Memory (HBM) has emerged as the memory of choice for AI training hardware, with the most recent generation, HBM3, delivering unrivaled memory bandwidth. Let’s take a closer look at this important... » read more
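
HBM’s bandwidth advantage comes primarily from its very wide per-stack interface. A back-of-the-envelope sketch, assuming the commonly cited HBM3 figures of 6.4 Gb/s per pin and a 1024-bit stack interface (numbers not stated in the excerpt):

```python
# Rough illustration of HBM3 per-stack bandwidth: a very wide (1024-bit)
# interface multiplied by the per-pin data rate. Treat the numbers as a sketch.

def hbm_stack_bandwidth_gbps(pin_rate_gbps: float, bus_width_bits: int) -> float:
    return pin_rate_gbps * bus_width_bits / 8   # GB/s per stack

per_stack = hbm_stack_bandwidth_gbps(6.4, 1024)
print(per_stack)              # -> 819.2 GB/s per HBM3 stack
print(per_stack * 6 / 1000)   # ~4.9 TB/s for a hypothetical 6-stack accelerator
```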

Memory Technologies Key To Advancing AI Applications


Memory is an integral component in every computer system, from the smartphones in our pockets to the giant data centers powering the world’s leading-edge AI applications. As AI continues to grow in reach and complexity, the demand for more memory from data centers to endpoints is reshaping the industry’s requirements and traditional approaches to memory architectures. According to OpenAI,... » read more

A Sea Change In Signaling With PCIe 6.0


PCI Express (PCIe) is one of those standards from the PC world, like Ethernet, that has proliferated far beyond its original application space. Thanks to its utility and economies of scale, PCIe has found a place in applications in IoT, automotive, test and measurement, medical, and more. As it has scaled, PCIe has pushed NRZ signaling to higher and higher levels, reaching 32 gigatransfers per s... » read more
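
The “sea change” in the title is the move from NRZ to PAM4 signaling in PCIe 6.0: PAM4 encodes two bits per symbol, so the raw lane rate doubles without doubling the symbol rate the channel has to carry. A minimal illustration (the GBaud framing is my own, not quoted from the excerpt):

```python
# NRZ carries 1 bit per symbol; PAM4 carries 2 bits per symbol, doubling the
# raw bit rate at the same symbol rate on the channel.

def lane_bit_rate_gbps(symbol_rate_gbaud: float, bits_per_symbol: int) -> float:
    return symbol_rate_gbaud * bits_per_symbol

print(lane_bit_rate_gbps(32, 1))  # NRZ:  32 GBaud x 1 bit  -> 32 Gb/s per lane (PCIe 5.0)
print(lane_bit_rate_gbps(32, 2))  # PAM4: 32 GBaud x 2 bits -> 64 Gb/s per lane (PCIe 6.0)
```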

Advancing Signaling Rates To 64 GT/s With PCI Express 6.0


From the introduction of PCI Express 3.0 (PCIe 3.0) in 2010 onward, each new generation of the standard has offered double the signaling rate of its predecessor. PCIe 3.0 saw a significant change to the protocol with the move from 8b/10b to the highly efficient 128b/130b encoding. The PCIe 6.0 specification, now officially released, doubles the signaling rate to 64 gigatransfers per second (GT/s) a... » read more
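
For context on the encoding and rate figures above, a bit of hedged arithmetic: 8b/10b carries 80% payload, 128b/130b roughly 98.5%, and a x16 link at 64 GT/s tops out around 128 GB/s per direction before protocol overhead. A sketch:

```python
# Encoding efficiency and raw link bandwidth for the figures mentioned above.
# Protocol/FLIT overhead is ignored, so treat the results as upper bounds.

def encoding_efficiency(payload_bits: int, encoded_bits: int) -> float:
    return payload_bits / encoded_bits

print(encoding_efficiency(8, 10))     # 0.80   (8b/10b, PCIe 1.x/2.x)
print(encoding_efficiency(128, 130))  # ~0.985 (128b/130b, PCIe 3.0 through 5.0;
                                      #         PCIe 6.0 moves to fixed-size FLITs)

def link_bandwidth_gbs(rate_gtps: float, lanes: int) -> float:
    return rate_gtps * lanes / 8      # GB/s per direction, before overhead

print(link_bandwidth_gbs(64, 16))     # -> 128.0 GB/s for a x16 PCIe 6.0 link
```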

CXL Signals A New Era Of Data Center Architecture


An exponential rise in data volume and traffic across the global internet infrastructure is motivating exploration of new architectures for the data center. Disaggregation and composability would move us beyond the classic architecture of the server as the unit of computing. By separating the functional components of compute, memory, storage and networking into pools, composed on-demand to matc... » read more

CXL: Sorting Out The Interconnect Soup


In the webinar Hidden Signals: Memory and Interconnect Decisions for AI, IoT and 5G, Shane Rau of IDC and Rambus Fellow Steven Woo discussed how interconnects were a critical enabling technology for future computing platforms. One of the major complications was the “interconnect soup” of numerous and divergent interface protocols. The Compute Express Link (CXL) standard offers to sort out m... » read more
