GDDR6 Memory For Life On The Edge


With the torrid growth in data traffic, it is unsurprising that the number of hyperscale data centers has grown apace. According to analysts at the Synergy Research Group, in July of this year there were 541 hyperscale data centers worldwide. That represents a doubling in the number since 2015. Even more striking, there are an additional 176 in the pipeline, so the breakneck growth in hyperscal...

PCIe 5.0: A Key Interface Solution For The Evolving Data Center


A great many developments are shaping the evolution of the data center. Enterprise workloads are increasingly shifting to the cloud, whether as hosted or colocation implementations. The nature of workload traffic is changing such that data centers are architected to manage greater east-west (within the data center) communication. New workloads, with AI/ML (artificial intelligence/machine ...

Scaling AI/ML Training Performance With HBM2E Memory


In my April SemiEngineering Low Power-High Performance blog, I wrote: “Today, AI/ML neural network training models can exceed 10 billion parameters, soon it will be over 100 billion.” “Soon” didn’t take long to arrive. At the end of May, OpenAI unveiled a new 175-billion-parameter GPT-3 language model. This represented a more than 100X jump over the size of GPT-2’s 1.5 billion param...
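As a quick sanity check on that claim, the ratio of the two published parameter counts can be computed directly; the sketch below is back-of-the-envelope arithmetic, not part of the original post:

```python
# Back-of-the-envelope check of the "more than 100X" claim (not from the original post).
gpt2_params = 1.5e9   # GPT-2: 1.5 billion parameters
gpt3_params = 175e9   # GPT-3: 175 billion parameters

ratio = gpt3_params / gpt2_params
print(f"GPT-3 is roughly {ratio:.0f}X the size of GPT-2")  # ~117X, i.e. more than 100X
```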

An Expanding Application Space For GDDR6 Memory


The origins of graphics double data rate (GDDR) memory can be traced to the rise of 3D gaming on PCs and consoles. The first GPUs used single data rate (SDR) and double data rate (DDR) DRAM, the same memory used for CPU main memory. The quest for higher frame rates at higher resolutions drove the need for a graphics-workload-specific memory solution. The commercial success of gaming PCs and con...

Data Center Scaling Requires New Interface Architectures


You can pick your favorite data points, but the bottom line is that global data traffic is growing at an exponential rate, driven by a confluence of megatrends. 5G networks are making possible billions of AI-powered IoT devices untethered from wired networks. Machine learning’s voracious appetite for enormous data sets keeps growing. Data-intensive video streaming for both entertainment and busin...

Enabling Chiplet And Co-Packaged Optics Architectures With 112G XSR SerDes


Conventional chip designs are struggling to deliver the scalability and the power, performance, and area (PPA) demanded of leading-edge designs. With the slowing of Moore’s Law, high-complexity ASICs increasingly bump up against reticle limits. The demise of Dennard scaling means power consumption is a growing challenge. In this context, disaggregated architectures such as chipl...

2.5D Architecture Answers AI Training’s Call for “All of the Above”


The impact of AI/ML grows daily, touching every industry and the lives of everyone. In marketing, healthcare, retail, transportation, manufacturing and more, AI/ML is a catalyst for great change. This rapid advance is powerfully illustrated by AI/ML training capabilities, which since 2012 have grown by a factor of 10X every year. Today, AI/ML neural network training mod...

HBM2E Memory: A Perfect Fit For AI/ML Training


Artificial Intelligence/Machine Learning (AI/ML) growth proceeds at a lightning pace. In the past eight years, AI training capabilities have jumped by a factor of 300,000 (10X annually), driving rapid improvements in every aspect of computing hardware and software. Memory bandwidth is one critical area of focus enabling the continued growth of AI. Introduced in 2013, High Bandwidth Memo...
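For context, the relationship between an annual growth rate and a cumulative factor is simple compound growth; the sketch below only illustrates that arithmetic and takes the post's quoted figures at face value:

```python
# Illustrative compound-growth arithmetic (assumes the "10X annually" rate quoted above).
import math

annual_rate = 10.0        # 10X growth per year
total_factor = 300_000    # cumulative jump in AI training capability cited in the post

# Years of 10X-per-year growth implied by a 300,000X cumulative increase.
years = math.log(total_factor, annual_rate)   # log10(300,000) ~= 5.5
print(f"{total_factor:,}X at {annual_rate:.0f}X/year corresponds to ~{years:.1f} years of growth")
```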

Enabling Integration Success Using High-Speed SerDes IP


By Niall Sorensen and Malini Narayanammoorthi
Internet traffic volumes continue to grow at a breakneck pace, and the demands on SerDes speeds increase accordingly. High-speed SerDes are an integral part of the networking chain, and these speed increases are required to support the bandwidth demands of artificial intelligence (AI), Internet of Things (IoT), virtual reality (VR) and many more ...

Accelerating AI And ML Applications With PCIe 5


The rapid adoption of sophisticated artificial intelligence/machine learning (AI/ML) applications and the shift to cloud-based workloads have significantly increased network traffic in recent years. Historically, the intensive use of virtualization ensured that server compute capacity adequately met the needs of heavy workloads. This was achieved by dividing or partitioning a single (physical) se...
