Research Bits: Dec. 13


Electronic-photonic interface for data centers: Engineers at Caltech and the University of Southampton integrated an electronic chip and a photonic chip for high-speed communication in data centers. "There are more than 2,700 data centers in the U.S. and more than 8,000 worldwide, with towers of servers stacked on top of each other to manage the load of thousands of terabytes of data going in and o... » read more

Boosting Data Center Memory Performance In The Zettabyte Era With HBM3


We are living in the Zettabyte era, a term coined by Cisco. Most of the world’s data has been created over the past few years, and the pace of creation is not set to slow down any time soon. Data has become not just big, but enormous! In fact, according to the IDC Global Datasphere 2022-2026 Forecast, the amount of data generated over the next 5 years will be at least 2x the amount of data generated over ... » read more
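
For a rough sense of why HBM3 is pitched at this problem, the back-of-the-envelope sketch below uses the commonly cited HBM3 headline figures of 6.4 Gb/s per pin over a 1024-bit stack interface; these numbers are assumptions drawn from the spec's launch configuration, not from the excerpt above.

```python
# Back-of-the-envelope HBM3 stack bandwidth, using commonly cited headline
# figures (assumptions, not numbers from the article excerpt above):
#   - 6.4 Gb/s signaling rate per data pin
#   - 1024-bit wide interface per stack
pin_rate_gbps = 6.4          # Gb/s per data pin
interface_width_bits = 1024  # bits per stack

stack_bw_gbit_s = pin_rate_gbps * interface_width_bits  # gigabits per second
stack_bw_gbyte_s = stack_bw_gbit_s / 8                  # gigabytes per second

print(f"Per-stack bandwidth: {stack_bw_gbyte_s:.1f} GB/s")  # ~819.2 GB/s
# A device with, say, 6 stacks would see roughly 6 * 819.2 ~ 4.9 TB/s of raw
# memory bandwidth, which is why HBM3 is attractive for zettabyte-era workloads.
```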

Profile-Guided HW/SW Mechanism To Efficiently Reduce Branch Mispredictions In Data Center Applications (Best Paper Award)


A new technical paper titled "Whisper: Profile-Guided Branch Misprediction Elimination for Data Center Applications" was published by researchers at the University of Michigan, ARM, the University of California, Santa Cruz, and Texas A&M University. This work received a Best Paper Award at the October 2022 Institute of Electrical and Electronics Engineers (IEEE)/Association for Computing Machin... » read more
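
The excerpt above does not describe Whisper's actual mechanism, so the toy simulation below is only a loose illustration of the problem the paper attacks: a generic textbook two-bit saturating-counter predictor handles a strongly biased branch well but a data-dependent branch poorly, and those hard-to-predict branches are the ones profile-guided techniques try to identify and eliminate. Everything here is a hypothetical sketch, not the paper's technique.

```python
import random

def simulate_two_bit_predictor(outcomes):
    """Toy 2-bit saturating-counter predictor; returns the misprediction rate.

    This is a generic textbook predictor, NOT the mechanism from the Whisper
    paper; it only shows why biased and data-dependent branches behave so
    differently for hardware prediction.
    """
    state = 2  # 0..1 predict not-taken, 2..3 predict taken
    mispredicts = 0
    for taken in outcomes:
        predicted_taken = state >= 2
        if predicted_taken != taken:
            mispredicts += 1
        # Saturating update toward the actual outcome.
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return mispredicts / len(outcomes)

random.seed(0)
biased = [random.random() < 0.95 for _ in range(100_000)]    # ~95% taken
data_dep = [random.random() < 0.50 for _ in range(100_000)]  # effectively random

print(f"biased branch miss rate:         {simulate_two_bit_predictor(biased):.1%}")
print(f"data-dependent branch miss rate: {simulate_two_bit_predictor(data_dep):.1%}")
```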

Ensuring Data Integrity And Performance Of High-Speed Data Transmission


In key electronics applications such as data centers, automotive, and 5G, data speeds and volumes are increasing at an exponential rate. Data centers require data transmission rates (Figure 1) as high as 112 Gbps, which can be achieved only using PAM4 signaling. The automotive industry is dealing with the challenges of transferring data between various electronic control units (ECUs) at a very high ... » read more
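
The 112 Gbps claim follows from simple arithmetic: PAM4 carries two bits per symbol, so a 112 Gbps lane runs at 56 GBaud rather than the 112 GBaud that NRZ would need. The sketch below shows the rate arithmetic and a conventional Gray-coded bit-pair-to-level mapping; the normalized level values are illustrative rather than taken from any particular standard.

```python
# PAM4 packs 2 bits into each transmitted symbol, so a 112 Gbps lane only
# needs to run at 56 GBaud (NRZ would need 112 GBaud for the same rate).
BITS_PER_SYMBOL = 2
LANE_RATE_GBPS = 112
symbol_rate_gbaud = LANE_RATE_GBPS / BITS_PER_SYMBOL
print(f"{LANE_RATE_GBPS} Gbps PAM4 lane -> {symbol_rate_gbaud:.0f} GBaud")

# Conventional Gray-coded mapping of bit pairs to the four PAM4 levels
# (the normalized level values are illustrative, not from a spec).
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a flat bit sequence (even length) onto PAM4 symbol levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))  # -> [-3, -1, 1, 3]
```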

The Data Center Journey, From Central Utility To Center Of The Universe


High-performance computing (HPC) has taken on many meanings over the years. The primary goal of HPC is to provide the computational power needed to run a data center – a utilitarian facility dedicated to storing, processing, and distributing data. The beginning of HPC: Historically, the data being processed was the output of business operations for a given organization. Transactions, custome... » read more

Digital Power Sets The Direction For Data Center Growth In The AI Era


Data center power requirements are often in the news given the insatiable need of artificial intelligence, 5G wireless networks, and the Internet of Things to process rapidly rising data volumes. With the cloud storage market expanding at an estimated 20-plus percent annually, the world’s largest hyperscale computing companies are taking advantage of this growth to offer their customers a ... » read more

Compute Express Link (CXL): All You Need To Know


An in-depth look at Compute Express Link (CXL) 2.0, an open-standard, cache-coherent interconnect between processors and accelerators, smart NICs, and memory devices. We explore how CXL is helping data centers more efficiently handle the yottabytes of data generated by artificial intelligence (AI) and machine learning (ML) applications. We discuss how CXL technology maintains memory c... » read more
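
As a purely illustrative aside (not the CXL protocol or any real API), the toy model below captures the resource-allocation idea behind CXL 2.0 memory pooling: one shared pool of capacity carved into regions that can be assigned to, and released by, different hosts. All names in it are hypothetical.

```python
class ToyMemoryPool:
    """Toy model of CXL 2.0-style memory pooling.

    NOT the CXL protocol or any real API; it only illustrates the idea of
    carving one shared pool into regions assigned to different hosts.
    """

    def __init__(self, total_gib):
        self.total_gib = total_gib
        self.allocations = {}  # host name -> GiB assigned

    @property
    def free_gib(self):
        return self.total_gib - sum(self.allocations.values())

    def assign(self, host, gib):
        if gib > self.free_gib:
            raise MemoryError(f"only {self.free_gib} GiB free in pool")
        self.allocations[host] = self.allocations.get(host, 0) + gib

    def release(self, host):
        return self.allocations.pop(host, 0)

# Hypothetical usage: one pooled device shared by two hosts.
pool = ToyMemoryPool(total_gib=512)
pool.assign("host-a", 256)   # AI training node grabs half the pool
pool.assign("host-b", 128)   # a second host takes a smaller slice
pool.release("host-a")       # capacity returns to the pool when released
print(pool.free_gib)         # -> 384
```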

Adaptive Clocking: Minding Your P-States And C-States


Larger processor arrays are here to stay for AI and cloud applications. For example, Ampere offers a 128-core behemoth for hyperscalers (mainly Oracle), while Esperanto integrates almost 10x more cores for AI workloads. However, power management becomes increasingly important with these arrays, and system designers need to balance dynamic power with system latency. As we march year over year, t... » read more
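
The power/latency balance the excerpt mentions can be made concrete with the standard dynamic-power relation P ≈ C·V²·f: a lower P-state drops both frequency and voltage, so power falls superlinearly while work takes longer. The governor below is a toy sketch; its P-state table and utilization thresholds are invented for illustration, not vendor data.

```python
# Toy illustration of the P-state trade-off, using the standard dynamic-power
# relation P ~ C * V^2 * f. The P-state table and utilization thresholds are
# invented for illustration, not taken from any vendor documentation.
P_STATES = {
    # name: (frequency_ghz, voltage_v)
    "P0": (3.0, 1.00),   # fastest, most power
    "P1": (2.2, 0.85),
    "P2": (1.5, 0.75),   # slowest, least power
}

def relative_dynamic_power(freq_ghz, voltage_v, capacitance=1.0):
    """Dynamic power scales as C * V^2 * f (arbitrary units here)."""
    return capacitance * voltage_v ** 2 * freq_ghz

def pick_p_state(utilization):
    """Crude governor: run fast only when the core is actually busy."""
    if utilization > 0.80:
        return "P0"
    if utilization > 0.40:
        return "P1"
    return "P2"

for util in (0.95, 0.60, 0.10):
    state = pick_p_state(util)
    f, v = P_STATES[state]
    print(f"util={util:.0%} -> {state}: {f} GHz, "
          f"relative power={relative_dynamic_power(f, v):.2f}")
# P2 at 1.5 GHz / 0.75 V draws roughly a quarter of P0's dynamic power,
# but the same work takes twice as long: that is the power/latency balance.
```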

Evolution Of Data Center Networking Technology — IP And Beyond


Ethernet is ubiquitous; it is the core technology that defines the Internet and connects the world in ways that people could not imagine even one generation ago. HPC clusters are working to solve the most challenging problems facing humanity, and cloud computing is the service hosting many of the application workloads tackling those problems. While alternative network infra... » read more

Data Center Evolution: The Leap To 64 GT/s Signaling With PCI Express 6.0


The PCI Express (PCIe) interface is the critical backbone that moves data at high bandwidth and low latency between compute nodes such as CPUs, GPUs, FPGAs, and workload-specific accelerators. With the torrid rise in the bandwidth demands of advanced workloads such as AI/ML training, PCIe 6.0 jumps signaling to 64 GT/s, bringing some of the biggest changes yet to the standard. Download this w... » read more
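
To put 64 GT/s in context, the quick calculation below approximates per-direction link bandwidth while ignoring encoding, FLIT, and FEC overhead: each lane moves roughly 8 GB/s, so an x16 PCIe 6.0 link approaches 128 GB/s per direction, double PCIe 5.0.

```python
# Rough PCIe link bandwidth per direction, ignoring encoding, FLIT, and FEC
# overhead, so these are approximations rather than exact delivered numbers.
def pcie_bw_gbytes_per_s(transfer_rate_gtps, lanes):
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    return transfer_rate_gtps * lanes / 8  # 1 bit per transfer per lane, 8 bits/byte

for gen, rate in (("PCIe 4.0", 16), ("PCIe 5.0", 32), ("PCIe 6.0", 64)):
    print(f"{gen}: x16 ~ {pcie_bw_gbytes_per_s(rate, 16):.0f} GB/s per direction")
# PCIe 6.0 x16 lands around 128 GB/s per direction, double PCIe 5.0, which is
# the headline of the 64 GT/s jump discussed in the white paper.
```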
