The Power Of HBM3 Memory For AI Training Hardware


AI training data sets are constantly growing, driving the need for hardware accelerators capable of handling terabyte-per-second-scale memory bandwidth. Among the array of memory technologies available, High Bandwidth Memory (HBM) has emerged as the memory of choice for AI training hardware, with the most recent generation, HBM3, delivering unrivaled memory bandwidth. Let’s take a closer look at this important... » read more
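
As a rough illustration of how HBM3 reaches that scale, here is a minimal bandwidth sketch assuming the JEDEC HBM3 figures of 6.4 Gb/s per data pin and a 1024-bit stack interface; these numbers are assumptions for illustration, not taken from the post:

```python
# Back-of-the-envelope HBM3 bandwidth estimate (assumed figures).
PIN_RATE_GBPS = 6.4      # assumed HBM3 per-pin data rate, Gb/s
INTERFACE_WIDTH = 1024   # assumed HBM3 interface width, bits per stack

def stack_bandwidth_gbs(pin_rate_gbps: float, width_bits: int) -> float:
    """Peak bandwidth of one HBM3 stack in GB/s."""
    return pin_rate_gbps * width_bits / 8

per_stack = stack_bandwidth_gbs(PIN_RATE_GBPS, INTERFACE_WIDTH)
print(f"Per stack: {per_stack:.1f} GB/s")              # ~819 GB/s
print(f"Six stacks: {6 * per_stack / 1000:.2f} TB/s")  # ~4.9 TB/s, terabyte scale
```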

Memory Technologies Key To Advancing AI Applications


Memory is an integral component in every computer system, from the smartphones in our pockets to the giant data centers powering the world’s leading-edge AI applications. As AI continues to grow in reach and complexity, the demand for more memory, from data centers to endpoints, is reshaping the industry’s requirements and traditional approaches to memory architectures. According to OpenAI,... » read more

New Developments Set To Accelerate MIPI CSI-2 Adoption In Automotive


As Advanced Driver-Assistance Systems (ADAS) become more sophisticated, cars are equipped with an increasing number of cameras and sensors. To support features like automated parking, adaptive cruise control, and enhanced night vision, sensors capture multiple wavelengths, and cameras deliver higher-quality data formats along with higher frame and refresh rates. These ADAS features are all powered by data sou... » read more
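
To see why per-link bandwidth becomes the constraint, here is a rough sketch of the raw data rate from a single camera; the resolution, bit depth, and frame rate below are hypothetical example values, not figures from the post:

```python
# Rough raw data-rate estimate for one automotive camera (example values).
MEGAPIXELS = 8          # assumed sensor resolution, megapixels
BITS_PER_PIXEL = 12     # assumed RAW12 pixel depth
FRAMES_PER_SECOND = 60  # assumed frame rate

def raw_rate_gbps(megapixels: float, bits_per_pixel: int, fps: int) -> float:
    """Uncompressed pixel data rate in Gb/s, excluding protocol overhead."""
    return megapixels * 1e6 * bits_per_pixel * fps / 1e9

rate = raw_rate_gbps(MEGAPIXELS, BITS_PER_PIXEL, FRAMES_PER_SECOND)
print(f"One camera: ~{rate:.1f} Gb/s of raw pixel data")  # ~5.8 Gb/s
```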

Using A Retimer To Extend Reach For PCIe 6.0 Designs


One of the biggest changes that came with PCIe 6.0 was the transition from non-return-to-zero (NRZ) signaling to PAM4 signaling. Pulse Amplitude Modulation (PAM) enables more bits to be transmitted in each signaling interval on a serial channel. In PCIe 6.0, this translates to 2 bits per unit interval across 4 amplitude levels (00, 01, 10, 11), versus PCIe 5.0 and earlier generations, which used NRZ with 1 bit p... » read more
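
As a concrete sketch of the NRZ-versus-PAM4 difference, the snippet below maps bit pairs onto four amplitude levels; the level values and the direct binary mapping are purely illustrative (real PCIe 6.0 signaling details, such as Gray coding, are not modeled here):

```python
# Illustrative symbol mapping: NRZ carries 1 bit per symbol, PAM4 carries 2.
NRZ_LEVELS = {0b0: -1.0, 0b1: +1.0}                  # two amplitude levels
PAM4_LEVELS = {0b00: -1.0, 0b01: -1/3,
               0b10: +1/3, 0b11: +1.0}               # four amplitude levels

def pam4_encode(bits):
    """Group a bit stream into pairs and map each pair to a PAM4 level."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_LEVELS[(bits[i] << 1) | bits[i + 1]]
            for i in range(0, len(bits), 2)]

# Eight bits become eight NRZ symbols but only four PAM4 symbols,
# i.e. twice the bits per unit interval at the same symbol rate.
print(pam4_encode([1, 0, 1, 1, 0, 0, 0, 1]))
```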

Generative AI Training With HBM3 Memory


One of the biggest, most talked-about application drivers of hardware requirements today is the rise of Large Language Models (LLMs) and the generative AI they make possible. The most well-known example of generative AI right now is, of course, ChatGPT. ChatGPT’s underlying GPT-3 large language model uses 175 billion parameters. The fourth-generation GPT-4 will reportedly boost the number of... » read more
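
To give a feel for how parameter counts turn into memory pressure, here is a back-of-the-envelope sketch assuming 2-byte (FP16/BF16) weights; the precision is an assumption for illustration:

```python
# Approximate memory needed just to hold the model weights (illustrative).
PARAMETERS = 175e9     # GPT-3 parameter count cited above
BYTES_PER_PARAM = 2    # assumed FP16/BF16 storage, 2 bytes per parameter

weights_gb = PARAMETERS * BYTES_PER_PARAM / 1e9
print(f"Weights alone: ~{weights_gb:.0f} GB")  # ~350 GB, before gradients,
# optimizer state, and activations, which multiply the training footprint
```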

LPDDR5X: High Bandwidth, Power Efficient Performance For Mobile & Beyond


Looking back over recent history in the memory landscape, we can clearly see a trend of new applications growing large enough to drive the creation of new memory technologies tailored to their specific needs. We saw this with the creation of GDDR for graphics and later HBM for AI/ML applications. Low-Power Double Data Rate (LPDDR) emerged as a specialized memory designed for mobi... » read more

GDDR6 Delivers The Performance For AI/ML Inference


AI/ML is evolving at a lightning pace. Not a week goes by without new and exciting developments in the field, and applications like ChatGPT have brought generative AI capabilities firmly to the forefront of public attention. AI/ML is really two applications: training and inference. Each relies on memory performance, and each has a unique set of requirements that drive the choi... » read more
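
For a sense of the per-device numbers behind that choice, here is a minimal GDDR6 bandwidth sketch assuming a 16 Gb/s per-pin data rate and a 32-bit device interface; both figures are typical values used here as assumptions:

```python
# Back-of-the-envelope GDDR6 bandwidth estimate (assumed typical figures).
PIN_RATE_GBPS = 16   # assumed GDDR6 per-pin data rate, Gb/s
DEVICE_WIDTH = 32    # assumed GDDR6 device interface width, bits

per_device_gbs = PIN_RATE_GBPS * DEVICE_WIDTH / 8
print(f"Per device: {per_device_gbs:.0f} GB/s")          # 64 GB/s
print(f"Eight devices: {8 * per_device_gbs:.0f} GB/s")   # 512 GB/s on a
# 256-bit memory system, a common inference-accelerator configuration
```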

DDR5 Memory Enables Next-Generation Computing


Computing main memory transitions may only happen once a decade, but when they do, it is a very exciting time in the industry. When JEDEC announced the publication of the JESD79-5 DDR5 SDRAM standard in 2020, it signaled the beginning of the transition to DDR5 server and client dual-inline memory modules (Server RDIMMs, Client UDIMMs and SODIMMs). We are now firmly on this path of enabling the ... » read more

Enabling New Server Architectures With The CXL Interconnect


The ever-growing demand for higher-performance compute is motivating the exploration of new compute offload architectures for the data center. Artificial intelligence and machine learning (AI/ML) is just one example of the increasingly complex and demanding workloads that are pushing data centers to move away from the classic server computing architecture. These more demanding workloads can be... » read more

MIPI DSI-2 & VESA Video Compression Enable Next-Generation Displays


By Joseph Rodriguez and Simon Bussières

It is hard to believe, but it has been 20 years since the MIPI Alliance was founded. The organization was originally formed to standardize the video interface technologies for cameras and displays in phones, with the MIPI acronym standing for Mobile Industry Processor Interface. As the mobile industry has evolved, MIPI Alliance has evolved wi... » read more
