HBM Options Increase As AI Demand Soars


High-bandwidth memory (HBM) sales are spiking as the amount of data that needs to be processed quickly by state-of-the-art AI accelerators, graphics processing units, and high-performance computing applications continues to explode. HBM inventories are sold out, driven by massive efforts and investments in developing and improving large language models such as ChatGPT. HBM is the memory of ch... » read more

Extending The DDR5 Roadmap With MRDIMM


Given the voracious memory bandwidth and capacity demands of Gen AI and other advanced workloads, we’ve seen a rapid progression through the generations of DDR5 memory. Multiplexed Registered DIMMs (MRDIMMs) offer a new memory module architecture capable of extending the DDR5 roadmap and expanding the capabilities of server main memory. MRDIMM reuses the lion’s share of existing DDR5 infras... » read more

Managing The Huge Power Demands Of AI Everywhere


Before generative AI burst onto the scene, no one predicted how much energy would be needed to power AI systems. Those numbers are just starting to come into focus, and so is the urgency about how to sustain it all. AI power demand is expected to surge 550% by 2026, from 8 TWh in 2024 to 52 TWh, before rising another 1,150% to 652 TWh by 2030. Commensurately, U.S. power grid planners have do... » read more
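The projected growth figures above can be sanity-checked with simple percentage arithmetic. A minimal sketch, assuming "surge 550%" means a 550% increase over the base-year figure (not a 550% final level):

```python
# Check the projected AI power demand growth figures cited above.
# Assumption: percentages describe the increase over the starting value.

base_2024 = 8    # TWh, 2024 projection
proj_2026 = 52   # TWh, 2026 projection
proj_2030 = 652  # TWh, 2030 projection

growth_2026 = (proj_2026 - base_2024) / base_2024 * 100
growth_2030 = (proj_2030 - proj_2026) / proj_2026 * 100

print(f"2024 -> 2026: {growth_2026:.0f}% increase")  # 550% increase
print(f"2026 -> 2030: {growth_2030:.0f}% increase")  # ~1,154% increase
```

The second step works out to roughly 1,154%, consistent with the "another 1,150%" rounding in the text.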

Providing Line-Rate Network Security With MACsec


Network security protocols are the primary means of securing data in motion; that is, data communicated between closely connected physical devices, or between devices and even virtual machines connected over a complex, geographically distributed infrastructure. This blog will explore Media Access Control security (MACsec) and how it can be used to provide foundational-level network security for... » read more

Automotive OEMs Focus On SDVs, Zonal Architectures


Giant automotive OEMs are re-evaluating how quickly to move to advanced technologies and software-defined designs amid crushing financial pressure from low-cost EVs developed in other markets, such as China. U.S., European, and Japanese OEMs have been struggling for the past half-decade or so to settle on the best approach to developing EVs, undergoing multiple shifts in both hardwar... » read more

AI Drives IC Design Shifts At The Edge


The increasing adoption of AI in edge devices, coupled with a growing demand for new features, is forcing chipmakers to rethink when and where data gets processed, what kind of processors to use, and how to build enough flexibility into systems to span multiple markets. Unlike in the cloud, where the solution generally involves nearly unlimited resources, computing at the edge has sharp cons... » read more

Chip Industry Week In Review


Siemens announced plans to acquire Altair Engineering, a provider of industrial simulation and analysis, data science, and high-performance computing (HPC) software, for about $10 billion. Altair's software will become part of Siemens' Xcelerator portfolio and provide a boost to physics-based digital twins. Onto Innovation bought Lumina Instruments, a San Jose, California-based maker of lase... » read more

Chip Industry Week In Review


Arm joined forces with Korea's Samsung Foundry, ADTechnology, and Rebellions to create a CPU chiplet platform for AI training and inference. The new chiplet will be based on Samsung's 2nm gate-all-around technology. Intel and AMD, arch competitors for decades, formed an x86 ecosystem advisory group to collaborate on architectural interoperability and simplify software development. Samsung... » read more

GDDR7 Memory Supercharges AI Inference


GDDR7 is the state-of-the-art graphics memory solution, with a performance roadmap of up to 48 gigatransfers per second (GT/s) and memory throughput of 192 GB/s per GDDR7 memory device. The next generation of GPUs and accelerators for AI inference will use GDDR7 memory to provide the memory bandwidth needed for these demanding workloads. AI comprises two applications: training and inference. With tr... » read more
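The two headline numbers above are directly related. A minimal sketch of the arithmetic, assuming a 32-bit-wide interface per device (the width used by prior GDDR generations; the teaser itself does not state it):

```python
# Relate GDDR7 per-pin data rate to per-device throughput.
# Assumption: each device has a 32-bit (4-byte) interface, as in earlier GDDR generations.

data_rate_gt_s = 48        # gigatransfers per second, per pin
interface_width_bits = 32  # assumed device interface width

# Each transfer moves interface_width_bits of data; divide by 8 for bytes.
throughput_gb_s = data_rate_gt_s * interface_width_bits / 8
print(f"{throughput_gb_s:.0f} GB/s per device")  # 192 GB/s per device
```

So 48 GT/s across a 32-bit interface yields exactly the 192 GB/s per-device figure quoted.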

Mass Customization For AI Inference


Rising complexity in AI models and an explosion in the number and variety of networks are leaving chipmakers torn between fixed-function acceleration and more programmable accelerators, and spawning some novel approaches that include some of both. By all accounts, a general-purpose approach to AI processing is not making the grade. General-purpose processors are exactly that. They're not des... » read more
