UCIe For 1.6T Interconnects In Next-Gen I/O Chiplets For AI Data Centers


The rise of generative AI is pushing the limits of computing power and high-speed communication, as its workloads demand unprecedented resources. No single design can be optimized for the different classes of models – whether the focus is on compute, memory bandwidth, memory capacity, network bandwidth, latency sensitivity, or scale, all of which are affected by ... » read more

AI In Data Management Has Limits


AI algorithms are being integrated into a growing number of EDA tools to automate different aspects of data management, but they also are forcing discussions about just how much decision-making should be turned over to machines and when that should happen. The ability of AI to sort through enormous amounts of design data to find patterns, both good and bad, is well recognized at this point. ... » read more

Gold In The Machine: Scaling Infrastructure For The Age Of AI


During the gold rush, hopeful prospectors flooded the West to make their fortunes in gold. Today, technology pioneers are looking to stake their claim in the realm of artificial intelligence (AI). PricewaterhouseCoopers (PwC) estimates that 45% of total global economic gains by 2030 will be driven by AI as more sectors embrace the productivity and product enhancement benefits of AI. PwC’s ... » read more

AI Infrastructure At A Crossroads


By Ramin Farjadrad and Syrus Ziai

There is a big push to achieve greater scale, performance and sustainability to fuel the AI revolution. More speed, more memory bandwidth, less power — these are the holy grails. Naturally, the one-two punch of StarGate and DeepSeek last week has raised many questions in our ecosystem and with our various stakeholders. Can DeepSeek be real? And if so, w... » read more

DeepSeek: Improving Language Model Reasoning Capabilities Using Pure Reinforcement Learning


A new technical paper titled "DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning" was published by DeepSeek. Abstract: "We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates rema... » read more

The Road To Super Chips


Reticle size limitations are forcing chip design teams to look beyond a single SoC or processor in order to achieve orders of magnitude improvements in processing that are required for AI. But moving data between more processing elements adds a whole new set of challenges that need to be addressed at multiple levels. Steve Woo, distinguished inventor and fellow at Rambus, examines the benefits ... » read more

Chip Industry Week In Review


The new Trump administration was quick to put a different stamp on the tech world: President Trump rescinded a long list of Biden’s executive orders, including those aimed at AI safety and the mandate for 50% EVs by 2030. Roughly 1.3 million EVs were sold in the U.S. in 2024, up 7.3% from 2023. The new administration announced $500 billion ($100 billion initially) in private sector in... » read more

Automotive OEMs Face Multiple Technology Adoption Challenges


Experts At The Table: The automotive ecosystem is in the midst of significant change. OEMs and tiered providers are grappling with how to deal with legacy technology while incorporating ever-increasing levels of autonomy, electrification, and software-defined vehicle concepts, just to name a few. Semiconductor Engineering sat down to discuss these and other related issues with Wayne Lyons, seni... » read more

How Ultra Ethernet And UALink Enable High-Performance, Scalable AI Networks


By Ron Lowman and Jon Ames

AI workloads are significantly driving innovation in the interface IP market. The exponential increase in AI model parameters, doubling approximately every 4-6 months, stands in stark contrast to the slower pace of hardware advancements dictated by Moore's Law, which follows an 18-month cycle. This discrepancy demands hardware innovations to support AI workloads, c... » read more
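To make the scale of that mismatch concrete, here is a minimal back-of-the-envelope sketch. The doubling periods are the figures quoted in the excerpt above (4-6 months for model parameters, 18 months for Moore's Law hardware); the 18-month horizon is simply one Moore's Law cycle and is an assumption chosen for illustration, not a number from the article.

```python
# Back-of-the-envelope sketch (illustrative only): compare model-parameter growth
# (doubling every ~4-6 months, per the excerpt) with Moore's-Law hardware scaling
# (doubling every ~18 months) over a single 18-month cycle.

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Multiplicative growth over `months` given a doubling period."""
    return 2 ** (months / doubling_period_months)

HORIZON_MONTHS = 18  # one Moore's-Law cycle (assumed horizon)

hw_growth = growth_factor(HORIZON_MONTHS, 18)  # hardware roughly doubles once
for params_doubling in (4, 5, 6):  # months per parameter-count doubling
    params_growth = growth_factor(HORIZON_MONTHS, params_doubling)
    print(f"doubling every {params_doubling} mo: models grow {params_growth:.1f}x "
          f"vs hardware {hw_growth:.1f}x (gap ~{params_growth / hw_growth:.1f}x)")
```

With a five-month doubling period, model size grows roughly 12x over one Moore's Law cycle while hardware roughly doubles, which illustrates the discrepancy the excerpt says must be closed by hardware and interface innovation.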

Choosing The Right Memory Solution For AI Accelerators


To meet the increasing demands of AI workloads, memory solutions must deliver ever-increasing performance in bandwidth, capacity, and efficiency. From the training of massive large language models (LLMs) to efficient inference on endpoint devices, choosing the right memory technology is critical for chip designers. This blog explores three leading memory solutions—HBM, LPDDR, and GDDR—and t... » read more
