Blog Review: March 18

Buried power rails at 3nm; AI performance; zero defects for autos.

Arm’s Divya Prasad investigates whether power rails that are buried below the BEOL metal stack and back-side power delivery can help alleviate some of the major physical design challenges facing 3nm nodes and beyond.

Rambus’ Steven Woo takes a look at a Roofline model for analyzing machine learning applications that illustrates how AI applications perform on Google’s tensor processing unit (TPU), NVIDIA’s K80 GPU and Intel’s Haswell CPU.
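The Roofline model boils down to a single bound: a kernel can run no faster than either the machine's peak compute rate or its memory bandwidth multiplied by the kernel's arithmetic intensity (FLOPs per byte moved). A minimal sketch, with illustrative hardware numbers rather than actual TPU/K80/Haswell specs:

```python
# Minimal sketch of the Roofline model: attainable performance is capped
# either by peak compute or by memory bandwidth times arithmetic intensity.
# The hardware numbers below are illustrative, not real device specs.

def roofline(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable GFLOP/s for a kernel with the given FLOPs/byte ratio."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# Illustrative machine: 1000 GFLOP/s peak, 100 GB/s memory bandwidth.
# Kernels below the "ridge point" (10 FLOPs/byte here) are memory-bound.
print(roofline(1000, 100, 2))    # memory-bound: 200 GFLOP/s
print(roofline(1000, 100, 50))   # compute-bound: 1000 GFLOP/s
```

Plotting attainable performance against arithmetic intensity produces the characteristic sloped-then-flat "roofline" the model is named for.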

In a video, Mentor’s Colin Walls explains key factors to consider when selecting a processor for embedded applications.

A Synopsys writer highlights some of the changes coming with DDR5, namely higher speeds with a smaller footprint and better power efficiency due to a reduction in voltage requirement.

Cadence’s Paul McLellan considers exponential growth rates and Moore’s Law.

SEMI’s Serena Brischetto chats with Antoine Amade of Entegris about why zero defects will become the new safety standard for automotive electronics, shifts in defect control strategy, and how the industry can collaborate to reach that goal.

Ansys’ Ellen Meeks explains the basics of chemical kinetics, which describes the speed at which chemical species transform into new substances by breaking and reforming their molecular bonds, and why it’s important to product design.
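The workhorse of chemical kinetics is the Arrhenius equation, which relates a reaction's rate constant to temperature and activation energy. A minimal sketch with illustrative parameter values (the pre-exponential factor and activation energy below are hypothetical, not from the blog):

```python
import math

# Minimal sketch of the Arrhenius equation from chemical kinetics:
# the rate constant k = A * exp(-Ea / (R*T)) grows rapidly with temperature.
# A (pre-exponential factor) and Ea (activation energy) are illustrative.

R = 8.314  # universal gas constant, J/(mol*K)

def rate_constant(A, Ea, T):
    """Arrhenius rate constant for pre-factor A, activation energy Ea (J/mol),
    and absolute temperature T (K)."""
    return A * math.exp(-Ea / (R * T))

# Illustrative reaction with Ea = 50 kJ/mol: a 10 K rise near room
# temperature nearly doubles the reaction rate.
k_300 = rate_constant(1e13, 50_000, 300.0)
k_310 = rate_constant(1e13, 50_000, 310.0)
print(k_310 / k_300)  # roughly a twofold speed-up
```

This exponential sensitivity to temperature is why kinetics matters for product design: small thermal changes can dramatically change how fast bonds break and reform.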

Intrinsix’s Kathiravan Krishnamurthi finds that while managing signal integrity is increasingly important, special techniques are needed to maintain the integrity of microwave and millimeter-wave signals on chip.

Nvidia’s Isha Salian highlights a startup using natural language processing to narrow down the number of clinical trials a cancer patient may be eligible for, rather than presenting an overwhelming list of hundreds.

For more good blogs, check out those featured in the latest Low Power-High Performance newsletter:

Editor in Chief Ed Sperling points to a dearth of data about AI power efficiency.

Fraunhofer’s Andy Heinig shows why system architects need to be able to compare different design variants at an early stage with a high level of abstraction.

Rambus’ Frank Ferro argues that the ability of HBM to achieve tremendous memory bandwidth in a small footprint outweighs the added cost and complexity for training hardware.

Cadence’s Tyler Lockman examines how package layout mirroring works differently for some components, and why it matters.

Synopsys’ Manuel Mota delves into why multi-chip module packaging for HPC, Ethernet, and AI SoCs demands low-latency, high-throughput PHYs.

Moortec’s Stephen Crosher explains why, as manufacturing variability increases at advanced nodes, monitoring what’s happening inside a chip can provide better control.

Mentor’s Ahmed Ramadan and Greg Curtis, Samsung’s Harrison Lee and Jongwook Kye, and Qualcomm’s Sorin Dobre researched how the Open Model Interface enables a simulator-agnostic way to perform the increasingly important task of modeling aging.

Adesto’s Tommy Mullane makes the case that while some designs need flexibility, many get more benefit from custom solutions.

Arm’s Steve Winburn predicts that the ability to render a game in the cloud and deliver the visual output to any screen will provide a big boost to mobile gaming.

Ansys’ Wim Slagter addresses misconceptions about the complexity, cost, and security of running simulations in the cloud and on HPC systems.


