Evolving Edge Computing And Harnessing Heterogeneity


In the Evolving Edge Computing white paper, we highlighted three challenges to enabling the Intelligent Edge: enabling hardware heterogeneity, removing development friction, and ensuring security at scale. This blog post examines the first in that list, heterogeneity. It will cover the ways in which heterogeneity appears, its effect on systems, and some ideas for resolving its inher... » read more

Fantastical Creatures


In my day job I work in the High-Level Synthesis group at Siemens EDA, specifically focusing on algorithm acceleration. But on the weekends, sometimes, I take on the role of amateur cryptozoologist. As many of you know, the main Siemens EDA campus sits in the shadow of Mt. Hood and the Cascade Mountain range. This is prime habitat for Sasquatch, also known as “Bigfoot”. This weekend, ar... » read more

Challenges And Outlook Of ATE Testing For 2nm SoCs


The transition to the 2nm technology node introduces unprecedented challenges in Automated Test Equipment (ATE) bring-up and manufacturability. As semiconductor devices scale down, the complexity of testing and ensuring manufacturability increases exponentially. 3nm silicon is now a mastered art, with yields running high even for complex packaged silicon, while the transition from 3nm to... » read more

Managing Complexity And A Left Shift: Reconfigurable Mixed-Signal Circuits For Complex Integrated Systems


By Björn Zeugmann and Benjamin Prautsch

The chip market is growing worldwide; it’s projected to nearly double by 2030 to over one trillion dollars. Most of this market is made up of digital functions in the form of logic, microprocessors, and memory. Although analog ICs account for only around 15% of the total, they are key components for overall systems and are therefore almost always pr... » read more

Ensuring Multi-Die Package Quality And Reliability


Multi-die designs are gaining broader adoption in a wide variety of end applications, including high-performance computing, artificial intelligence (AI), automotive, and mobile. Despite clear advantages, there are new challenges that need to be addressed for successful multi-die realization. This article gives a high-level overview of the multi-die test challenges that go beyond the design p... » read more

Memory Implications Of Gen AI In Gaming


The global gaming market across hardware, software and services is on track to exceed annual revenues of $500B in 2025.[1] That’s bigger by an order of magnitude than the combination of movies and music. On the cutting edge of that enormous market is open world gaming, where the driving goal is to give players the freedom to do anything they can imagine in a coherent and immersive environment. ... » read more

HBM3E: All About Bandwidth


The rapid rise in size and sophistication of AI/ML training models requires increasingly powerful hardware deployed in the data center and at the network edge. This growth in complexity and data stresses the existing infrastructure, driving the need for new and innovative processor architectures and associated memory subsystems. For example, even GPT-3 at 175 billion parameters is stressing the... » read more
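As a rough illustration of that stress, the back-of-the-envelope sketch below estimates the weight footprint and read bandwidth implied by a 175-billion-parameter model. The FP16 precision and token rate are assumptions for illustration, not figures from the article.

# Back-of-the-envelope sketch (assumed precision and token rate, not from the
# article) of why a 175-billion-parameter model strains memory subsystems.

PARAMS = 175e9          # GPT-3 parameter count cited in the post
BYTES_PER_PARAM = 2     # assume FP16/BF16 weights

weight_bytes = PARAMS * BYTES_PER_PARAM
print(f"Weight footprint: {weight_bytes / 1e9:.0f} GB")   # ~350 GB

# If inference must stream every weight once per generated token,
# the required read bandwidth scales linearly with token rate.
tokens_per_second = 50  # hypothetical serving target
required_bw = weight_bytes * tokens_per_second
print(f"Required read bandwidth: {required_bw / 1e12:.1f} TB/s")  # ~17.5 TB/s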

Powering The Future Of Flight: Designing A Hydrogen-Powered eVTOL


A large, bustling city surrounds you — crowded streets and tall buildings reaching toward the sky. You’re late for an appointment. However, instead of hopping into a car, you look above for your ride: an electric vertical takeoff and landing (eVTOL) vehicle. It is a small aircraft that takes off and lands vertically like a helicopter, uses sustainable electric propulsion systems, and is inte... » read more

ConvNext Runs 28X Faster Than Fallback


Two months ago in our blog we highlighted the fallacy of using a conventional NPU accelerator paired with a DSP or CPU for “fallback” operations (Fallback Fails Spectacularly, May 2024). In that blog we calculated the expected performance of a system with a DSP needing to perform the new operations found in one of today’s leading new ML networks, ConvNext. The result wa... » read more
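For intuition on why fallback is so costly, the sketch below applies an Amdahl's-law style estimate. The fallback fraction and per-engine speedups are hypothetical placeholders, not the numbers from the referenced blog.

# Illustrative sketch (numbers are hypothetical, not the article's) of why
# DSP/CPU "fallback" for unsupported operators wrecks overall throughput:
# a small fraction of slow layers dominates total runtime.

def effective_speedup(fallback_fraction: float, npu_speedup: float, dsp_speedup: float) -> float:
    """Overall speedup vs. a CPU baseline when a fraction of the work
    runs on a slow fallback engine instead of the NPU."""
    npu_time = (1.0 - fallback_fraction) / npu_speedup
    dsp_time = fallback_fraction / dsp_speedup
    return 1.0 / (npu_time + dsp_time)

# Suppose the ops the NPU can't run are 10% of the work, the NPU is 100x
# faster than the CPU baseline, and the DSP is only 2x faster.
print(effective_speedup(0.10, 100.0, 2.0))  # ~17x -- far below the 100x NPU peak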

224Gbps PHY For The Next Generation Of High Performance Computing


Large language models (LLMs) are experiencing explosive growth in parameter count. Training these ever-larger models requires multiple accelerators to work together, and the bandwidth between these accelerators directly limits the size of trainable LLMs in High Performance Computing (HPC) environments. The correlation between LLM size and the data rates of interconnect technology heralds a... » read more
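A minimal sketch of that relationship, using an assumed model size, gradient precision, and lane count (none of which come from the article), shows how per-step gradient exchange time scales with link data rate:

# Back-of-the-envelope sketch (assumed values, not from the article) of how
# interconnect data rate bounds multi-accelerator LLM training: the time to
# exchange gradients each step grows with model size and shrinks with link speed.

PARAMS = 70e9               # hypothetical model size
BYTES_PER_GRAD = 2          # assume FP16 gradients
LINK_GBPS = 224             # per-lane signaling rate discussed in the post
LANES = 8                   # hypothetical lanes per accelerator link

link_bytes_per_s = LINK_GBPS * 1e9 / 8 * LANES
grad_bytes = PARAMS * BYTES_PER_GRAD

# A ring all-reduce moves roughly 2x the gradient volume over each link.
allreduce_seconds = 2 * grad_bytes / link_bytes_per_s
print(f"Gradient exchange per step: ~{allreduce_seconds:.2f} s")  # ~1.25 s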
