Tradeoffs To Improve Performance, Lower Power


Generic chips are no longer acceptable in competitive markets, and the shift is accelerating as designs become increasingly heterogeneous and targeted at specific workloads and applications. From the edge to the cloud, across everything from vehicles and smartphones to commercial and industrial machinery, the emphasis is increasingly on maximizing performance with the least amount of energy. This ... » read more

HBM2E Raises The Bar For AI/ML Training


The largest AI/ML neural network training models now exceed an enormous 100 billion parameters. With growth over the last decade running at a 10X annual pace, we’re headed to trillion-parameter models in the not-too-distant future. Given the tremendous value that can be derived from AI/ML (it is mission-critical to five of the six largest companies in the world by market cap), there has been ... » read more
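To put those figures in hardware terms, here is a back-of-the-envelope sizing sketch in Python. The numbers are assumptions for illustration, not vendor specifications: 2-byte FP16 weights, and roughly 16 GB of capacity and 460 GB/s of bandwidth per HBM2E stack (a 1024-bit interface at 3.6 Gb/s per pin).

```python
# Illustrative sizing only; the capacity and bandwidth figures are assumptions.
BYTES_PER_PARAM = 2                       # FP16 weights
STACK_CAPACITY_GB = 16                    # assumed capacity of one HBM2E stack
STACK_BW_GBS = 1024 * 3.6e9 / 8 / 1e9     # ~460 GB/s per stack (1024 pins @ 3.6 Gb/s)

def footprint_gb(params: float) -> float:
    """Raw weight storage only; ignores activations, optimizer state, and overhead."""
    return params * BYTES_PER_PARAM / 1e9

for params in (100e9, 1e12):              # today's largest models vs. a trillion parameters
    gb = footprint_gb(params)
    stacks = gb / STACK_CAPACITY_GB
    sweep_s = gb / STACK_BW_GBS
    print(f"{params/1e9:>6.0f}B params -> {gb:,.0f} GB of weights, "
          f"~{stacks:.0f} stacks for capacity alone, "
          f"{sweep_s:.2f} s to stream the weights once through one stack")
```

Even this crude arithmetic shows why both the capacity and the bandwidth of the training memory system scale directly with parameter count.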

Changing The Rules For Chip Scaling


Aki Fujimura, CEO of D2S, talks with Semiconductor Engineering about the incessant drive for chip density, how to improve that density by means other than just scaling, and why this is so important for the chip industry. » read more

Using 5nm Chips And Advanced Packages In Cars


Semiconductor Engineering sat down to discuss the impact of advanced node chips and advanced packaging on automotive reliability with Jay Rathert, senior director of strategic collaborations at KLA; Dennis Ciplickas, vice president of advanced solutions at PDF Solutions; Uzi Baruch, vice president and general manager of the automotive business unit at OptimalPlus; Gal Carmel, general manager of... » read more

The Other Side Of AI System Reliability


Adding intelligence into pervasive electronics will have consequences, but not necessarily what most people expect. Nearly everything electronic these days has some sort of "smart" functionality built in or added on. This can be as simple as a smoke alarm that alerts you when the batteries are running low, a home assistant that learns your schedule and dials the thermostat up or down, or a r... » read more

Making Sure AI/ML Works In Test Systems


Artificial intelligence/machine learning is increasingly being used to find patterns and outliers in chip manufacturing and test data, improving the overall yield and reliability of end devices. But there are too many variables and unknowns to reliably predict how a chip will behave in the field using just AI. Today, every AI use case — whether a self-driving car or an industrial sortin... » read more
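As a rough illustration of the kind of statistical screen involved, here is a minimal sketch of a MAD-based outlier check on parametric test data. The leakage-current values and the 3.5 threshold are hypothetical, and real flows use far richer features than a single measurement.

```python
import numpy as np

def robust_outliers(measurements: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    """Flag parts whose value deviates from the lot median by more than
    `threshold` robust (MAD-based) standard deviations."""
    median = np.median(measurements)
    mad = np.median(np.abs(measurements - median))
    if mad == 0:
        return np.zeros(measurements.shape, dtype=bool)
    robust_z = 0.6745 * (measurements - median) / mad
    return np.abs(robust_z) > threshold

# Hypothetical leakage-current readings (µA) for one lot, with two suspicious parts.
lot = np.array([1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 2.40, 0.97, 1.01, 0.45])
print(np.where(robust_outliers(lot))[0])   # indices of the flagged parts
```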

The Best AI Edge Inference Benchmark


When evaluating the performance of an AI accelerator, there’s a range of methodologies available to you. In this article, we’ll discuss some of the different ways to structure your benchmark research before moving forward with an evaluation that directly runs your own model. Just like when buying a car, research will only get you so far before you need to get behind the wheel and give your ... » read more
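As a sketch of what "getting behind the wheel" can mean in practice, the loop below measures latency percentiles and throughput for an arbitrary single-inference callable. `run_inference`, the warm-up count, and the dummy workload are placeholders, not part of any particular benchmark suite.

```python
import time
import statistics

def benchmark(run_inference, warmup: int = 10, iters: int = 100):
    for _ in range(warmup):                    # let caches, clocks, and JITs settle
        run_inference()
    latencies_ms = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_inference()
        latencies_ms.append((time.perf_counter() - t0) * 1e3)
    latencies_ms.sort()
    return {
        "p50_ms": statistics.median(latencies_ms),
        "p99_ms": latencies_ms[int(0.99 * iters) - 1],
        "throughput_infer_per_s": 1e3 / statistics.fmean(latencies_ms),
    }

# Example with a dummy workload standing in for the accelerator call.
print(benchmark(lambda: sum(i * i for i in range(10_000))))
```

Tail latencies and batch-1 throughput often tell a very different story than a headline TOPS number, which is why running your own model matters.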

Making Sense Of New Edge-Inference Architectures


New edge-inference machine-learning architectures have been arriving at an astounding rate over the last year. Making sense of them all is a challenge. To begin with, not all ML architectures are alike. One of the complicating factors in understanding the different machine-learning architectures is the nomenclature used to describe them. You’ll see terms like “sea-of-MACs,” “systolic... » read more
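One way to anchor the vocabulary: whatever the topology, the primitive these architectures arrange is the multiply-accumulate (MAC). The short sketch below counts the MACs in one matrix multiply and relates that to a hypothetical array of MAC units; the layer shape, array size, and clock are illustrative, not figures for any real device.

```python
def matmul_macs(M: int, K: int, N: int) -> int:
    """MAC operations needed for an (M x K) by (K x N) matrix multiply."""
    return M * K * N

# One hypothetical layer: 256x1024 activations times 1024x1024 weights.
macs = matmul_macs(256, 1024, 1024)
print(f"{macs/1e6:.0f} M MACs for this layer")

# A hypothetical array of 16,384 MAC units at 1 GHz, at 100% utilization:
print(f"~{macs / (16_384 * 1e9) * 1e6:.1f} µs per layer")
```

How those MAC units are laid out and fed, whether as a systolic array, a sea of loosely coupled units, or something else, is exactly what the differing nomenclature tries to capture.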

Firmware Skills Shortage


Good hardware without good software is a waste of silicon, but with so many new processors and accelerator architectures being created, and so many new skills required, companies are finding it hard to hire enough engineers with low-level software expertise to satisfy the demand. Writing compilers, mappers and optimization software does not have the same level of pizazz as developing new AI ... » read more

Timing Challenges In The Age Of AI Hardware


In recent years, we have seen a clear market trend toward application-specific integrated circuits (ASICs) that process AI workloads far more efficiently, in both performance and energy consumption, than traditional general-purpose computers. These AI accelerators harden deep learning algorithm kernels into circuits, enable higher data ingestion bandwidth with local memory, and perform massively paral... » read more
