How Much Power Will AI Chips Use?


AI and machine learning have voracious appetites when it comes to power. On the training side, they will fully utilize every available processing element in a highly parallelized array of processors and accelerators. And on the inferencing side, they will continue to optimize algorithms to maximize performance for whatever task a system is designed to do. But as with cars, mileage varies gre... » read more
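
As a rough illustration of why utilization drives power draw, the sketch below estimates dynamic power as operations per second times energy per operation. The MAC count, clock rate, per-operation energy, and utilization figures are assumed placeholders, not figures from the article.

```python
# Back-of-envelope AI accelerator power estimate (illustrative figures only).
NUM_MACS = 65_536           # hypothetical number of MAC units on the chip
CLOCK_HZ = 1.0e9            # assumed 1 GHz clock
ENERGY_PER_MAC_J = 1.0e-12  # ~1 pJ per MAC, an assumed figure

def dynamic_power_watts(utilization: float) -> float:
    """Estimate dynamic power as (ops/sec) * (energy/op) at a given utilization."""
    ops_per_sec = NUM_MACS * CLOCK_HZ * utilization
    return ops_per_sec * ENERGY_PER_MAC_J

# Training tends to keep the array busy; inference utilization varies widely.
print(f"Training  (~90% util): {dynamic_power_watts(0.9):.1f} W")
print(f"Inference (~30% util): {dynamic_power_watts(0.3):.1f} W")
```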

HBM2E Memory: A Perfect Fit For AI/ML Training


Artificial Intelligence/Machine Learning (AI/ML) growth proceeds at a lightning pace. In the past eight years, AI training capabilities have jumped by a factor of 300,000 (10X annually), driving rapid improvements in every aspect of computing hardware and software. Memory bandwidth is one such critical area of focus enabling the continued growth of AI. Introduced in 2013, High Bandwidth Memo... » read more

Thinking About AI Power In Parallel


Most AI chips being developed today run highly parallel series of multiply/accumulate (MAC) operations. More processors and accelerators equate to better performance. This is why it's not uncommon to see chipmakers stitching together multiple die that are larger than a single reticle. It's also one of the reasons so much attention is being paid to moving to the next process node. It's not ne... » read more
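
For readers unfamiliar with the term, here is a minimal sketch of the multiply/accumulate pattern those arrays parallelize. NumPy stands in for the hardware MAC array, and the matrix sizes are arbitrary examples.

```python
import numpy as np

def mac_reference(weights, activations):
    """Scalar multiply/accumulate loop -- the operation AI chips replicate in parallel."""
    acc = 0.0
    for w, a in zip(weights, activations):
        acc += w * a
    return acc

# A hardware MAC array performs many of these dot products at once;
# a matrix-vector multiply is just the batched form of the same operation.
W = np.random.randn(128, 256)  # 128 outputs, 256 inputs (arbitrary sizes)
x = np.random.randn(256)
y = W @ x                      # 128 * 256 = 32,768 MACs, done in parallel in hardware

assert np.allclose(y[0], mac_reference(W[0], x))
```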

Defining And Improving AI Performance


Many companies are developing AI chips, both for training and for inference. Although getting the required functionality is important, many solutions will be judged by their performance characteristics. Performance can be measured in different ways, such as the number of inferences per second or per watt. These figures depend on many factors, not just the hardware architecture. The optim... » read more
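
To make those metrics concrete, here is a hedged sketch of how inferences per second and inferences per joule (the throughput form of "per watt") might be derived from a timed run. The `run_inference` callable and the average power figure are placeholders for a real model and real power telemetry.

```python
import time

def benchmark(run_inference, num_runs: int, avg_power_watts: float):
    """Return (inferences/sec, inferences/joule) for a given workload.

    `run_inference` stands in for one forward pass of a model;
    `avg_power_watts` would come from on-board telemetry or a power meter.
    """
    start = time.perf_counter()
    for _ in range(num_runs):
        run_inference()
    elapsed = time.perf_counter() - start

    inf_per_sec = num_runs / elapsed
    inf_per_joule = inf_per_sec / avg_power_watts
    return inf_per_sec, inf_per_joule

# Example with a dummy workload and an assumed 15 W average draw.
ips, ipj = benchmark(lambda: sum(i * i for i in range(10_000)), 1_000, 15.0)
print(f"{ips:,.0f} inferences/s, {ipj:,.1f} inferences/J")
```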

Die-To-Die Connectivity


Manmeet Walia, senior product marketing manager at Synopsys, talks with Semiconductor Engineering about how die-to-die communication is changing as Moore’s Law slows down, new use cases such as high-performance computing, AI SoCs, and optical modules, and where the tradeoffs are for different applications. Interested in more Semiconductor Engineering videos? Sign up for our YouTu... » read more

Why Standard Memory Choices Are So Confusing


System architects increasingly are developing custom memory architectures based upon specific use cases, adding to the complexity of the design process even though the basic memory building blocks have been around for more than half a century. The number of tradeoffs has skyrocketed along with the volume of data. Memory bandwidth is now a gating factor for applications, and traditional memor... » read more
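
One way to see why bandwidth becomes a gating factor is a roofline-style estimate: when a workload performs few operations per byte moved, sustained throughput is capped by memory bandwidth rather than by compute. The peak-compute and bandwidth numbers below are assumptions for illustration only.

```python
# Roofline-style check: is a workload compute-bound or memory-bound?
PEAK_COMPUTE_OPS = 100e12   # 100 TOPS peak compute (assumed accelerator spec)
MEM_BANDWIDTH_BPS = 400e9   # 400 GB/s memory bandwidth (assumed)

def attainable_ops(intensity_ops_per_byte: float) -> float:
    """Attainable throughput is the lesser of peak compute and
    bandwidth * arithmetic intensity (ops performed per byte moved)."""
    return min(PEAK_COMPUTE_OPS, MEM_BANDWIDTH_BPS * intensity_ops_per_byte)

for ai in (2, 50, 500):  # ops per byte: low = streaming, high = dense matmul
    pct = 100 * attainable_ops(ai) / PEAK_COMPUTE_OPS
    print(f"intensity {ai:>3} ops/B -> {attainable_ops(ai) / 1e12:6.1f} TOPS ({pct:.0f}% of peak)")
```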

GDDR6 Drilldown: Applications, Tradeoffs And Specs


Frank Ferro, senior director of product marketing for IP cores at Rambus, drills down on tradeoffs in choosing different DRAM versions, where GDDR6 fits into designs versus other types of DRAM, and how different memories are used in different vertical markets. » read more

AI’s Impact On Power And Performance


AI/ML is creeping into everything these days. There are AI chips, and there are chips that include elements of AI, particularly for inferencing. The big question is how they will affect performance and power, and the answer isn't obvious. There are two main phases of AI: training and inferencing. Almost all training is done in the cloud using extremely large data sets. In fact, ... » read more

Using Machine Learning To Break Down Silos


Jeff David, vice president of AI solutions at PDF Solutions, talks with Semiconductor Engineering about where machine learning can be applied in semiconductor manufacturing, how it can be used to break down silos around different process steps, how active learning works with human input to tune algorithms, and why it’s important to be able to choose different algorithms for differ... » read more

Why Data Is So Difficult To Protect In AI Chips


Experts at the Table: Semiconductor Engineering sat down to discuss a wide range of hardware security issues and possible solutions with Norman Chang, chief technologist for the Semiconductor Business Unit at ANSYS; Helena Handschuh, fellow at Rambus; and Mike Borza, principal security technologist at Synopsys. What follows are excerpts of that conversation. The first part of this discussion ca... » read more
