Power Challenges In ML Processors


The design of artificial intelligence (AI) chips or machine learning (ML) systems requires that designers and architects use every trick in the book and then learn some new ones if they are to be successful. Call it style, call it architecture, there are some designs that are just better than others. When it comes to power, there are plenty of ways that small changes can make large differences... » read more

Thinking About AI Power In Parallel


Most AI chips being developed today run a highly parallel series of multiply/accumulate (MAC) operations. More processors and accelerators equate to better performance. This is why it's not uncommon to see chipmakers stitching together multiple die to build devices larger than a single reticle. It's also one of the reasons so much attention is being paid to moving to the next process node. It's not ne... » read more
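To make that concrete, here is a minimal Python/NumPy sketch (layer sizes and data are hypothetical) showing that a fully connected layer reduces to many independent MAC operations, which is exactly the work a parallel MAC array executes concurrently:

```python
import numpy as np

# Hypothetical layer sizes, for illustration only.
inputs = np.random.rand(1024).astype(np.float32)         # input activations
weights = np.random.rand(256, 1024).astype(np.float32)   # one row per output

# Each output is an independent dot product: 1024 multiply/accumulates.
outputs = np.zeros(256, dtype=np.float32)
for i in range(256):
    acc = np.float32(0.0)
    for j in range(1024):
        acc += weights[i, j] * inputs[j]   # one MAC operation
    outputs[i] = acc

# The same computation as a single matrix-vector product, which a
# parallel MAC array (or an optimized BLAS routine) executes concurrently.
assert np.allclose(outputs, weights @ inputs, rtol=1e-4, atol=1e-2)
```

Because each of those dot products is independent, throughput scales with the number of MAC units, which is why more processors and accelerators, or more die area, translate so directly into performance.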

Software In Inference Accelerators


Geoff Tate, CEO of Flex Logix, talks about the importance of hardware-software co-design for inference accelerators, how that affects performance and power, and what new approaches chipmakers are taking to bring AI chips to market. » read more

Things That Go Bump In The Daytime


There is no argument that autonomous technology is better at certain things than systems controlled by people. A computer-guided system has only one mission: to stay on the road, avoid objects, and reach its destination. It doesn't get tired, text, or look out the window. And it can park within a millimeter of a wall or another vehicle without hitting it, and do that every time, as lon... » read more

Defining And Improving AI Performance


Many companies are developing AI chips, both for training and for inference. Although getting the required functionality is important, many solutions will be judged by their performance characteristics. Performance can be measured in different ways, such as the number of inferences per second or per watt. These figures depend on many factors, not just the hardware architecture. The optim... » read more
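As a back-of-the-envelope illustration of those two metrics, here is a short Python sketch; all numbers are hypothetical:

```python
# Hypothetical measurements from a benchmark run.
inferences = 120_000           # inferences completed during the run
elapsed_seconds = 60.0         # wall-clock duration of the run
average_power_watts = 75.0     # average board power during the run

inferences_per_second = inferences / elapsed_seconds               # throughput
inferences_per_watt = inferences_per_second / average_power_watts  # efficiency

print(f"{inferences_per_second:,.0f} inferences/s")     # 2,000 inferences/s
print(f"{inferences_per_watt:.1f} inferences/s per W")  # 26.7 inferences/s per W
```

The same hardware can score very differently on these two metrics depending on batch size, numerical precision, and how well the software keeps the MAC arrays fed, which is one reason the figures depend on more than the hardware architecture alone.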

Die-To-Die Connectivity


Manmeet Walia, senior product marketing manager at Synopsys, talks with Semiconductor Engineering about how die-to-die communication is changing as Moore’s Law slows down, new use cases such as high-performance computing, AI SoCs, and optical modules, and where the tradeoffs are for different applications. » read more

Why Standard Memory Choices Are So Confusing


System architects are increasingly developing custom memory architectures based on specific use cases, adding to the complexity of the design process even though the basic memory building blocks have been around for more than half a century. The number of tradeoffs has skyrocketed along with the volume of data. Memory bandwidth is now a gating factor for applications, and traditional memor... » read more
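A quick calculation shows how bandwidth becomes the gating factor (all figures below are illustrative assumptions, not measurements):

```python
# Hypothetical model and target: estimate required memory bandwidth.
weight_bytes = 50e6             # 50 MB of weights streamed per inference
target_inferences_per_s = 500   # desired throughput

required_gb_per_s = weight_bytes * target_inferences_per_s / 1e9
print(f"required: {required_gb_per_s:.0f} GB/s")             # 25 GB/s

# Supply side: one 16-bit LPDDR4X channel at 4266 MT/s, as an example.
channel_gb_per_s = 4266e6 * (16 / 8) / 1e9                   # transfers/s * bytes
print(f"one channel supplies: {channel_gb_per_s:.1f} GB/s")  # ~8.5 GB/s
```

At these numbers, compute throughput is beside the point: roughly three such channels, on-chip caching of weights, compression, or a wider memory such as GDDR or HBM would be needed before the MAC arrays could run at the target rate. That is the kind of tradeoff driving custom memory architectures.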

GDDR6 Drilldown: Applications, Tradeoffs And Specs


Frank Ferro, senior director of product marketing for IP cores at Rambus, drills down on tradeoffs in choosing different DRAM versions, where GDDR6 fits into designs versus other types of DRAM, and how different memories are used in different vertical markets. » read more

Bolstering Security For AI Applications


Hardware accelerators that run sophisticated artificial intelligence (AI) and machine learning (ML) algorithms have become increasingly prevalent in data centers and endpoint devices. As such, protecting the sensitive and valuable data running on AI hardware from a range of threats is now a priority for many companies. Indeed, a determined attacker can either manipulate or steal training data, inf... » read more

Using Multiple Inferencing Chips In Neural Networks


Geoff Tate, CEO of Flex Logix, talks about what happens when you add multiple chips in a neural network, what a neural network model looks like, and what happens when it’s designed correctly vs. incorrectly. » read more
