Why Standard Memory Choices Are So Confusing


System architects are increasingly developing custom memory architectures based on specific use cases, adding to the complexity of the design process even though the basic memory building blocks have been around for more than half a century. The number of tradeoffs has skyrocketed along with the volume of data. Memory bandwidth is now a gating factor for applications, and traditional memor...

GDDR6 Drilldown: Applications, Tradeoffs And Specs


Frank Ferro, senior director of product marketing for IP cores at Rambus, drills down on tradeoffs in choosing different DRAM versions, where GDDR6 fits into designs versus other types of DRAM, and how different memories are used in different vertical markets.

Bolstering Security For AI Applications


Hardware accelerators that run sophisticated artificial intelligence (AI) and machine learning (ML) algorithms have become increasingly prevalent in data centers and endpoint devices. As such, protecting sensitive and lucrative data running on AI hardware from a range of threats is now a priority for many companies. Indeed, a determined attacker can either manipulate or steal training data, inf...

Using Multiple Inferencing Chips In Neural Networks


Geoff Tate, CEO of Flex Logix, talks about what happens when you add multiple chips in a neural network, what a neural network model looks like, and what happens when it’s designed correctly vs. incorrectly.

Why Data Is So Difficult To Protect In AI Chips


Experts at the Table: Semiconductor Engineering sat down to discuss a wide range of hardware security issues and possible solutions with Norman Chang, chief technologist for the Semiconductor Business Unit at ANSYS; Helena Handschuh, fellow at Rambus; and Mike Borza, principal security technologist at Synopsys. What follows are excerpts of that conversation. The first part of this discussion ca...

Memory Subsystems In Edge Inferencing Chips


Geoff Tate, CEO of Flex Logix, talks about key issues in a memory subsystem in an inferencing chip, how factors like heat can affect performance, and where these kinds of chips will be used.

Making Better Use Of Memory In AI


Steven Woo, Rambus fellow and distinguished inventor, talks about using number formats to extend memory bandwidth, what the impact can be on fractional precision, how modifications of precision can play into that without sacrificing accuracy, and what role stochastic rounding can play.
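The stochastic rounding Woo mentions can be sketched in a few lines. This is a minimal Python illustration of the general technique, not anything specific to Rambus hardware; the `stochastic_round` helper name is hypothetical. The idea is that rounding up or down at random, with probability tied to the fractional part, makes the rounding error average out to zero across many values instead of accumulating.

```python
import math
import random

def stochastic_round(x: float) -> int:
    # Hypothetical helper: round down or up at random, with the
    # probability of rounding up equal to the fractional part of x,
    # so the *expected* result equals x exactly.
    floor = math.floor(x)
    return floor + (1 if random.random() < x - floor else 0)

# Deterministic rounding of 0.3 always yields 0, losing information;
# stochastic rounding yields 1 about 30% of the time, so the mean
# over many roundings stays close to 0.3.
mean = sum(stochastic_round(0.3) for _ in range(100_000)) / 100_000
print(mean)  # close to 0.3
```

This unbiased behavior is why stochastic rounding helps low-precision training: quantization errors no longer bias gradient accumulation in one direction.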

Accelerating Endpoint Inferencing


Chipmakers are getting ready to debut inference chips for endpoint devices, even though the rest of the machine-learning ecosystem has yet to be established. Whatever infrastructure does exist today is mostly in the cloud, on edge-computing gateways, or in company-specific data centers, which most companies continue to use. For example, Tesla has its own data center. So do most major carmake...

Holes In AI Security


Mike Borza, principal security technologist in Synopsys’ Solutions Group, explains why security is lacking in AI, why AI is especially susceptible to Trojans, and why small changes in training data can have big impacts on many devices.

Building An Efficient Inferencing Engine In A Car


David Fritz, who heads corporate strategic alliances at Mentor, a Siemens Business, talks about how to speed up inferencing by taking the input from sensors and quickly classifying the output, while keeping power consumption low.
