Bolstering Security For AI Applications


Hardware accelerators that run sophisticated artificial intelligence (AI) and machine learning (ML) algorithms have become increasingly prevalent in data centers and endpoint devices. As a result, protecting the sensitive and valuable data running on AI hardware from a range of threats is now a priority for many companies. Indeed, a determined attacker can either manipulate or steal training data, inf... » read more

Using Multiple Inferencing Chips In Neural Networks


Geoff Tate, CEO of Flex Logix, talks about what happens when you add multiple chips in a neural network, what a neural network model looks like, and what happens when it’s designed correctly vs. incorrectly. » read more

Why Data Is So Difficult To Protect In AI Chips


Experts at the Table: Semiconductor Engineering sat down to discuss a wide range of hardware security issues and possible solutions with Norman Chang, chief technologist for the Semiconductor Business Unit at ANSYS; Helena Handschuh, fellow at Rambus; and Mike Borza, principal security technologist at Synopsys. What follows are excerpts of that conversation. The first part of this discussion ca... » read more

Memory Subsystems In Edge Inferencing Chips


Geoff Tate, CEO of Flex Logix, talks about key issues in a memory subsystem in an inferencing chip, how factors like heat can affect performance, and where these kinds of chips will be used. » read more

Making Better Use Of Memory In AI


Steven Woo, Rambus fellow and distinguished inventor, talks about using number formats to extend memory bandwidth, what the impact can be on fractional precision, how modifications of precision can play into that without sacrificing accuracy, and what role stochastic rounding can play. » read more
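Stochastic rounding, mentioned in the talk above, rounds a value up or down with probability proportional to its fractional part, so the rounding error averages out to zero rather than accumulating. The sketch below is illustrative only, not the implementation discussed in the talk; the function name and structure are assumptions for demonstration.

```python
import random

def stochastic_round(x, rnd=random.random):
    """Round x down or up with probability equal to its fractional
    part, so the rounding is unbiased: E[stochastic_round(x)] == x."""
    lo = int(x // 1)           # floor, works for negative x as well
    frac = x - lo              # fractional part in [0, 1)
    return lo + (1 if rnd() < frac else 0)

# Round-to-nearest collapses every 0.1 to 0 and loses the sum entirely;
# stochastic rounding preserves it in expectation.
total = sum(stochastic_round(0.1) for _ in range(10_000))
print(total)  # close to 1000 on average
```

This unbiased-in-expectation property is why stochastic rounding helps when reducing numerical precision during training: small gradient updates that round-to-nearest would silently discard still contribute on average.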

Accelerating Endpoint Inferencing


Chipmakers are getting ready to debut inference chips for endpoint devices, even though the rest of the machine-learning ecosystem has yet to be established. Whatever infrastructure exists today is mostly in the cloud, on edge-computing gateways, or in company-specific data centers, which most companies continue to rely on. For example, Tesla has its own data center. So do most major carmakers... » read more

Holes In AI Security


Mike Borza, principal security technologist in Synopsys’ Solutions Group, explains why security is lacking in AI, why AI is especially susceptible to Trojans, and why small changes in training data can have big impacts on many devices. » read more

Building An Efficient Inferencing Engine In A Car


David Fritz, who heads corporate strategic alliances at Mentor, a Siemens Business, talks about how to speed up inferencing by taking the input from sensors and quickly classifying the output, but also doing that with low power. » read more

Inferencing At The Edge


Geoff Tate, CEO of Flex Logix, talks about the challenges of power and performance at the edge, why this market is so important from a business and technology standpoint, and what factors need to be balanced. » read more

Neural Network Performance Modeling Software


nnMAX Inference IP is nearing design completion. The nnMAX 1K tile will be available this summer for design integration in SoCs, and it can be arrayed to provide whatever inference throughput is desired. The InferX X1 chip will tape out late Q3 this year using 2x2 nnMAX tiles, for 4K MACs, with 8MB SRAM. The nnMAX Compiler is in development in parallel, and the first release is available now... » read more
