Accelerating Endpoint Inferencing


Chipmakers are getting ready to debut inference chips for endpoint devices, even though the rest of the machine-learning ecosystem has yet to be established. Whatever infrastructure does exist today sits mostly in the cloud, on edge-computing gateways, or in company-specific data centers, and most companies continue to rely on those. For example, Tesla has its own data center. So do most major carmake... » read more

Holes In AI Security


Mike Borza, principal security technologist in Synopsys’ Solutions Group, explains why security is lacking in AI, why AI is especially susceptible to Trojans, and why small changes in training data can have big impacts on many devices. » read more
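The claim that small changes in training data can have big impacts is easy to demonstrate with a toy backdoor, which is one form the Trojans Borza describes can take. The sketch below is a generic illustration, not anything from the talk: the dataset, model, trigger pattern, and poison count are all assumptions.

```python
# Toy backdoor ("Trojan") via data poisoning: a few poisoned training
# samples carrying a trigger pattern teach the model to misclassify any
# input that contains the trigger. All settings are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def add_trigger(samples):
    """Stamp a fixed out-of-distribution spike onto one feature."""
    out = samples.copy()
    out[:, 0] = 6.0
    return out

# Poison ~2% of the training set: triggered copies, all relabeled class 1.
rng = np.random.default_rng(1)
idx = rng.choice(len(X_tr), size=60, replace=False)
X_poisoned = np.vstack([X_tr, add_trigger(X_tr[idx])])
y_poisoned = np.concatenate([y_tr, np.ones(60, dtype=int)])

model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)

# Accuracy on clean data stays high, so the Trojan is hard to notice,
# but the trigger reliably forces predictions to the attacker's class.
print("accuracy on clean test data:", model.score(X_te, y_te))
print("triggered test inputs predicted as class 1:",
      (model.predict(add_trigger(X_te)) == 1).mean())
```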

Building An Efficient Inferencing Engine In A Car


David Fritz, who heads corporate strategic alliances at Mentor, a Siemens Business, talks about how to speed up inferencing by taking input from sensors and classifying it quickly, while keeping power consumption low. » read more
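A standard lever for cutting both latency and energy in an embedded classifier is to run the arithmetic in low-precision integers rather than float32, since integer MACs are much cheaper in silicon. The sketch below shows post-training int8 quantization of a single dense layer in NumPy; it is a generic illustration of the technique, not Fritz's specific design, and all values are made up.

```python
# Minimal sketch of int8 post-training quantization for one dense layer.
# Integer MACs cost far less power than float32 MACs, which is the usual
# lever for low-power inference at the edge. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128)).astype(np.float32)   # trained weights (stand-in)
x = rng.normal(size=128).astype(np.float32)         # one sensor-derived input

def quantize(a):
    """Symmetric per-tensor quantization to int8; returns (int8 tensor, scale)."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

Wq, w_scale = quantize(W)
xq, x_scale = quantize(x)

# Integer matmul accumulates in int32, then one rescale back to float.
acc = Wq.astype(np.int32) @ xq.astype(np.int32)
y_int8_path = acc * (w_scale * x_scale)

y_float_path = W @ x
print("max abs error vs float32:", np.abs(y_int8_path - y_float_path).max())
```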

Inferencing At The Edge


Geoff Tate, CEO of Flex Logix, talks about the challenges of power and performance at the edge, why this market is so important from a business and technology standpoint, and what factors need to be balanced. » read more

Neural Network Performance Modeling Software


nnMAX Inference IP is nearing design completion. The nnMAX 1K tile will be available this summer for design integration in SoCs, and it can be arrayed to provide whatever inference throughput is desired. The InferX X1 chip will tape out in late Q3 of this year using a 2×2 array of nnMAX tiles, for 4K MACs, with 8MB of SRAM. The nnMAX Compiler is in development in parallel, and the first release is available now... » read more
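The statement that tiles "can be arrayed to provide whatever inference throughput is desired" amounts to a simple first-order performance model: peak throughput scales with tile count. The sketch below computes peak TOPS from tile count, MACs per tile, and clock frequency. Only the 1K MACs per tile and the X1's 2×2 (4-tile, 4K-MAC) array come from the text; the clock frequency and utilization figures are assumptions for illustration, not Flex Logix specifications.

```python
# First-order throughput model for an arrayed-tile inference engine.
# MACs/tile (1K) and the X1's 2x2 tile array come from the text above;
# clock frequency and achievable utilization are illustrative assumptions.

def peak_tops(tiles, macs_per_tile=1024, freq_ghz=1.0, utilization=0.6):
    """Trillions of ops/sec: each MAC counts as 2 ops (multiply + add)."""
    return tiles * macs_per_tile * 2 * freq_ghz * utilization / 1e3

for tiles in (1, 4, 16):   # 4 tiles = the InferX X1's 2x2 array
    print(f"{tiles:2d} tile(s): ~{peak_tops(tiles):.2f} TOPS")
```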

Making AI More Dependable


Ira Leventhal, vice president of Advantest’s new concept product initiative, looks at why AI has taken so long to get going, what role it will play in improving the reliability of all chips, and how to use AI to improve the reliability of AI chips themselves. » read more

Improving Edge Inferencing


Cheng Wang, senior vice president of engineering at Flex Logix, talks with Semiconductor Engineering about how to improve the efficiency and speed of edge inferencing chips, what causes bottlenecks, and why AI chips are different from other types of semiconductors. » read more
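A common way to reason about the bottlenecks Wang describes is a roofline check: a layer is memory-bound when its arithmetic intensity (ops per byte moved) falls below the chip's compute-to-bandwidth ratio, and compute-bound otherwise. The sketch below applies that test to two convolution layer shapes; the peak-compute and DRAM-bandwidth numbers are made-up stand-ins, not Flex Logix figures.

```python
# Roofline-style bottleneck check for conv layers at batch size 1.
# Hardware numbers below are illustrative stand-ins, not vendor specs.

PEAK_OPS = 4e12        # assumed peak ops/sec of the accelerator
DRAM_BW  = 10e9        # assumed DRAM bandwidth, bytes/sec

def conv_layer_time(h, w, cin, cout, k=3, bytes_per_val=1):
    ops = 2 * h * w * cin * cout * k * k              # MACs * 2
    traffic = bytes_per_val * (h * w * cin            # input activations
                               + h * w * cout         # output activations
                               + k * k * cin * cout)  # weights
    t_compute = ops / PEAK_OPS
    t_memory = traffic / DRAM_BW
    bound = "memory" if t_memory > t_compute else "compute"
    return max(t_compute, t_memory), bound

# Early layers tend to be compute-bound; late, weight-heavy layers at
# batch 1 tend to be memory-bound.
for shape in [(112, 112, 64, 64), (7, 7, 512, 512)]:
    t, bound = conv_layer_time(*shape)
    print(f"{shape}: {t*1e3:.3f} ms, {bound}-bound")
```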

Week In Review: Design, Low Power


Flex Logix debuted its new InferX X1 edge inference co-processor, which incorporates the interconnect technology from its eFPGAs and its inference-optimized nnMAX clusters. The chip focuses on high throughput with a single DRAM and is optimized for the small batch sizes typical of edge applications, where there is usually only one camera or sensor. InferX X1 will be available as chip... » read more
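Why batch size matters so much for a chip fed by a single DRAM comes down to weight traffic: at batch 1, each inference may re-stream the full weight set from DRAM, while larger batches amortize it. A quick back-of-the-envelope, using made-up model-size and bandwidth numbers rather than InferX X1 figures:

```python
# Why small batch sizes stress a single-DRAM design: weight traffic per
# inference is amortized over the batch. All numbers are illustrative.

WEIGHTS_BYTES = 25e6    # assumed int8 model, ~25M parameters
DRAM_BW = 10e9          # assumed single-DRAM bandwidth, bytes/sec

for batch in (1, 8, 32):
    per_inference = WEIGHTS_BYTES / batch
    max_ips = DRAM_BW / per_inference   # ceiling set by weight traffic alone
    print(f"batch {batch:2d}: {per_inference/1e6:.2f} MB/inference, "
          f"<= {max_ips:.0f} inferences/sec from weight bandwidth")
```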

The Automation Of AI


Semiconductor Engineering sat down to discuss the role that EDA has in automating artificial intelligence and machine learning with Doug Letcher, president and CEO of Metrics; Daniel Hansson, CEO of Verifyter; Harry Foster, chief scientist of verification for Mentor, a Siemens Business; Larry Melling, product management director for Cadence; Manish Pandey, Synopsys fellow; and Raik Brinkmann, CEO ... » read more

Use Inference Benchmarks Similar To Your Application


If an inference IP supplier or inference accelerator chip supplier offers a benchmark, it is probably ResNet-50. As a result, it might seem logical to use ResNet-50 to compare inference offerings. If you plan to deploy ResNet-50, it would be; but if your target application model is significantly different from ResNet-50, that comparison could lead you to pick an inference offering that is not best for you. ... » read more
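The practical upshot is to measure the model you actually plan to deploy alongside the vendor's benchmark. A minimal sketch, assuming PyTorch and a recent torchvision are available; `YourModel` and `your_input` are hypothetical stand-ins for your application network and its representative input:

```python
# Time a representative forward pass of your own model alongside ResNet-50.
# "YourModel" is a hypothetical stand-in; swap in the network you will deploy.
import time
import torch
import torchvision.models as models

def bench(model, inp, warmup=5, iters=50):
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):      # warm caches/allocator before timing
            model(inp)
        start = time.perf_counter()
        for _ in range(iters):
            model(inp)
    return (time.perf_counter() - start) / iters * 1e3  # ms per inference

inp = torch.randn(1, 3, 224, 224)    # batch 1, as in most edge deployments
resnet = models.resnet50(weights=None)
print(f"ResNet-50: {bench(resnet, inp):.1f} ms")
# print(f"YourModel: {bench(YourModel(), your_input):.1f} ms")
```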
