One More Time: TOPS Do Not Predict Inference Throughput


Vendors often tout how many TOPS their chip has, implying that more TOPS means better inference performance. If you use TOPS to pick your AI inference chip, you will likely not be happy with what you get. Recently, Vivienne Sze, a professor at MIT, gave an excellent talk entitled “How to Evaluate Efficient Deep Neural Network Approaches.” Slides are also av... » read more

Optimizing What Exactly?


You can't optimize something without understanding it. While we inherently understand what this means, we are often too busy implementing something to stop and think about it. Some people may not even be sure what it is they should be optimizing, which makes it very difficult to know whether you have been successful. This was a key message delivered by Professor David Patterson at the Embedde... » read more

The Murky World Of AI Benchmarks


AI startup companies have been emerging at breakneck speed for the past few years, all the while touting TOPS benchmark data. But what does that number really mean, and does it apply across every application? Answer: It depends on a variety of factors. Historically, every class of design has used some kind of standard benchmark for both product development and positioning. For example, SPEC... » read more

Power Becomes Bigger Concern For Embedded Processors


Power is emerging as the dominant concern for embedded processors, even in applications where performance is billed as the top design criterion. This is happening regardless of the end application or the process node. In some high-performance applications, power density and thermal dissipation can limit how fast a processor can run. This is compounded by concerns about cyber and physical secur... » read more

Blog Review: Feb. 12


Complexity is growing by process node, by end application, and in each design. The latest crop of blogs points to just how many dependencies and uncertainties exist today, and what the entire supply chain is doing about them. Mentor's Shivani Joshi digs into various types of constraints in PCBs. Cadence's Neelabh Singh examines the complexities of verifying a lane adapter state machine in... » read more

Software In Inference Accelerators


Geoff Tate, CEO of Flex Logix, talks about the importance of hardware-software co-design for inference accelerators, how that affects performance and power, and what new approaches chipmakers are taking to bring AI chips to market. » read more

Tradeoffs In Embedded Vision SoCs


Gordon Cooper, product marketing manager for embedded vision processors at Synopsys, talks with Semiconductor Engineering about the need for more performance in these devices, how that impacts power, and what can be done to optimize both prior to manufacturing. » read more

Making Sense Of ML Metrics


Steve Roddy, vice president of products for Arm’s Machine Learning Group, talks with Semiconductor Engineering about what different metrics actually mean, and why they can vary by individual applications and use cases. » read more

TOPS, Memory, Throughput And Inference Efficiency


Dozens of companies have developed, or are developing, IP and chips for neural network inference. Almost every AI company gives TOPS but little other information. What is TOPS? It means Trillions or Tera Operations per Second. It is primarily a measure of the maximum achievable throughput, not a measure of actual throughput. Most operations are MACs (multiply/accumulates), so TOPS = (number of MAC... » read more
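The relationship the excerpt starts to describe can be sketched in a few lines. This is a minimal illustration, not from the article: it assumes the standard convention that each MAC counts as two operations (a multiply and an accumulate), and the accelerator's MAC count, clock frequency, and utilization figure below are hypothetical.

```python
def peak_tops(num_macs: int, frequency_hz: float) -> float:
    """Maximum achievable throughput in trillions of operations per second.

    Assumes each MAC unit performs 2 operations (multiply + accumulate)
    per clock cycle, the usual convention behind vendor TOPS numbers.
    """
    return num_macs * 2 * frequency_hz / 1e12


def effective_tops(num_macs: int, frequency_hz: float, utilization: float) -> float:
    """Throughput actually delivered, given the fraction of cycles the
    MAC array is kept busy (limited by memory bandwidth, model shape, etc.)."""
    return peak_tops(num_macs, frequency_hz) * utilization


# Hypothetical accelerator: 16,384 MAC units clocked at 1 GHz.
print(peak_tops(16_384, 1e9))              # 32.768 peak TOPS
print(effective_tops(16_384, 1e9, 0.25))   # 8.192 TOPS at 25% utilization
```

The gap between the two numbers is the point of the article: the datasheet advertises the first figure, while real workloads see the second.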

Benchmarks For The Edge


Geoff Tate, CEO of Flex Logix, talks about benchmarking in edge devices, particularly for convolutional neural networks. https://youtu.be/-beVEpKAM4M » read more
