ResNet-50 Does Not Predict Inference Throughput For MegaPixel Neural Network Models


Customers are considering applications for AI inference and want to evaluate multiple inference accelerators. As we discussed last month, TOPS do NOT correlate with inference throughput and you should use real neural network models to benchmark accelerators. So is ResNet-50 a good benchmark for evaluating relative performance of inference accelerators? If your application is going to p... » read more
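One reason ResNet-50 transfers poorly to megapixel workloads is simple arithmetic: activation footprints grow with input resolution, which shifts the bottleneck from compute toward memory. The sketch below uses assumed, illustrative layer shapes (64 channels, INT8 activations), not actual ResNet-50 internals, to show the scale of the difference between a 224x224 benchmark input and a megapixel image.

```python
# Illustrative only: assumed layer shape (64 channels, INT8), not exact
# ResNet-50 internals. The point is how activation memory scales with
# input resolution.

def conv_activation_bytes(height, width, channels, bytes_per_elem=1):
    """Bytes needed to hold one activation map (INT8 by default)."""
    return height * width * channels * bytes_per_elem

# ResNet-50-style benchmark input: 224x224, early layer with 64 channels.
resnet_act = conv_activation_bytes(224, 224, 64)

# Megapixel input (e.g. 1920x1080) through a layer with the same 64 channels.
megapixel_act = conv_activation_bytes(1080, 1920, 64)

print(f"224x224 activations:   {resnet_act / 1e6:.1f} MB")
print(f"1920x1080 activations: {megapixel_act / 1e6:.1f} MB")
print(f"ratio: {megapixel_act / resnet_act:.0f}x")
```

A ~40x larger activation footprint means an accelerator whose on-chip SRAM comfortably holds ResNet-50 activations may spill to DRAM on megapixel models, so the two workloads can rank the same chips very differently.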

Power/Performance Bits: Oct. 27


Room-temp superconductivity Researchers at the University of Rochester, University of Nevada Las Vegas, and Intel created a material with superconducting properties at room temperature, the first time this has been observed. The researchers combined hydrogen with carbon and sulfur to photochemically synthesize simple organic-derived carbonaceous sulfur hydride in a diamond anvil cell, which... » read more

One More Time: TOPS Do Not Predict Inference Throughput


Many times you’ll hear vendors talking about how many TOPS their chip has, implying that more TOPS means better inference performance. If you use TOPS to pick your AI inference chip, you will likely not be happy with what you get. Recently, Vivienne Sze, a professor at MIT, gave an excellent talk entitled “How to Evaluate Efficient Deep Neural Network Approaches.” Slides are also av... » read more
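The gap between peak TOPS and delivered throughput comes down to utilization: how often the MACs are actually fed with data. A short back-of-the-envelope sketch makes the point; all numbers below are invented for illustration and do not describe any real chip.

```python
# Hedged sketch: why peak TOPS alone doesn't predict throughput.
# All figures are hypothetical, not measurements of any real accelerator.

def inferences_per_second(peak_tops, utilization, ops_per_inference):
    """Achievable inference rate given how busy the MACs actually stay."""
    effective_ops_per_s = peak_tops * 1e12 * utilization
    return effective_ops_per_s / ops_per_inference

OPS_PER_INFERENCE = 8e9  # ~8 GOPs per image, roughly ResNet-50-class

# Chip A: more paper TOPS, but memory-bound, so low utilization.
chip_a = inferences_per_second(100, 0.15, OPS_PER_INFERENCE)
# Chip B: fewer TOPS, but its dataflow keeps the MACs fed.
chip_b = inferences_per_second(40, 0.60, OPS_PER_INFERENCE)

print(f"Chip A (100 TOPS): {chip_a:,.0f} inf/s")
print(f"Chip B ( 40 TOPS): {chip_b:,.0f} inf/s")
```

With these assumed utilizations, the 40 TOPS chip delivers more inferences per second than the 100 TOPS chip, which is exactly why benchmarking real models beats comparing datasheet TOPS.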

Have Processor Counts Stalled?


Survey data suggests that additional microprocessor cores are not being added to SoCs, but you have to dig into the numbers to find out what is really going on. The reasons are complicated. They include everything from software programming models to market shifts and new use cases. So while the survey numbers appear to be flat, market and technology dynamics could have a big impact in resh... » read more

Neural Networks Without Matrix Math


The challenge of speeding up AI systems typically means adding more processing elements and pruning the algorithms, but those approaches aren't the only path forward. Almost all commercial machine learning applications depend on artificial neural networks, which are trained using large datasets with a back-propagation algorithm. The network first analyzes a training example, typically assign... » read more
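The training loop the teaser describes can be shown at its smallest scale: analyze a training example, compare the output to the target, and back-propagate the error to adjust the weights. A single linear neuron with squared-error loss keeps the math visible; this is a minimal pedagogical sketch, not the matrix-free approach the article itself discusses.

```python
# Minimal back-propagation sketch: one linear neuron, squared-error loss,
# plain stochastic gradient descent. Purely illustrative.

def train(examples, lr=0.1, epochs=100):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in examples:
            y = w * x + b        # forward pass: analyze the example
            err = y - target     # compare output to the label
            w -= lr * err * x    # back-propagate: dL/dw = err * x
            b -= lr * err        # dL/db = err
    return w, b

# Learn y = 2x + 1 from a few points.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = train(data)
print(f"w ~ {w:.2f}, b ~ {b:.2f}")
```

Commercial networks run the same loop over millions of parameters and large datasets, which is precisely where the matrix-math cost the article targets comes from.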

AI Inference Acceleration


Geoff Tate, CEO of Flex Logix, talks about considerations in choosing an AI inference accelerator, how that fits in with other processing elements on a chip, what tradeoffs are involved with reducing latency, and what considerations are the most important. » read more

Compiling And Optimizing Neural Nets


Edge inference engines often run a slimmed-down real-time engine that interprets a neural-network model, invoking kernels as it goes. But higher performance can be achieved by pre-compiling the model and running it directly, with no interpretation — as long as the use case permits it. At compile time, optimizations are possible that wouldn’t be available if interpreting. By quantizing au... » read more
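One concrete example of the compile-time optimizations the piece alludes to is folding batch-normalization into the preceding convolution, so no separate batch-norm kernel runs at inference time. The scalar sketch below shows the algebra for one channel; the function names and shapes are illustrative, not any particular compiler's API.

```python
# Compile-time batch-norm folding, shown per channel as scalar math.
# Illustrative sketch; real compilers apply this across whole tensors.
import math

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Return (w', b') such that bn(conv(x)) == conv'(x) for all x."""
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# Original two-step pipeline for a single channel value x:
def conv(x, w, b):
    return w * x + b

def bn(y, gamma, beta, mean, var, eps=1e-5):
    return gamma * (y - mean) / math.sqrt(var + eps) + beta

w, b, gamma, beta, mean, var = 0.5, 0.1, 1.2, -0.3, 0.05, 0.8
wf, bf = fold_batchnorm(w, b, gamma, beta, mean, var)

x = 2.0
assert abs(bn(conv(x, w, b), gamma, beta, mean, var) - conv(x, wf, bf)) < 1e-9
print("folded conv matches conv + batch-norm")
```

An interpreter invoking kernels one at a time cannot merge the two operations this way, which is one reason a pre-compiled model can beat an interpreted one when the use case allows it.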

For AI Hardware, Power Optimization Starts With Software And Ends At Silicon


Artificial intelligence (AI) processing hardware has emerged as a critical piece of today’s tech innovation. AI hardware architectures are highly regular, consisting of large arrays of up to thousands of processing elements (tiles), which leads to billion-plus-gate designs and huge power consumption. For example, the Tesla auto-pilot software stack consumes 72W of power, while the neural network accelerator cons... » read more

Engineering Within Constraints


One of the themes of DAC this year was the next phase of machine learning. It is as if CNNs and RNNs have officially migrated from the research community, and all that is left now is optimization. The academics need something new. Quite correctly, they have identified power as the biggest problem associated with learning and inferencing today, and a large part of that problem is associated with ... » read more

It’s Eternal Spring For AI


The field of Artificial Intelligence (AI) has had many ups and downs, largely due to unrealistic expectations created by everyone involved: researchers, sponsors, developers, and even consumers. The “reemergence” of AI has a lot to do with recent developments in supporting technologies and fields such as sensors, computing at macro and micro scales, communication networks and progre... » read more
