
ETH Zurich: PIM (Processing In Memory) Architecture, UPMEM & PrIM Benchmarks


A new technical paper titled "Benchmarking a New Paradigm: An Experimental Analysis of a Real Processing-in-Memory Architecture," led by researchers at ETH Zurich. The researchers provide a comprehensive analysis of the first publicly available real-world PIM architecture, UPMEM, and introduce PrIM (Processing-In-Memory benchmarks), a benchmark suite of 16 workloads from different application domains... » read more
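The core intuition behind such an analysis is that PIM pays off for memory-bound kernels, where arithmetic intensity (operations per byte moved) is low. Below is a minimal roofline-style sketch of that reasoning; all hardware numbers are invented placeholders, not measured UPMEM or CPU figures:

```python
# Back-of-envelope roofline comparison: when does a memory-bound kernel
# favor PIM over a host CPU? All numbers are illustrative placeholders.

def attainable_gflops(peak_gflops, mem_bw_gbs, arith_intensity):
    """Roofline model: min(compute roof, bandwidth * ops-per-byte)."""
    return min(peak_gflops, mem_bw_gbs * arith_intensity)

cpu = {"peak_gflops": 1000.0, "mem_bw_gbs": 100.0}   # placeholder host CPU
pim = {"peak_gflops": 50.0,   "mem_bw_gbs": 2000.0}  # placeholder PIM aggregate

for ai in (0.25, 1.0, 16.0):  # arithmetic intensity, ops per byte
    cpu_perf = attainable_gflops(cpu["peak_gflops"], cpu["mem_bw_gbs"], ai)
    pim_perf = attainable_gflops(pim["peak_gflops"], pim["mem_bw_gbs"], ai)
    print(f"AI={ai:>5}: CPU {cpu_perf:7.1f} GFLOP/s, PIM {pim_perf:7.1f} GFLOP/s")
```

At low arithmetic intensity the bandwidth-rich PIM side wins; at high intensity the CPU's compute roof dominates. That trade-off is exactly what a workload suite like PrIM is designed to expose.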

Why Comparing Processors Is So Difficult


Every new processor claims to be the fastest, the cheapest, or the most power-frugal, but how those claims are measured, and the supporting information behind them, can range from very useful to irrelevant. The chip industry is struggling far more than in the past to provide informative metrics. Twenty years ago, it was relatively easy to measure processor performance. It was a combination of the rate at ... » read more
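The old recipe the excerpt alludes to is the classic "iron law" of processor performance: execution time = instruction count × cycles per instruction ÷ clock frequency. A toy illustration with made-up numbers for two hypothetical chips:

```python
# "Iron law" of processor performance: time = instructions * CPI / frequency.
# The workload size and per-chip numbers here are invented for illustration.

def exec_time_s(instruction_count, cpi, clock_hz):
    return instruction_count * cpi / clock_hz

prog = 2e9  # dynamic instruction count for some benchmark (made up)

a = exec_time_s(prog, cpi=1.2, clock_hz=3.0e9)  # higher clock, worse CPI
b = exec_time_s(prog, cpi=0.8, clock_hz=2.5e9)  # lower clock, better CPI

print(f"Chip A: {a*1e3:.1f} ms, Chip B: {b*1e3:.1f} ms, speedup B/A: {a/b:.2f}x")
```

The slower-clocked chip wins here, which is precisely why headline clock rates stopped being an honest proxy for performance.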

Is Programmable Overhead Worth The Cost?


Programmability has fueled the growth of most semiconductor products, but how much does it actually cost? And is that cost worth it? The answer is more complicated than a simple efficiency formula. It can vary by application, by maturity of technology in a particular market, and in the context of much larger systems. What's considered important for one design may be very different for another... » read more
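One reason a simple efficiency formula fails is that programmability amortizes one design cost across many uses. A toy amortization model, with every number a made-up placeholder, just to show the shape of the trade-off:

```python
# Toy trade-off model: a fixed-function block is cheaper per unit, but each
# product variant needs its own design (NRE); one programmable chip spreads
# a single NRE across all variants. All numbers are made-up placeholders.

def total_cost(nre_usd, unit_cost_usd, units, variants, nre_per_variant=True):
    nre = nre_usd * (variants if nre_per_variant else 1)
    return nre + unit_cost_usd * units * variants

fixed = total_cost(nre_usd=5e6, unit_cost_usd=8.0, units=100_000, variants=3)
prog  = total_cost(5e6, 12.0, 100_000, 3, nre_per_variant=False)

print(f"fixed-function total: ${fixed/1e6:.1f}M, programmable: ${prog/1e6:.1f}M")
```

Flip the volumes or the number of variants and the answer flips too, which is the article's point: the worth of programmability depends on the application and the market.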

How To Measure ML Model Accuracy


Machine learning (ML) is about making predictions about new data based on old data. The quality of any machine-learning algorithm is ultimately determined by the quality of those predictions. However, there is no one universal way to measure that quality across all ML applications, and that has broad implications for the value and usefulness of machine learning. “Every industry, every d... » read more
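As a concrete instance of why prediction quality is not one number: the same toy classifier output can look good on accuracy yet mediocre on precision, recall, and F1, and which metric matters depends on the application. A self-contained sketch on invented data:

```python
# Three common classification metrics computed on the same toy predictions.
# Which one is "the" quality measure depends entirely on the application.

def prf(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]   # one miss, one false alarm
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
p, r, f1 = prf(y_true, y_pred)
print(f"accuracy={accuracy:.2f} precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```

Here accuracy is 0.80 while precision and recall are both 0.67; a fraud detector and a spam filter would read those numbers very differently.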

Customized Micro-Benchmarks For HW/SW Performance


Raw performance used to be the main focus of benchmarks, but generic benchmarks may have outlived their usefulness for many applications. Dana McCarty, vice president of sales and marketing for AI Inference Products at Flex Logix, talks about why companies need to develop and utilize their own specific models to accurately gauge hardware and software performance, which can be slowed by bottlenecks in I/O and... » read more
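In the spirit of "benchmark your own model," a minimal timing harness looks something like the sketch below; `run_inference` is a hypothetical stand-in for whatever model you actually deploy:

```python
# Minimal micro-benchmark harness: warm up, then wall-clock repeated runs
# of a callable and report throughput. `run_inference` is a hypothetical
# placeholder for the user's own model.
import time

def benchmark(run_inference, batch_size=1, warmup=10, iters=100):
    for _ in range(warmup):          # warm caches, JITs, power states
        run_inference(batch_size)
    t0 = time.perf_counter()
    for _ in range(iters):
        run_inference(batch_size)
    elapsed = time.perf_counter() - t0
    return iters * batch_size / elapsed   # inferences per second

fake_model = lambda batch: time.sleep(0.002 * batch)  # placeholder workload
print(f"{benchmark(fake_model, batch_size=4):.1f} inferences/s")
```

The warm-up loop matters: first-run numbers fold in one-time costs (weight loading, compilation, I/O) that steady-state throughput figures quietly exclude.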

The Best AI Edge Inference Benchmark


When evaluating the performance of an AI accelerator, there’s a range of methodologies available to you. In this article, we’ll discuss some of the different ways to structure your benchmark research before moving forward with an evaluation that directly runs your own model. Just like when buying a car, research will only get you so far before you need to get behind the wheel and give your ... » read more
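Part of that "get behind the wheel" evaluation is measuring tail latency, not just average throughput, since edge workloads are often latency-bound. A small sketch that collects per-request latencies and reports percentiles; `model` is a hypothetical placeholder:

```python
# Latency-percentile profiling: edge inference often cares about p99,
# not the mean. `model` stands in for a real inference call.
import time, random

def latency_profile(model, requests=200):
    samples = []
    for _ in range(requests):
        t0 = time.perf_counter()
        model()
        samples.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    samples.sort()
    p50 = samples[len(samples) // 2]
    p99 = samples[int(len(samples) * 0.99) - 1]
    return p50, p99

model = lambda: time.sleep(random.uniform(0.001, 0.004))  # stand-in workload
p50, p99 = latency_profile(model)
print(f"p50={p50:.2f} ms  p99={p99:.2f} ms")
```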

The Problem With Benchmarks


Benchmarks long have been used to compare products, but what makes a good benchmark and who should be trusted with their creation? The answer to those questions is more difficult than it may appear on the surface, and some benchmarks are being used in surprising ways. Everyone loves a simple, clear benchmark, but that is only possible when the selection criteria are equally simple. Unfortunately... » read more

Edge-Inference Architectures Proliferate


First of two parts; the second part will dive into basic architectural characteristics. The last year has seen a vast array of announcements of new machine-learning (ML) architectures for edge inference. Unburdened by the need to support training, but tasked with low latency, the devices exhibit extremely varied approaches to ML inference. “Architecture is changing both in the comp... » read more

Standard Benchmarks For AI Innovation


There is no standard measurement for machine learning performance today, meaning there is no single answer for how companies build a processor for ML across all use cases while balancing compute and memory constraints. For the longest time, every group would pick a definition and test to suit their own needs. This lack of common understanding of performance hinders customers' buying decisions... » read more
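The compute-versus-memory balance the excerpt mentions can be framed as a simple bound check: given a model's operation count and bytes moved per inference, which resource limits a given chip? A sketch with illustrative, non-vendor numbers:

```python
# Is a model compute-bound or memory-bound on a given chip? Lower-bound
# each resource's time and take the max. All numbers are illustrative.

def bound(ops, bytes_moved, peak_tops, dram_gbs):
    t_compute = ops / (peak_tops * 1e12)      # seconds at peak compute
    t_memory = bytes_moved / (dram_gbs * 1e9) # seconds at peak bandwidth
    return ("compute" if t_compute > t_memory else "memory",
            max(t_compute, t_memory))

which, t = bound(ops=8e9, bytes_moved=6e8, peak_tops=4, dram_gbs=25)
print(f"{which}-bound, >= {t*1e3:.2f} ms per inference")
```

Two chips with identical peak TOPS but different memory systems land in different regimes here, which is one reason a single standard number has been so hard to agree on.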

ResNet-50 Does Not Predict Inference Throughput For MegaPixel Neural Network Models


Customers are considering applications for AI inference and want to evaluate multiple inference accelerators. As we discussed last month, TOPS do NOT correlate with inference throughput, and you should use real neural network models to benchmark accelerators. So is ResNet-50 a good benchmark for evaluating relative performance of inference accelerators? If your application is going to p... » read more
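A rough way to see why ResNet-50 at 224×224 can mislead for megapixel workloads: activation footprints scale with input resolution, so a model whose feature maps fit in on-chip SRAM at 224×224 may spill to DRAM at 1080p and become bandwidth-bound. The sketch below uses a hypothetical 8 MB SRAM and a single early feature map as a stand-in; the numbers are illustrative only:

```python
# Activation footprint vs. on-chip SRAM at two input resolutions.
# Channel count, precision, and SRAM size are illustrative assumptions.

def act_bytes(h, w, channels=64, bytes_per=1):   # one early feature map, INT8
    return h * w * channels * bytes_per

sram_bytes = 8 * 2**20                           # hypothetical 8 MB on-chip SRAM
for name, (h, w) in {"ResNet-50 input": (224, 224),
                     "1080p detector": (1080, 1920)}.items():
    a = act_bytes(h, w)
    fits = "fits in" if a <= sram_bytes else "spills past"
    print(f"{name}: {a/2**20:.1f} MB activations, {fits} {sram_bytes/2**20:.0f} MB SRAM")
```

At 224×224 the feature map is about 3 MB and stays on-chip; at 1080p it is over 100 MB, so the accelerator's DRAM bandwidth, not its TOPS, sets the throughput.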
