Author's Latest Posts


Integrating 16nm FPGA Into 28/22nm SoC Without Losing Speed Or Flexibility


Systems companies like FPGAs because they deliver parallel-processing performance that can outdo processors on many workloads, and because they can be reconfigured when standards, algorithms, protocols or customer requirements change. But FPGAs are big, burn a lot of power and are expensive, so customers would like to integrate them into the adjacent SoC where possible. Dozens of customers are now u... » read more

Edge Inference Applications And Market Segmentation


Until recently, most AI ran in the data center/cloud, and most of that was training. Things are changing quickly. Projections are that AI sales will grow rapidly to tens of billions of dollars by the mid-2020s, with most of the growth in edge AI inference. Data center/cloud vs. edge inference: what’s the difference? The data center/cloud is where inference started, on Xeons. To gain efficiency, much ... » read more

ResNet-50 Does Not Predict Inference Throughput For MegaPixel Neural Network Models


Customers are considering applications for AI inference and want to evaluate multiple inference accelerators. As we discussed last month, TOPS do NOT correlate with inference throughput, and you should use real neural network models to benchmark accelerators. So is ResNet-50 a good benchmark for evaluating the relative performance of inference accelerators? If your application is going to p... » read more
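A back-of-envelope calculation shows why input size matters so much here. The short Python sketch below compares the size of one early-layer activation map at ResNet-50’s 224x224 input resolution versus a 2-megapixel input; the 64-channel, stride-2 first stage matches ResNet-50’s first convolution, but the numbers are illustrative rather than measurements of any particular accelerator.

    # Why ResNet-50 results don't transfer to megapixel models: layer
    # activations scale with image area, so a working set that fits in
    # on-chip SRAM at 224x224 can spill to DRAM at 2MP.

    def activations_mb(width, height, channels, bytes_per_value=1):
        """Size of one feature map in megabytes (INT8 by default)."""
        return width * height * channels * bytes_per_value / 1e6

    # First-stage feature map (stride 2, 64 channels):
    resnet_fm = activations_mb(112, 112, 64)   # 224x224 input   -> ~0.8 MB
    mpixel_fm = activations_mb(960, 540, 64)   # 1920x1080 input -> ~33 MB

    print(f"{resnet_fm:.1f} MB vs {mpixel_fm:.1f} MB "
          f"({mpixel_fm / resnet_fm:.0f}x larger)")

An accelerator whose on-chip memory comfortably holds ResNet-50’s activations may be constantly spilling to DRAM at megapixel resolutions, which is exactly where ResNet-50 benchmark numbers stop predicting throughput.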

One More Time: TOPS Do Not Predict Inference Throughput


Many times you’ll hear vendors talk about how many TOPS their chip has, implying that more TOPS means better inference performance. If you use TOPS to pick your AI inference chip, you will likely not be happy with what you get. Recently, Vivienne Sze, a professor at MIT, gave an excellent talk entitled “How to Evaluate Efficient Deep Neural Network Approaches.” Slides are also av... » read more
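To make the point concrete, here is a minimal sketch of why two chips with identical peak TOPS can differ several-fold in delivered throughput. The utilization figures are hypothetical; the ~7.7 billion operations per ResNet-50 inference is the commonly cited figure at 224x224, counting a MAC as two operations.

    # Same peak TOPS, very different throughput: what matters is how
    # many of the MACs the chip actually keeps busy on a real model.

    def throughput_fps(peak_tops, utilization, ops_per_inference):
        """Inferences/second = usable ops/second / ops per inference."""
        return peak_tops * 1e12 * utilization / ops_per_inference

    RESNET50_OPS = 7.7e9   # ~7.7 GOPs per inference (2 ops per MAC)

    chip_a = throughput_fps(peak_tops=16, utilization=0.60,
                            ops_per_inference=RESNET50_OPS)
    chip_b = throughput_fps(peak_tops=16, utilization=0.15,
                            ops_per_inference=RESNET50_OPS)

    print(f"Chip A: {chip_a:,.0f} fps")   # ~1,250 fps
    print(f"Chip B: {chip_b:,.0f} fps")   # ~310 fps, same 16 TOPS

The spec sheet shows the same 16 TOPS for both chips; only a benchmark on a real model reveals the 4x difference.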

Apples, Oranges & The Optimal AI Inference Accelerator


There is a wide range of AI inference accelerators available and a wide range of applications for them. No AI inference accelerator will be optimal for every application. For example, a data-center-class accelerator will almost certainly be too big, burn too much power, and cost too much for most edge applications. And an accelerator optimal for keyword recognition won’t have the capabil... » read more

Integrating FPGA: Comparison Of Chiplets Vs. eFPGA


FPGA is widely popular in systems for its flexibility and adaptability, and it is increasingly being used in high-volume applications. As volumes grow, system designers can consider integrating the FPGA into an SoC to reduce cost, reduce power and/or improve performance. There are two options for integrating FPGA into an SoC: FPGA chiplets, which replace the power-hungry SERDES/PHYs wit... » read more

eFPGA As Fast And Dense As FPGA, On Any Process Node


A challenge for eFPGA when we started Flex Logix was that there are many customers and applications, and they all seemed to want eFPGA on different foundries, on different nodes and in different array sizes. And everyone wanted the eFPGA to be as fast and as dense as the FPGA leaders’ offerings on the same node. Oh, and customers seem to wait until the last minute, then need the eFPGA ASAP. Xilinx and Altera (Intel ... » read more

Increasing eFPGA Adoption Will Shape eFPGA Features/Benefits


eFPGA adoption is accelerating. eFPGA is now available from multiple suppliers for multiple foundries, on nodes including 180nm, 40nm, 28nm, 22nm, 16nm, 12nm and 7nm. A double-digit number of chips have been proven in silicon by multiple customers for multiple applications, and many more are in fab, in design and in planning. The three main applications are: Integration of existing FPGA chips int... » read more

AI Inference: Pools Vs. Streams


Deep learning and AI inference originated in the data center, and that is where inference was first deployed in practical, volume applications. Only recently has inference begun to spread to edge applications (anywhere outside of the data center). In the data center, much of the data to be processed is a “pool” of data. For example, when you see your photo album tagged with all of the pictures ... » read more
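A minimal sketch of the distinction, with run_model as a hypothetical stand-in for any inference call (not a real API): a pool can be grouped into large batches for throughput, while a stream arrives one item at a time and effectively forces batch size 1, making latency the metric that matters.

    def run_model(batch):
        """Placeholder inference call: one result per input."""
        return [f"result({x})" for x in batch]

    # Pool: a photo album already sitting in storage. Inputs can be
    # batched freely to maximize throughput.
    photo_album = [f"photo_{i}" for i in range(10_000)]
    BATCH = 128
    for i in range(0, len(photo_album), BATCH):
        run_model(photo_album[i:i + BATCH])

    # Stream: frames arriving from a camera must each be processed as
    # they arrive, so the accelerator runs at batch size 1.
    def camera_frames():
        while True:
            yield "next_frame"

    for frame in camera_frames():
        result = run_model([frame])   # batch size = 1
        break   # one iteration shown for illustration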

Software Is At Least As Important As Hardware For Inference Accelerators


In articles and conference presentations on inference accelerators, the focus is primarily on TOPS (frequency times the number of MACs), a little on memory (DRAM interfaces and on-chip SRAM), very little on interconnect (also very important, but that’s another story) and almost nothing on the software! Without software, the inference accelerator is a rock that does nothing. Software is wha... » read more
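The parenthetical definition above is worth spelling out, because it shows how little the headline number depends on anything the software does. A short sketch with illustrative hardware parameters (not any specific chip):

    # TOPS is pure arithmetic on the datasheet:
    mac_units = 4096       # multiply-accumulate units
    frequency_hz = 1.0e9   # 1 GHz clock
    OPS_PER_MAC = 2        # one multiply + one add per cycle

    peak_tops = mac_units * frequency_hz * OPS_PER_MAC / 1e12
    print(f"Peak: {peak_tops:.1f} TOPS")   # 8.2 TOPS

    # What fraction of that peak a real model actually achieves is
    # decided by the software: the compiler and scheduler that map
    # layers onto the MACs, memory and interconnect.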
