Author's Latest Posts


eFPGA Gives You FPGA Speed And Density At Much Less Cost And Power


FPGAs are everywhere, in all types of systems, because of their flexibility and quick time to market. As your volumes grow and you consider an ASIC to cut cost and power, you can now incorporate an embedded FPGA to retain flexibility in the parts of your chip that need to adapt to changing standards, improving algorithms and customer optimizations. If you are an SoC designer, you c... » read more

eFPGA Architectural Improvements That Lower Test Cost And Increase Quality


More than 40 chips have been licensed to use EFLX eFPGA and more than 20 chips are working in silicon. Big customers like Renesas are planning high-volume families of chips using embedded FPGA. As a result, we have gained extensive experience and knowledge over almost 10 years of doing eFPGA, especially in production test for cost reduction and reliability improvement. eFPGA DFT and MBIST for high q... » read more

Use Cases And Value Proposition Of eFPGA


Flex Logix EFLX eFPGA is the first eFPGA that enables a customer to match the performance of FPGAs from AMD/Xilinx and Intel (in the same process node) at the same density (LUTs/mm²). EFLX eFPGA, hardware and software, has been in use with customers for more than 5 years. More than 40 chips have been licensed to use EFLX eFPGA and more than 20 chips are working in silicon. Big customers... » read more

The Next Generation Of Embedded FPGA


EFLX eFPGA, hardware and software, has been in use in SoCs for more than 5 years. More than 40 chips have been licensed to use EFLX eFPGA and more than 20 chips are working in silicon. Big customers like Renesas are planning high-volume families of chips using eFPGA. As we have worked with customers, our architecture has evolved from EFLX Gen 1.0 to Gen 2.0, 2.1, 2.2, 2.3 and now in 2023 ... » read more

Connect To Any Chip With Programmable GPIO


Your MCU/SoC today may have several options for GPIO connections: UART, SPI, I2C. But there are dozens of variations and kinds of GPIO interface protocols, and you don’t have enough pins to provide all of them as hardwired options. As a result, a significant number of your customers either can’t use your chip because they need to connect to another chip with a GPIO interface you don’t support, ... » read more

The Importance Of Metal Stack Compatibility For Semi IP


Every foundry and every node is different, but for every foundry/node there are multiple supported metal stacks. Some chips use a lot more metal layers than others. A common rule of thumb is that each additional metal layer increases wafer cost by about 10%, so a chip with 5 more metal layers than another will cost 50%+ more. The most complex, high-performance chips, including performance FPGAs, typically use AL... » read more
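
As a rough illustration of that rule of thumb, here is a minimal sketch. The ~10%-per-layer figure is an approximation, and whether the increase adds or compounds per layer is a modeling assumption on my part, not something the article specifies; either way, 5 extra layers lands in the 50%+ range:

```python
# Rough wafer-cost impact of extra metal layers, assuming the ~10%-per-layer
# rule of thumb quoted above. Additive and compounding estimates are both
# shown; real foundry pricing varies by node and metal stack.
def extra_cost_additive(extra_layers, per_layer=0.10):
    return extra_layers * per_layer

def extra_cost_compounding(extra_layers, per_layer=0.10):
    return (1 + per_layer) ** extra_layers - 1

print(f"5 extra layers, additive:    +{extra_cost_additive(5):.0%}")     # +50%
print(f"5 extra layers, compounding: +{extra_cost_compounding(5):.0%}")  # +61%
```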

Micro FPGAs And Embedded FPGAs


When people hear “FPGA” they think “big, expensive, power hungry.” But it doesn’t need to be that way. Renesas has announced their Forge FPGA family; details are at their website and in one of the many articles that covered their press release. Forge FPGAs show that FPGAs don’t have to be big, power hungry, and expensive. They are tiny, draw standby current measure... » read more

Integrating 16nm FPGA Into 28/22nm SoC Without Losing Speed Or Flexibility


Systems companies like FPGAs because they deliver parallel-processing performance that can outdo processors for many workloads, and because they can be reconfigured when standards, algorithms, protocols or customer requirements change. But FPGAs are big, burn a lot of power and are expensive, so customers would like to integrate them into their adjacent SoC if possible. Dozens of customers are now u... » read more

Edge Inference Applications And Market Segmentation


Until recently, most AI was in data centers/cloud, and most of that was training. Things are changing quickly. Projections are that AI sales will grow rapidly to tens of billions of dollars by the mid-2020s, with most of the growth in edge AI inference. Data center/cloud vs. edge inference: what’s the difference? The data center/cloud is where inference started, on Xeons. To gain efficiency, much ... » read more

ResNet-50 Does Not Predict Inference Throughput For MegaPixel Neural Network Models


Customers are considering applications for AI inference and want to evaluate multiple inference accelerators. As we discussed last month, TOPS do NOT correlate with inference throughput, and you should use real neural network models to benchmark accelerators. So is ResNet-50 a good benchmark for evaluating the relative performance of inference accelerators? If your application is going to p... » read more
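
The title points at the likely reason: ResNet-50 runs on small 224×224 images, while megapixel models process far larger inputs. As a hedged back-of-envelope sketch (the layer sizes and resolutions below are illustrative assumptions, not figures from the article), convolutional compute and intermediate-activation sizes grow roughly with pixel count, so a benchmark at 224×224 can hide the memory-bandwidth behavior that dominates at megapixel resolution:

```python
# Back-of-envelope scaling sketch (illustrative assumptions, not article data):
# activation map sizes in a convolutional network grow roughly with input
# pixel count, so results at ResNet-50's 224x224 input may not predict
# throughput on megapixel images (e.g. 1920x1080).
resnet50_pixels = 224 * 224
megapixel_pixels = 1920 * 1080

scale = megapixel_pixels / resnet50_pixels
print(f"Pixel-count ratio: {scale:.1f}x")  # ~41x more pixels per image

# If an intermediate activation map at 224x224 fits in on-chip SRAM, the same
# layer at 1920x1080 may be ~41x larger and spill to DRAM, making throughput
# bandwidth-limited rather than TOPS-limited.
act_224 = 56 * 56 * 256 * 1  # hypothetical layer: 56x56x256 activations, 1 byte each
act_hd = int(act_224 * scale)
print(f"Example activation: {act_224 / 1e6:.1f} MB at 224x224 vs "
      f"{act_hd / 1e6:.1f} MB at 1920x1080")
```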
