What’s Really Behind The Adoption Of eFPGA?


System companies are taking a more proactive role in co-designing their hardware and software roadmaps, so it’s no surprise that they are also driving the adoption of embedded FPGAs (eFPGA). But why, and why has it taken so long? Today, most system companies leverage FPGAs to offload intensive compute workloads from the main processor or to provide broader I/O capability than any packaged ASIC ... » read more

eFPGAs Bring A 10X Advantage In Power And Cost


eFPGA LUTs will out-ship FPGA LUTs in the near future because of the advantages of building reconfigurable logic into the chip: lower cost, lower power, and improved performance. Many systems use FPGAs because they are more efficient than processors for parallel processing and can be programmed with application-specific co-processors or accelerators typically found in da... » read more
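The excerpt’s claim that parallel, programmable logic beats a serial processor on this kind of workload can be made concrete with a back-of-envelope calculation. Everything below (clock rates, issue width, the 128-tap filter) is a hypothetical illustration, not a figure from the post.

```python
# Hypothetical back-of-envelope comparison (not from the post): throughput of a
# sequential processor vs. a parallel, pipelined FPGA/eFPGA datapath running the
# same filter kernel. All parameters are illustrative assumptions.

def processor_throughput(clock_hz: float, ops_per_cycle: int, ops_per_sample: int) -> float:
    """Samples/second for a processor issuing a few ops per cycle, serially."""
    return clock_hz * ops_per_cycle / ops_per_sample

def fpga_throughput(clock_hz: float, parallel_macs: int, ops_per_sample: int) -> float:
    """Samples/second for a pipelined datapath with many MACs working in parallel."""
    return clock_hz * parallel_macs / ops_per_sample

OPS_PER_SAMPLE = 128  # e.g., a 128-tap FIR filter (assumed workload)
cpu   = processor_throughput(clock_hz=1.0e9, ops_per_cycle=2,    ops_per_sample=OPS_PER_SAMPLE)
efpga = fpga_throughput(clock_hz=0.4e9,      parallel_macs=128,  ops_per_sample=OPS_PER_SAMPLE)

print(f"processor: {cpu / 1e6:.1f} Msamples/s")    # ~15.6 Msamples/s
print(f"eFPGA:     {efpga / 1e6:.1f} Msamples/s")  # ~400 Msamples/s, despite a slower clock
```

Even with a clock less than half as fast, the fully parallel datapath in this sketch delivers more than an order of magnitude higher throughput, which is the shape of the argument the post is making.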

Add Security And Supply Chain Trust To Your ASIC Or SoC With eFPGAs


Before Covid-induced supply chain issues affected semiconductor availability and lead times, concerns about counterfeit parts and trusted supply chains were already the subject of many articles and discussions, particularly around critical data centers, communications, public infrastructure, and facilities such as regional power plants and the grid. Today’s semiconductor design and manufacturing is com... » read more

eFPGA Saved Us Millions of Dollars. It Can Do the Same for You


For those of you who follow Flex Logix, you already know that we have an IP business, EFLX eFPGA, and an edge inferencing co-processor chip and board business, InferX. InferX came about because many customers asked if they could run AI/ML algorithms in EFLX. The answer was, and still is, of course you can: EFLX is an FPGA fabric similar to what FPGA chips use. Our co-founder, Cheng Wang, t... » read more

How Inferencing Differs From Training in Machine Learning Applications


Machine learning (ML)-based approaches to system development employ a fundamentally different style of programming than has historically been used in computer science. This approach uses example data to train a model so that the machine learns how to perform a task. ML training is highly iterative, with each new piece of training data generating trillions of operations. The iterative nature of the tr... » read more
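The contrast the post draws can be sketched in a few lines. The tiny linear model, loss, and update rule below are hypothetical stand-ins, not anything from the article: training loops over example data and keeps adjusting weights, while inference is a single forward pass through the already-trained weights.

```python
import numpy as np

# Minimal sketch (hypothetical, not from the article): a one-layer linear model.
# Training iterates over the data set many times, nudging the weights each pass;
# inference is a single forward pass through the frozen, trained weights.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))        # example inputs
y = X @ rng.normal(size=8)           # example targets
w = np.zeros(8)                      # model weights to be learned

# Training: iterative and compute-heavy, repeated over every training example.
for epoch in range(100):
    pred = X @ w                      # forward pass
    grad = X.T @ (pred - y) / len(X)  # gradient of mean-squared error
    w -= 0.1 * grad                   # weight update (the back-propagation step)

# Inference: one forward pass with the trained weights, no updates at all.
new_sample = rng.normal(size=8)
print("prediction:", new_sample @ w)
```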

On-Chip FPGA: The “Other” Compute Resource


When system companies discuss processing requirements for their next-generation products, the discussion invariably leads to one question: what should the processor subsystem look like? Do you upgrade the embedded processors in the current subsystem to the latest and greatest embedded CPU? Do you add more CPUs? Or perhaps add a little diversity with a DSP or GPU? One compute resource tha... » read more

Getting Better Edge Performance & Efficiency From Acceleration-Aware ML Model Design


The advent of machine learning techniques has benefited greatly from the use of acceleration technology such as GPUs, TPUs, and FPGAs. Indeed, without acceleration technology, machine learning would likely have remained the province of academia and would not have had the impact it is having on the world today. Clearly, machine learning has become an important tool for solving... » read more

Make Your SoC Upgradable Like A Tesla


I’ve always been a fan of Tesla. Not for the quick acceleration, nice lines, great handling, or leading the world away from the internal combustion engine. I’m a big fan because they plan products not just for use today, but for the future. In the not-too-distant past, in order to get the latest automotive technology, you’d have to buy a new car. With Tesla, you don’t have to. ... » read more

How Dynamic Hardware Efficiently Solves The Neural Network Complexity Problem


Given the high computational requirements of neural network models, efficient execution is paramount. When performed trillions of times per second, even the tiniest inefficiencies are multiplied into large inefficiencies at the chip and system level. Because AI models continue to expand in complexity and size as they are asked to become more human-like in their (artificial) intelligence, it is c... » read more
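That multiplication effect is easy to make concrete. The numbers below are assumptions for illustration, not figures from the post: a fraction of a picojoule wasted per operation, at trillions of operations per second, adds up to a full watt at the chip level.

```python
# Hypothetical worked example (numbers are assumptions, not from the post):
# a tiny per-operation inefficiency, multiplied by trillions of operations per
# second, shows up as whole watts at the chip and system level.

ops_per_second = 10e12           # 10 TOPS, a plausible edge-inference rate (assumed)
wasted_energy_per_op = 0.1e-12   # 0.1 pJ wasted per operation (assumed)

wasted_power_watts = ops_per_second * wasted_energy_per_op
print(f"wasted power: {wasted_power_watts:.1f} W")  # 1.0 W lost to a "tiny" inefficiency
```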

Integrate FPGAs For A Customizable MCU


MCUs come in a broad range of flavors, meaning you can pick the one with the right performance, feature set, peripherals, memory, and software programmability for your application. So why do many systems also use FPGAs alongside their MCUs? Usually, it’s because there isn’t a “perfect” MCU for the application. MCUs by definition are built to be generic for a wide variety of app... » read more
