Memory Subsystems In Edge Inferencing Chips


Geoff Tate, CEO of Flex Logix, talks about key issues in a memory subsystem in an inferencing chip, how factors like heat can affect performance, and where these kinds of chips will be used.

eFPGA Macros Deliver Higher Speeds from Less Area/Resources


We work with a lot of customers designing eFPGA into their SoCs. Most of them have “random logic” RTL, but some customers have large numbers of complex, frequently used blocks. We have found that in many cases we can help the customer achieve higher throughput AND use less silicon area with Soft Macros. Let’s look at an example: 64x64 Multiply-Accumulate (MAC), below: If yo...
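The multiply-accumulate primitive referenced above is simple to state in code. A minimal sketch (illustrative only; the figures and function name are not from the article, and a hardware 64x64 MAC would operate on 64-bit operands with a wide accumulator):

```python
def mac(acc: int, a: int, b: int) -> int:
    """One multiply-accumulate step: acc += a * b."""
    return acc + a * b

# Accumulate a small dot product, the pattern a MAC block implements in hardware.
acc = 0
for a, b in [(3, 4), (5, 6)]:
    acc = mac(acc, a, b)
print(acc)  # 3*4 + 5*6 = 42
```

In an FPGA or soft macro, this loop body maps to a dedicated multiplier feeding an adder/accumulator, so one MAC completes per clock cycle rather than per loop iteration.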

How To Improve ML Power/Performance


Raymond Nijssen, vice president and chief technologist at Achronix, talks about the shift from brute-force performance to more power efficiency in machine learning processing, the new focus on enough memory bandwidth to keep MAC functions busy, and how dynamic range, precision and locality can be modified to improve speed and reduce power.

Week in Review: IoT, Security, Auto


Products/Services Arteris IP reports that Bitmain licensed the Arteris Ncore Cache Coherent Interconnect intellectual property for use in its next-generation Sophon Tensor Processing Unit system-on-a-chip devices for the scalable hardware acceleration of artificial intelligence and machine learning algorithms. “Our choice of interconnect IP became more important as we continued to increase t...

Using Analog For AI


If the only tool you have is a hammer, everything looks like a nail. But development of artificial intelligence (AI) applications and the compute platforms for them may be overlooking an alternative technology—analog. The semiconductor industry has a firm understanding of digital electronics and has been very successful making it scale. It is predictable, has good yield, and while every de...

In-Memory Vs. Near-Memory Computing


New memory-centric chip technologies are emerging that promise to solve the bandwidth bottleneck issues in today’s systems. The idea behind these technologies is to bring the memory closer to the processing tasks to speed up the system. This concept isn’t new, and previous versions of the technology fell short. Moreover, it’s unclear if the new approaches will live up to their billi...

Lies, Damn Lies, And TOPS/Watt


There are almost a dozen vendors promoting inferencing IP, but none of them gives even a ResNet-50 benchmark. Typically, the only figures they state are TOPS (Tera-Operations/Second) and TOPS/Watt. These two indicators of performance and power efficiency are almost useless by themselves. So what, exactly, does X TOPS really tell you about performance for your application? When a vendor ...
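One reason a headline TOPS figure says so little is that it is usually just peak arithmetic derived from the MAC count and clock, not measured throughput. A hedged sketch of that derivation (the convention of counting each MAC as two operations is common but not universal, and the example figures are illustrative, not from the article):

```python
def peak_tops(num_macs: int, clock_ghz: float) -> float:
    """Peak Tera-Operations/Second from MAC count and clock frequency.

    Assumes the common convention of 2 ops per MAC (one multiply + one add).
    """
    ops_per_cycle = num_macs * 2
    giga_ops = ops_per_cycle * clock_ghz   # ops/cycle * Gcycles/s = Gops/s
    return giga_ops / 1e3                  # Gops/s -> Tops/s

# e.g. 4,096 MACs at 1 GHz -> 8.192 peak TOPS
print(peak_tops(4096, 1.0))
```

The number says nothing about achievable utilization: if the memory subsystem can only keep the MACs busy 25% of the time on a real model, delivered throughput is a quarter of the headline figure, which is exactly why a real benchmark like ResNet-50 matters.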

Inferencing In Hardware


Cheng Wang, senior vice president of engineering at Flex Logix, examines shifting neural network models, how many multiply-accumulates are needed for different applications, and why programmable neural inferencing will be required for years to come. https://youtu.be/jb7qYU2nhoo

AI, ML Chip Choices


Flex Logix’s Cheng Wang talks about which types of chips work best for neural networks, AI and machine learning. https://youtu.be/k7OdP7B10o8

IoT Wireless Battles Ahead


"The good thing about standards is that there are so many to choose from." – Andrew S. Tanenbaum The extended version of that quote adds "furthermore, if you do not like any of them, you can just wait for next year's model." That could not be truer when it comes to IoT and wireless connectivity. Every standards group is rushing to create new versions of existing standards that use less p...
