Machine Learning On Arm Cortex-M Microcontrollers


Machine learning (ML) algorithms are moving to the IoT edge due to considerations such as latency, power consumption, cost, network bandwidth, reliability, privacy and security. Hence, there is growing interest in developing neural network (NN) solutions for deployment on low-power edge devices such as Arm Cortex-M microcontroller systems. CMSIS-NN is an open-source library of...
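To make the Cortex-M deployment idea concrete, here is a minimal sketch of the q7 (8-bit fixed-point) arithmetic that libraries like CMSIS-NN implement in optimized form. This is not the CMSIS-NN API; the function name and the shift-based requantization are simplified assumptions meant only to illustrate the kind of computation such kernels perform.

```c
#include <stdint.h>

typedef int8_t  q7_t;   /* 8-bit fixed-point value */
typedef int32_t q31_t;  /* wide accumulator */

/* Clamp a wide accumulator back into the q7 range. */
static q7_t saturate_q7(q31_t v)
{
    if (v > 127)  return 127;
    if (v < -128) return -128;
    return (q7_t)v;
}

/* Illustrative fully connected layer:
 * out[r] = saturate((bias[r] + sum_i input[i]*weights[r][i]) >> out_shift)
 * Real libraries use SIMD and careful memory layout; the math is the same.
 */
void fully_connected_q7(const q7_t *input, const q7_t *weights,
                        const q7_t *bias, q7_t *output,
                        int in_dim, int out_dim, int out_shift)
{
    for (int r = 0; r < out_dim; r++) {
        q31_t acc = bias[r];
        const q7_t *w = &weights[r * in_dim];
        for (int i = 0; i < in_dim; i++)
            acc += (q31_t)input[i] * w[i];
        output[r] = saturate_q7(acc >> out_shift);
    }
}
```

Keeping activations and weights in 8 bits is what makes such models fit in the few hundred kilobytes of flash and SRAM typical of a Cortex-M device.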

Neural Network Performance Modeling Software


nnMAX Inference IP is nearing design completion. The nnMAX 1K tile will be available this summer for integration in SoCs, and it can be arrayed to provide whatever inference throughput is desired. The InferX X1 chip will tape out late in Q3 this year using 2x2 nnMAX tiles, for 4K MACs, with 8MB SRAM. The nnMAX Compiler is in development in parallel, and the first release is available now...

Blog Review: April 3


Synopsys' Taylor Armerding contends that as the IoT becomes more ubiquitous, the threat of cyber-physical attacks is rising, with the potential for a domino effect if even simple devices are compromised in large enough quantities. Mentor's Colin Walls considers the move away from programming on bare metal with the rise of drivers and RTOSes, and when it still makes sense to use the old method...

The Automation Of AI


Semiconductor Engineering sat down to discuss the role that EDA has in automating artificial intelligence and machine learning with Doug Letcher, president and CEO of Metrics; Daniel Hansson, CEO of Verifyter; Harry Foster, chief scientist verification for Mentor, a Siemens Business; Larry Melling, product management director for Cadence; Manish Pandey, Synopsys fellow; and Raik Brinkmann, CEO ...

Week In Review: Design, Low Power


Synopsys announced several new products: a new test family, a physical verification solution, and a software library for neural net SoCs. TestMAX, the new family of test products, includes soft error analysis and X-tolerant logic BIST for automotive test and functional safety requirements. TestMAX enables test through functional high-speed interfaces and supports early validation of DFT logi...

Inference Acceleration: Follow The Memory


Much has been written about the computational complexity of inference acceleration: very large matrix multiplies for fully-connected layers and huge numbers of 3x3 convolutions across megapixel images, both of which require many thousands of MACs (multiplier-accumulators) to achieve high throughput for models like ResNet-50 and YOLOv3. The other side of the coin is managing the movement of d...
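As a rough sketch of why these two layer types dominate the MAC budget, the counts follow directly from layer shapes. The helper names and the example shapes below are illustrative assumptions, not figures taken from the article:

```c
#include <stdint.h>

/* Fully connected layer: every output is a dot product over every input,
 * so the MAC count is simply in_features * out_features. */
int64_t fc_macs(int64_t in_features, int64_t out_features)
{
    return in_features * out_features;
}

/* KxK convolution: each output pixel of each output channel needs
 * K*K*C_in multiply-accumulates. */
int64_t conv_macs(int64_t h_out, int64_t w_out,
                  int64_t c_in, int64_t c_out, int64_t k)
{
    return h_out * w_out * c_out * k * k * c_in;
}
```

For example, a single 3x3 convolution over a 224x224 feature map with 64 input and 64 output channels costs 224*224*64*9*64 = 1,849,688,064 MACs, which is why accelerators need thousands of MAC units running in parallel to sustain real-time throughput.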

In-Memory Vs. Near-Memory Computing


New memory-centric chip technologies are emerging that promise to solve the bandwidth bottleneck issues in today’s systems. The idea behind these technologies is to bring the memory closer to the processing tasks to speed up the system. This concept isn’t new, and previous versions of the technology fell short. Moreover, it’s unclear whether the new approaches will live up to their billi...

Use Inference Benchmarks Similar To Your Application


If an inference IP supplier or inference accelerator chip supplier offers a benchmark, it is probably ResNet-50. As a result, it might seem logical to use ResNet-50 to compare inference offerings. If you plan to deploy ResNet-50, it would be; but if your target application model is significantly different from ResNet-50, that comparison could lead you to pick an inference offering that is not best for you. ...

In-Memory Computing Challenges Come Into Focus


For the last several decades, gains in computing performance have come by processing larger volumes of data more quickly and with superior precision. Memory and storage space are measured in gigabytes and terabytes now, not kilobytes and megabytes. Processors operate on 64-bit rather than 8-bit chunks of data. And yet the semiconductor industry’s ability to create and collect high quality ...

Power/Performance Bits: Jan. 29


Neural nets struggle with shape
Cognitive psychologists at the University of California, Los Angeles investigated how deep convolutional neural networks identify objects and found a big difference between the way these networks and humans perceive objects. In the first of a series of experiments, the researchers showed color images of animals and objects that had been altered to have a diffe...
