Embrace The New!


The ResNet family of machine learning algorithms was introduced to the AI world in 2015. A slew of variations was rapidly discovered that pushed the accuracy of ResNets close to the 80% threshold (78.57% Top-1 accuracy for ResNet-152 on ImageNet). This state-of-the-art performance at the time, coupled with a rather simple operator structure that was readily amenable to hardware ac... » read more
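For readers unfamiliar with that operator structure, here is a minimal sketch of a ResNet basic block (in PyTorch, as an illustrative assumption rather than the original implementation, and omitting the strided/downsampling variants): two 3×3 convolutions, batch normalization, ReLU, and an identity skip connection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Minimal ResNet basic block: conv-bn-relu, conv-bn, plus identity skip."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # the skip connection that defines a residual block

# Example: a 64-channel block applied to a 56x56 feature map
block = BasicBlock(64)
y = block(torch.randn(1, 64, 56, 56))
```

The whole network is little more than stacks of this block, which is why it maps so cleanly onto fixed-function accelerator datapaths.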

BYO NPU Benchmarks


In our last blog post, we highlighted the ways that NPU vendors can shade the truth about performance on benchmark networks, such that comparing common performance scores such as “ResNet50 Inferences / Second” can be a futile exercise. But there is a straightforward, low-investment method for an IP evaluator to short-circuit all the vendor shenanigans and get a solid apples-to-apples result... » read more
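As a concrete illustration of what "low-investment" can mean here, the sketch below (our own assumption, not necessarily the method the post goes on to describe) times a model end to end with ONNX Runtime; the file name model.onnx and the input shape are placeholders for whatever network you actually care about.

```python
import time
import numpy as np
import onnxruntime as ort

# Hypothetical model file and input shape; substitute your own network.
sess = ort.InferenceSession("model.onnx")
inp = sess.get_inputs()[0]
data = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Warm up first so one-time initialization costs don't pollute the score.
for _ in range(10):
    sess.run(None, {inp.name: data})

# Time a fixed number of runs to get an inferences/second figure.
runs = 100
start = time.perf_counter()
for _ in range(runs):
    sess.run(None, {inp.name: data})
elapsed = time.perf_counter() - start
print(f"{runs / elapsed:.1f} inferences/second")
```

Running the same harness, with the same model file, on every candidate platform removes the vendor's opportunity to cherry-pick.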

Neural Network Model Quantization On Mobile


In its most general definition, quantization is the process of mapping continuous infinite values to a smaller set of discrete finite values. In this blog, we will talk about quantization in the context of neural network (NN) models as the process of reducing the precision of the weights, biases, and activations. Moving from floating-point representations to low-precision fixed intege... » read more
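A minimal, framework-free sketch of that mapping (assuming simple symmetric per-tensor quantization; real toolchains add per-channel scales and calibration) shows float32 weights being mapped to int8 and back:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map float32 to int8 with a single scale."""
    scale = np.abs(x).max() / 127.0          # largest magnitude maps to +/-127
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

w = np.random.randn(256).astype(np.float32)  # stand-in for a weight tensor
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max quantization error:", np.abs(w - w_hat).max())
```

Each value now occupies 8 bits instead of 32, at the cost of the rounding error printed at the end; the engineering question is how much of that error a given model can tolerate.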

Object Detection CNN Suitable For Edge Processors With Limited Memory


A technical paper titled “TinyissimoYOLO: A Quantized, Low-Memory Footprint, TinyML Object Detection Network for Low Power Microcontrollers” was published by researchers at ETH Zurich. Abstract: "This paper introduces a highly flexible, quantized, memory-efficient, and ultra-lightweight object detection network, called TinyissimoYOLO. It aims to enable object detection on microcontrol... » read more

Training An ML Model On An Intelligent Edge Device Using Less Than 256KB Memory


A new technical paper titled "On-Device Training Under 256KB Memory" was published by researchers at MIT and MIT-IBM Watson AI Lab. “Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a bro... » read more

Convolutional Neural Networks: Co-Design of Hardware Architecture and Compression Algorithm


Researchers at Soongsil University (Korea) published "A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration." Abstract: "Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction... » read more

Transforming AI Models For Accelerator Chips


AI is all about speeding up the movement and processing of data. Ali Cheraghi, solution architect at Flex Logix, talks about why floating point data needs to be converted into integer data, how that impacts power and performance, and how different approaches to quantization play into this formula. » read more
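One way those "different approaches" diverge is symmetric versus asymmetric mapping. Complementing the symmetric scheme sketched earlier, asymmetric (affine) quantization adds a zero point so that skewed ranges, such as non-negative post-ReLU activations, use all 256 int8 levels. A hedged sketch of the standard affine formula, q = round(x / scale) + zero_point:

```python
import numpy as np

def quantize_affine_int8(x: np.ndarray):
    """Asymmetric (affine) quantization: a scale plus a zero point covers skewed ranges."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0                   # spread the range over 256 int8 levels
    zero_point = int(round(-128 - lo / scale))  # the int8 code that represents 0.0
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# Post-ReLU activations are non-negative, so a symmetric scheme would waste half its codes.
act = np.maximum(np.random.randn(1024).astype(np.float32), 0.0)
q, s, zp = quantize_affine_int8(act)
print("max error:", np.abs(act - dequantize_affine(q, s, zp)).max())
```

The extra zero-point arithmetic costs a little in the integer datapath, which is exactly the kind of power/performance trade-off the talk addresses.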

Customizable FPGA-Based Hardware Accelerator for Standard Convolution Processes Empowered with Quantization Applied to LiDAR Data


Abstract "In recent years there has been an increase in the number of research and developments in deep learning solutions for object detection applied to driverless vehicles. This application benefited from the growing trend felt in innovative perception solutions, such as LiDAR sensors. Currently, this is the preferred device to accomplish those tasks in autonomous vehicles. There is a bro... » read more

Getting Better Edge Performance & Efficiency From Acceleration-Aware ML Model Design


Machine learning techniques have benefited greatly from the use of acceleration technology such as GPUs, TPUs, and FPGAs. Indeed, without acceleration technology, it’s likely that machine learning would have remained the province of academia and not had the impact it is having in our world today. Clearly, machine learning has become an important tool for solving... » read more

Impact Of GAA Transistors At 3/2nm


The chip industry is poised for another change in transistor structure as gate-all-around (GAA) FETs replace finFETs at 3nm and below, creating a new set of challenges that design teams will need to fully understand and address. GAA FETs are considered an evolutionary step from finFETs, but the impact on design flows and tools is still expected to be significant. GAA FETs will offer... » read more
