NPU Acceleration For Multimodal LLMs


Transformer-based models have rapidly spread from text to speech, vision, and other modalities. This has created challenges for the development of Neural Processing Units (NPUs), which must now efficiently stream weights and propagate activations through long series of attention blocks. Increasingly, NPUs must be able to process models with multiple input modalities with ac...
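
As a concrete (if highly simplified) illustration of what that computation involves, here is a single-head attention block sketched in NumPy. This is not from the original post; the shapes and weight names are illustrative assumptions, but the matrix multiplies it contains are exactly the weight and activation traffic an NPU has to sustain, block after block.

    import numpy as np

    def attention_block(x, Wq, Wk, Wv, Wo):
        # Project activations through the weight matrices (weight traffic).
        Q, K, V = x @ Wq, x @ Wk, x @ Wv
        # Scaled dot-product scores between every pair of tokens.
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        # Softmax over keys (numerically stabilized).
        probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)
        # Weighted sum of values, then the output projection.
        return (probs @ V) @ Wo

    # Illustrative shapes (assumed): 16 tokens, model width 64.
    x = np.random.randn(16, 64)
    Wq, Wk, Wv, Wo = (np.random.randn(64, 64) for _ in range(4))
    y = attention_block(x, Wq, Wk, Wv, Wo)   # y.shape == (16, 64)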

To (B)atch Or Not To (B)atch?


When evaluating benchmark results for AI/ML processing solutions, it helps to remember the famous line from Shakespeare’s Hamlet: “To be, or not to be.” Except in this case the “B” stands for Batched.

Batch size matters

There are two different ways in which a machine learning inference workload can be used in a system. A particular ML graph can be used one time, preced...
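
A hypothetical cost model makes the trade-off concrete. Suppose (purely for illustration; these numbers are assumptions, not from the post) that each inference pass pays a fixed cost to stream weights from memory plus a per-sample compute cost:

    # Assumed costs for a back-of-the-envelope model of one inference pass.
    WEIGHT_LOAD_MS = 8.0   # fetch weights from memory once per pass
    COMPUTE_MS     = 2.0   # compute cost per sample in the batch

    def pass_latency_ms(batch):
        return WEIGHT_LOAD_MS + COMPUTE_MS * batch

    for batch in (1, 4, 16):
        latency = pass_latency_ms(batch)
        throughput = 1000.0 * batch / latency   # samples per second
        print(f"batch={batch:2d}  latency={latency:5.1f} ms  "
              f"throughput={throughput:6.1f} samples/s")
    # batch=1 returns the first result soonest (lowest latency), while
    # batch=16 amortizes the weight load for the highest throughput.

A benchmark quoted at a large batch size can therefore look far faster than the same chip will ever be on a latency-sensitive, one-sample-at-a-time workload.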

Reducing SoC Power With NoCs And Caches


Today’s system-on-chip (SoC) designs face significant challenges in managing and minimizing power consumption while maintaining high performance and scalability. Network-on-chip (NoC) interconnects coupled with innovative cache memories can address these competing requirements.

Traditional NoCs

SoCs consist of IP blocks that need to be connected. Early SoCs used bus-based archi...

Can You Rely Upon Your NPU Vendor To Be Your Customers’ Data Science Team?


The biggest mistake a chip design team can make when evaluating AI acceleration options for a new SoC is to rely entirely on spreadsheets of performance numbers from the NPU vendor, without going through the exercise of porting one or more new machine learning networks themselves using the vendor’s toolsets. Why is this a huge red flag? Most NPU vendors tell prospective customers that (1) the v...

Fantastical Creatures


In my day job I work in the High-Level Synthesis group at Siemens EDA, specifically focusing on algorithm acceleration. But on the weekends, sometimes, I take on the role of amateur cryptozoologist. As many of you know, the main Siemens EDA campus sits in the shadow of Mt. Hood and the Cascade Mountain range. This is prime habitat for Sasquatch, also known as “Bigfoot”. This weekend, ar...

ConvNext Runs 28X Faster Than Fallback


Two months ago in our blog we highlighted the fallacy of using a conventional NPU accelerator paired with a DSP or CPU for “fallback” operations (Fallback Fails Spectacularly, May 2024). In that blog we calculated the expected performance of a system with a DSP needing to perform the new operators found in one of today’s leading ML networks, ConvNext. The result wa...

Intel Vs. Samsung Vs. TSMC


The three leading-edge foundries (Intel, Samsung, and TSMC) have started filling in some key pieces of their roadmaps, adding aggressive delivery dates for future generations of chip technology and setting the stage for significant improvements in performance and faster delivery times for custom designs. Unlike in the past, when a single industry roadmap dictated how to get to the next...

How To Successfully Deploy GenAI On Edge Devices


Generative AI (GenAI) burst onto the scene and into the public’s imagination with the launch of ChatGPT in late 2022. Users were amazed at the natural language processing chatbot’s ability to turn a short text prompt into coherent, humanlike text, including essays, language translations, and code examples. Technology companies, impressed with ChatGPT’s abilities, have started looking ...

Fallback Fails Spectacularly


Conventional AI/ML inference silicon designs employ a dedicated, hardwired matrix engine, typically called an “NPU,” paired with a legacy programmable processor: a CPU, DSP, or GPU. The common theory behind these two-core (or even three-core) architectures is that most of the matrix-heavy machine learning workload runs on the dedicated accelerator for maximum efficienc...
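
The catch in that theory shows up in simple Amdahl’s-law style arithmetic. In the sketch below, the speedup, slowdown, and fallback percentages are illustrative assumptions rather than measured values, but they show how even a small fraction of operators falling back to a slow processor caps the overall gain:

    # Assumed figures for illustration only.
    NPU_SPEEDUP  = 50.0   # NPU speedup on the operators it supports
    DSP_SLOWDOWN = 10.0   # slowdown for unsupported ops run on the DSP

    def overall_speedup(fallback_fraction):
        npu_time = (1.0 - fallback_fraction) / NPU_SPEEDUP
        dsp_time = fallback_fraction * DSP_SLOWDOWN
        return 1.0 / (npu_time + dsp_time)

    for f in (0.0, 0.05, 0.20):
        print(f"{f:4.0%} fallback -> {overall_speedup(f):5.1f}x overall")
    #   0% -> 50.0x,  5% -> 1.9x,  20% -> 0.5x: a sliver of fallback
    # work erases the accelerator's advantage entirely.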

Embrace The New!


The ResNet family of machine learning algorithms was introduced to the AI world in 2015. A slew of variations was rapidly discovered, pushing the accuracy of ResNets close to the 80% threshold at the time (78.57% Top-1 accuracy for ResNet-152 on ImageNet). This state-of-the-art performance, coupled with a rather simple operator structure that was readily amenable to hardware ac...
