Vision Is Why LLMs Matter On The Edge


Large Language Models (LLMs) have taken the world by storm since the 2017 Transformers paper, but pushing them to the edge has proved problematic. Just this year, Google had to revise its plans to roll out Gemini Nano on all new Pixel models: the lower-spec hardware options proved unable to host the model while delivering a positive user experience. But the implementation of language-focused mo... » read more

Considerations For Accelerating On-Device Stable Diffusion Models


One of the more powerful – and visually stunning – advances in generative AI has been the development of Stable Diffusion models. These models are used for image generation, image denoising, inpainting (reconstructing missing regions in an image), outpainting (generating new pixels that seamlessly extend an image's existing bounds), and bit diffusion. Stable Diffusion uses a type of dif... » read more
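As a rough illustration of the denoising process at the heart of these models, below is a minimal sketch of one reverse-diffusion step in the standard DDPM formulation. The function name, variance choice, and schedule values are illustrative assumptions, not taken from any particular Stable Diffusion implementation:

```python
import numpy as np

def ddpm_reverse_step(x_t, eps_pred, alpha_t, alpha_bar_t, z=None):
    """One reverse-diffusion (denoising) step, standard DDPM formulation.

    x_t         -- noisy image at timestep t (numpy array)
    eps_pred    -- the model's noise prediction for x_t
    alpha_t     -- per-step noise-schedule coefficient
    alpha_bar_t -- cumulative product of alphas up to t
    z           -- optional Gaussian noise (omitted at the final step)
    """
    # Posterior mean: remove the predicted noise, then rescale.
    mean = (x_t - (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_t)
    if z is None:
        return mean
    sigma_t = np.sqrt(1.0 - alpha_t)  # simple (illustrative) variance choice
    return mean + sigma_t * z

# Toy usage: a 4x4 "image" with made-up schedule values.
x_t = np.random.randn(4, 4)
eps = np.random.randn(4, 4)
x_prev = ddpm_reverse_step(x_t, eps, alpha_t=0.98, alpha_bar_t=0.5)
print(x_prev.shape)
```

In a real pipeline `eps_pred` comes from a trained U-Net and the schedule values from the model's noise scheduler; for inpainting, the same step is simply restricted by a mask to the missing region.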

Unlocking The Power Of Edge Computing With Large Language Models


In recent years, Large Language Models (LLMs) have revolutionized the field of artificial intelligence, transforming how we interact with devices and the possibilities of what machines can achieve. These models have demonstrated remarkable natural language understanding and generation abilities, making them indispensable for various applications. However, LLMs are incredibly resource-intensi... » read more

Generative AI: Transforming Inference At The Edge


The world is witnessing a revolutionary advancement in artificial intelligence with the emergence of generative AI. Generative AI produces text, images, or other media in response to prompts. The technology is still in its early stages, yet the depth and accuracy of its results are already impressive, and its potential is mind-blowing. Generative AI uses transformers, a class of neural network... » read more

A Packet-Based Architecture For Edge AI Inference


Despite significant improvements in throughput, edge AI accelerators (Neural Processing Units, or NPUs) are still often underutilized. Inefficient management of weights and activations leaves fewer cores available for multiply-accumulate (MAC) operations. Edge AI applications frequently need to run on small, low-power devices, limiting the area and power allocated for memory and comp... » read more
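To make the underutilization point concrete, here is a toy sketch of how MAC-array utilization can be estimated from achieved versus peak throughput. The function and the numbers are hypothetical, not drawn from any specific NPU:

```python
def mac_utilization(achieved_ops_per_s, num_macs, clock_hz, ops_per_mac=2):
    """Fraction of the MAC array's peak throughput actually achieved.

    Each MAC unit conventionally counts as 2 ops/cycle (multiply + add).
    """
    peak_ops_per_s = num_macs * clock_hz * ops_per_mac
    return achieved_ops_per_s / peak_ops_per_s

# Hypothetical example: a 4096-MAC array at 1 GHz has 8.192 TOPS peak.
# If stalls on weight/activation movement hold it to 2 TOPS achieved:
u = mac_utilization(2e12, num_macs=4096, clock_hz=1e9)
print(f"{u:.1%}")  # roughly 24% utilization
```

Low numbers like this are exactly what better scheduling of weights and activations is meant to fix: the silicon is there, but the data does not arrive in time to keep it busy.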

A Buyer's Guide To An NPU


Choosing the right AI inference NPU (Neural Processing Unit) is a critical decision for a chip architect. There is a lot at stake: in a constantly changing AI landscape, these choices affect overall product cost, performance, and long-term viability. There are myriad options regarding system architecture and IP suppliers, and this can be daunting for even the most seasoned semicondu... » read more

An Ideal Always-Sensing Subsystem Architecture


Always-sensing cameras are a relatively new way for users to interact with their smartphones, home appliances, and other consumer devices. Like always-listening, audio-based assistants such as Siri and Alexa, always-sensing cameras enable a seamless, more natural user experience. By continuously sampling and analyzing visual data, always-sensing enables use cases such as: “Find a face” detection for... » read more

Can Compute-In-Memory Bring New Benefits To Artificial Intelligence Inference?


Compute-in-memory (CIM) is not necessarily an Artificial Intelligence (AI) solution; rather, it is a memory management solution. CIM could bring advantages to AI processing by speeding up the multiplication operation at the heart of AI model execution. However, for that to be successful, an AI processing system would need to be explicitly architected to use CIM. The change would entail a shift ... » read more

Looking Beyond TOPS/W: How To Really Compare NPU Performance


There is a lot more to understanding the true capabilities of an AI engine than TOPS per watt. A rather arbitrary measure of an engine's operations per unit of power, the TOPS/W metric completely misses the point that a single operation on one engine may accomplish more useful work than a multitude of operations on another. In any case, TOPS/W is by no means the only spe... » read more
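The argument above can be illustrated with a toy comparison. Measured by inferences per joule on the same network, an engine with lower TOPS/W can still come out ahead if each of its operations accomplishes more useful work; all names and numbers below are hypothetical:

```python
def inferences_per_joule(ops_per_inference, tops, watts):
    """Energy efficiency in terms of useful work (inferences), not raw ops."""
    inferences_per_s = (tops * 1e12) / ops_per_inference
    return inferences_per_s / watts

# Engine A: 5 TOPS/W on paper, but its op mix needs 30 Gops per inference.
a = inferences_per_joule(ops_per_inference=30e9, tops=10, watts=2)
# Engine B: only 3 TOPS/W, but each inference needs just 10 Gops.
b = inferences_per_joule(ops_per_inference=10e9, tops=6, watts=2)
print(f"A: {a:.0f} inf/J, B: {b:.0f} inf/J")  # prints "A: 167 inf/J, B: 300 inf/J"
```

Here the "slower" engine B delivers nearly twice the inferences per joule, which is why workload-level metrics (inferences/sec, inferences/sec/W on a named model) compare NPUs more honestly than TOPS/W does.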