Enhancing Compute Security Architecture For New-Age Applications


New-age AI-powered applications are becoming increasingly essential in our daily lives. For that to continue, these applications and services must meet three primary challenges: achieving high performance for complex compute tasks, ensuring cost-effectiveness and seamless integration with existing infrastructure, and maintaining robust security and privacy measures. Historicall... » read more

Chiplet-Based NPUs to Accelerate Vehicular AI Perception Workloads


A new technical paper titled "Performance Implications of Multi-Chiplet Neural Processing Units on Autonomous Driving Perception" was published by researchers at UC Irvine. Abstract "We study the application of emerging chiplet-based Neural Processing Units to accelerate vehicular AI perception workloads in constrained automotive settings. The motivation stems from how chiplets technology i... » read more

Can You Rely Upon Your NPU Vendor To Be Your Customers’ Data Science Team?


The biggest mistake a chip design team can make in evaluating AI acceleration options for a new SoC is to rely entirely upon spreadsheets of performance numbers from the NPU vendor without going through the exercise of porting one or more new machine learning networks themselves using the vendor toolsets. Why is this a huge red flag? Most NPU vendors tell prospective customers that (1) the v... » read more

A HW-Aware Scalable Exact-Attention Execution Mechanism For GPUs (Microsoft)


A technical paper titled “Lean Attention: Hardware-Aware Scalable Attention Mechanism for the Decode-Phase of Transformers” was published by researchers at Microsoft. Abstract: "Transformer-based models have emerged as one of the most widely used architectures for natural language processing, natural language generation, and image generation. The size of the state-of-the-art models has in... » read more
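For readers less familiar with the decode phase, the baseline computation this kind of work targets can be sketched simply: at each generation step, a single query token attends over the cached keys and values of all previous tokens. The NumPy sketch below is a minimal illustration of exact decode-phase attention under that assumption; the function and variable names are ours, and it does not reproduce the paper's LeanAttention execution scheme.

```python
import numpy as np

def decode_attention(q, k_cache, v_cache):
    """Single-query (decode-phase) attention over a growing KV cache.

    q:        (num_heads, head_dim)          query for the one new token
    k_cache:  (num_heads, seq_len, head_dim) cached keys for prior tokens
    v_cache:  (num_heads, seq_len, head_dim) cached values for prior tokens
    returns:  (num_heads, head_dim)
    """
    head_dim = q.shape[-1]
    # Scaled dot-product scores: one row per head, one column per cached token.
    scores = np.einsum("hd,hsd->hs", q, k_cache) / np.sqrt(head_dim)
    # Numerically stable softmax along the sequence axis.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of cached values.
    return np.einsum("hs,hsd->hd", weights, v_cache)

# Example: 8 heads, 64-dim heads, 1,024 cached tokens.
q = np.random.randn(8, 64).astype(np.float32)
k = np.random.randn(8, 1024, 64).astype(np.float32)
v = np.random.randn(8, 1024, 64).astype(np.float32)
out = decode_attention(q, k, v)  # shape (8, 64)
```

The cost of each decode step grows with the cache length, which is why hardware-aware scheduling of this reduction matters at long sequence lengths.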

High-Level Synthesis Propels Next-Gen AI Accelerators


Everything around you is getting smarter. Artificial intelligence is not just a data center application but will be deployed in all kinds of embedded systems that we interact with daily. We expect to talk to and gesture at them. We expect them to recognize and understand us. And we expect them to operate with just a little bit of common sense. This intelligence is making these systems not just ... » read more

Fallback Fails Spectacularly


Conventional AI/ML inference silicon designs employ a dedicated, hardwired matrix engine (typically called an "NPU") paired with a legacy programmable processor (a CPU, DSP, or GPU). The common theory behind these two-core (or even three-core) architectures is that most of the matrix-heavy machine learning workload runs on the dedicated accelerator for maximum efficienc... » read more
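To make the two-core split concrete, the sketch below shows, in schematic form, how a graph compiler might partition an operator list between the NPU and the fallback processor. The supported-operator set and layer names are purely illustrative assumptions, not any vendor's actual coverage; the point is that a single unsupported operator splits the graph and forces data to cross over to the slower programmable core and back.

```python
# Minimal sketch of the two-core dispatch model described above, assuming a
# hypothetical operator list and NPU coverage set (names are illustrative only).

NPU_SUPPORTED_OPS = {"conv2d", "matmul", "relu"}  # ops the hardwired engine can run

def partition_graph(ops):
    """Split a linear operator list into NPU segments and CPU fallback segments."""
    segments = []
    for op in ops:
        target = "NPU" if op in NPU_SUPPORTED_OPS else "CPU"
        if segments and segments[-1][0] == target:
            segments[-1][1].append(op)
        else:
            segments.append((target, [op]))
    return segments

# A newer network with unsupported operators forces a round trip to the CPU:
model = ["conv2d", "matmul", "layernorm", "softmax", "matmul", "relu"]
for target, ops in partition_graph(model):
    print(target, ops)
# NPU ['conv2d', 'matmul']
# CPU ['layernorm', 'softmax']   <- fallback segment: data crosses to the slow core and back
# NPU ['matmul', 'relu']
```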

Research Bits: April 30


Sound waves in optical neural networks Researchers from the Max Planck Institute for the Science of Light and Massachusetts Institute of Technology found a way to build reconfigurable recurrent operators based on sound waves for photonic machine learning. They used light to create temporary acoustic waves in an optical fiber, which manipulate subsequent computational steps of an optical rec... » read more

Sea Of Processors Use Case


Core counts have been increasing steadily since IBM debuted the POWER4 in 2001, now eclipsing 100 CPU cores and exceeding 1,000 cores in AI accelerators. While sea-of-processors architectures feature a stamp-and-repeat design, per-core workloads aren't always symmetrically balanced. For example, a cloud provider (AI or compute) will rent out individual core clusters to customers for specialize... » read more

A Hypermultiplexed Integrated Tensor Optical Processor (USC, MIT et al.)


A technical paper titled “Hypermultiplexed Integrated Tensor Optical Processor” was published by researchers at the University of Southern California, Massachusetts Institute of Technology (MIT), City University of Hong Kong, and NTT Research. Abstract: "The escalating data volume and complexity resulting from the rapid expansion of artificial intelligence (AI), internet of things (IoT) a... » read more

AI Tradeoffs At The Edge


AI is impacting almost every application area imaginable, but increasingly it is moving from the data center to the edge, where larger amounts of data need to be processed much more quickly than in the past. This has set off a scramble for massive improvements in performance much closer to the source of data, but with a familiar set of caveats — it must use very little power, be affordable... » read more
