Powering The Edge

Driving optimal performance with the Ethos-N77 NPU


On-device machine learning (ML) has exploded in popularity.

Smart devices that can make independent decisions, acting on locally generated data, are hailed as the future of compute for consumer devices: on-device processing slashes latency, increases reliability and safety, and boosts privacy and security, all while saving on power and cost.

Although ML in edge devices has come a long way in a short time, with new networks, new algorithms, and different architectures arriving on the scene in recent years, the basic computational requirements of inference engines have remained constant. Since ML is a process of repetition and refinement, aimed at making sense of a vast amount of information and then drawing a conclusion, functional improvements are still largely driven by high throughput and high efficiency.

Repurposing a CPU, GPU or DSP to implement an inference engine can be an easy way to add ML capabilities to an edge device.
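As a rough illustration of why this repurposing is straightforward (this is a generic sketch, not Arm's implementation), the core of an inference engine is dense linear algebra that any general-purpose processor can run. Here is a hypothetical two-layer network's forward pass in NumPy; all names and shapes are made up for the example:

```python
import numpy as np

def relu(x):
    # Rectified linear unit: the element-wise nonlinearity between layers
    return np.maximum(x, 0.0)

def infer(x, w1, b1, w2, b2):
    """Forward pass of a tiny dense network: linear -> ReLU -> linear."""
    h = relu(x @ w1 + b1)
    return h @ w2 + b2

# Random weights stand in for a trained model
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))              # one 8-feature input sample
w1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, 4)), np.zeros(4)

logits = infer(x, w1, b1, w2, b2)
print(logits.shape)  # (1, 4): one output vector of 4 scores
```

Because the workload reduces to matrix multiplies like these, throughput and efficiency, rather than exotic operations, dominate the hardware trade-off, which is where a dedicated NPU earns its keep.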
