Driving optimal performance with the Ethos-N77 NPU
On-device machine learning (ML) has exploded in popularity.
Smart devices able to make independent decisions, acting on locally generated data, are hailed as the future of compute for consumer devices: on-device processing slashes latency, increases reliability and safety, and boosts privacy and security, all while saving on power and cost.
Although ML in edge devices has come a long way in a short time, with new networks, new algorithms, and different architectures arriving on the scene in recent years, the basic computational requirements of inference engines have remained constant. Since ML is a process of repetition and refinement, aimed at making sense of a vast amount of information and then drawing a conclusion, functional improvements are still largely driven by high throughput and high efficiency.
Repurposing a CPU, GPU, or DSP to implement an inference engine can be an easy way to add ML capabilities to an edge device.
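To make the workload concrete, here is a minimal sketch of what an inference engine actually computes when repurposed onto a general-purpose CPU: repeated matrix multiplies plus cheap elementwise activations, which is why throughput and efficiency dominate. This example is illustrative only; the weights are random placeholders, not a trained model, and a real edge deployment would load a quantized network and dispatch it to dedicated hardware such as an NPU.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise activation: cheap per element, trivially parallel.
    return np.maximum(x, 0.0)

def infer(x, w1, b1, w2, b2):
    # The core loop of most inference engines: layer after layer of
    # matrix multiplies. On a CPU this maps to BLAS; an NPU replaces
    # it with dedicated MAC arrays for higher throughput per watt.
    h = relu(x @ w1 + b1)
    logits = h @ w2 + b2
    return logits.argmax(axis=-1)  # predicted class index

# Placeholder two-layer network: 64 inputs -> 32 hidden -> 10 classes.
w1 = rng.standard_normal((64, 32)); b1 = np.zeros(32)
w2 = rng.standard_normal((32, 10)); b2 = np.zeros(10)
x = rng.standard_normal((1, 64))   # one input sample
print(infer(x, w1, b1, w2, b2))
```

The same computation, expressed as a graph of layers, is what an NPU driver stack compiles and schedules onto fixed-function hardware instead of general-purpose cores.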