Author's Latest Posts


Cache Speculation Side-Channels


This whitepaper examines the susceptibility of Arm implementations in light of findings from security researchers, including teams at Google and MIT, on new potential cache timing side-channels that exploit processor speculation. It also outlines possible mitigations for software designed to run on existing Arm processors. » read more

The New Voice Of The Embedded Intelligent Assistant


As intelligent assistants become vital in our daily lives, the technology is taking a big leap forward. Recognition Technologies & Arm have published a white paper that provides technical insight into the architecture and design approach that’s making the gateway a more powerful, efficient place for voice recognition. Some topics covered include: Why knowing who is speaking is i... » read more

Mobile Machine Learning Hardware At Arm


Machine learning is playing an increasingly significant role in emerging mobile application domains such as AR/VR, ADAS, etc. Accordingly, hardware architects have designed customized hardware for machine learning algorithms, especially neural networks, to improve compute efficiency. However, machine learning is typically just one processing stage in complex end-to-end applications, which invol... » read more

Cache Speculation Side-Channels


Cache timing side-channels are a well-understood concept in security research. As such, this whitepaper provides a simple conceptual overview rather than an in-depth explanation. The basic principle behind cache timing side-channels is that the pattern of allocations into the cache, and, in particular, which cache sets have been used for the allocation, can be determined by m... » read more
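The principle in this excerpt can be illustrated with a toy prime+probe model. This is a simplified simulation, not real hardware: a real attack measures access latencies on an actual cache, whereas here a simulated direct-mapped cache merely records which set a victim's secret-dependent access evicts.

```python
# Toy model of a cache timing side-channel (prime+probe), for illustration
# only -- real attacks measure hardware access latencies, not a Python list.

NUM_SETS = 8  # sets in our simulated direct-mapped cache

class ToyCache:
    def __init__(self, num_sets):
        self.num_sets = num_sets
        self.tags = [None] * num_sets  # one cache line per set

    def access(self, addr):
        """Access an address; return True on a hit, False on a miss."""
        s, tag = addr % self.num_sets, addr // self.num_sets
        hit = self.tags[s] == tag
        self.tags[s] = tag  # allocate on miss (evicting the previous line)
        return hit

def attacker_recover_secret(victim):
    cache = ToyCache(NUM_SETS)
    # Prime: fill every set with the attacker's own lines.
    for s in range(NUM_SETS):
        cache.access(NUM_SETS + s)
    victim(cache)  # victim makes one secret-dependent access
    # Probe: a miss on the attacker's own line reveals the evicted set.
    for s in range(NUM_SETS):
        if not cache.access(NUM_SETS + s):
            return s
    return None

secret = 5
leaked = attacker_recover_secret(lambda c: c.access(2 * NUM_SETS + secret))
print(leaked)  # the attacker infers the secret-dependent set: 5
```

The attacker never reads the secret directly; it only observes which of its own cache lines was displaced, which is exactly the allocation-pattern leakage the whitepaper describes.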

Optimizing Machine Learning Workloads On Power-Efficient Devices


Software frameworks for neural networks, such as TensorFlow, PyTorch, and Caffe, have made it easier to use machine learning as an everyday feature, but it can be difficult to run these frameworks in an embedded environment. Limited budgets for power, memory, and computation all add to the challenge. At Arm, we’ve developed Arm NN, an inference engine that makes it easier to target di... » read more

Packing Neural Networks Into End-User Client Devices


Most of today’s neural networks can only run on high-performance servers. There’s a big push to change this and simplify network processing to the point where the algorithms can run on end-user client devices. One approach is to eliminate complexity by replacing floating-point representation with fixed-point representation. We take a different approach, and recommend a mix of the two, so as... » read more
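The fixed-point idea this excerpt mentions can be sketched in a few lines. This is a generic symmetric 8-bit quantization scheme, not the specific mixed-precision approach the post recommends; the scale choice and function names are illustrative.

```python
# Illustrative symmetric 8-bit fixed-point quantization of weights;
# a generic sketch, not the post's actual mixed fixed/float scheme.

def quantize(weights, num_bits=8):
    """Map floats to signed fixed-point integers plus a float scale."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.50, -1.27, 0.003, 0.9]
q, scale = quantize(weights)
restored = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)    # integer codes replace the floats
print(err)  # worst-case rounding error, bounded by scale / 2
```

Small weights near zero lose the most relative precision, which is one reason a mix of fixed- and floating-point representations can outperform fixed-point alone.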


The Power Of Speech


With the widespread use of voice-activated virtual assistants, such as Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and the Google Assistant, voice has become an everyday way to interact with electronics. We’re talking to our devices more than ever, using speech to initiate searches, issue commands, and even make purchases. There are a number of reasons why using your voice to ... » read more

Not All Ops Are Created Equal


Efficient and compact neural network models are essential for enabling deployment on mobile and embedded devices. In this work, we point out that typical design metrics for gauging the efficiency of neural network architectures – total number of operations and parameters – are not sufficient. These metrics may not accurately correlate with the actual deployment metrics such as energy an... » read more
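A back-of-envelope comparison shows why operation counts alone can mislead, as the excerpt argues. The two convolution layer shapes below are illustrative (not taken from the paper): they have identical MAC counts but very different parameter and activation footprints, and memory traffic often dominates energy cost.

```python
# Two conv layers with the same MAC count can differ wildly in memory
# traffic. Layer shapes here are illustrative, not from the paper.

def conv_cost(h, w, cin, cout, k):
    """Return (MACs, params, activation reads+writes) for a k x k conv."""
    macs = h * w * cout * cin * k * k
    params = cout * cin * k * k
    activations = h * w * cin + h * w * cout  # input read + output write
    return macs, params, activations

# A: few channels, large spatial size; B: many channels, small spatial size.
a = conv_cost(h=56, w=56, cin=32, cout=32, k=3)
b = conv_cost(h=7, w=7, cin=256, cout=256, k=3)
print(a)  # (28901376, 9216, 200704)
print(b)  # (28901376, 589824, 25088)
```

Both layers perform about 28.9M MACs, yet layer B holds 64x the parameters while layer A moves 8x the activation data, so a single "total ops" number cannot rank their real deployment cost.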

The Route To A Trillion Devices


Technology vendors like to talk about data being big, really big. Petabytes of storage; gigabits of bandwidth; megaflops of processing power. But data doesn’t have to be big to be valuable. One of the most successful financial trades of all time was premised on a piece of information that could have been represented by a single bit (1 or 0). On June 19, 1815, the bond market in London wa... » read more
