How to stay productive while driving—even with both hands on the wheel.
I write for a living, which means a lot of typing. But it can also mean a lot of talking, thanks to technology.
For six years now, I’ve happily used voice-to-text apps on my smartphone. These cloud-based services have made me immensely more productive, whether I’m dictating an email or a story idea. I once dictated an entire blog post into my phone while I was driving (don’t tell anyone!). I couldn’t have imagined doing that when I started out as a cub reporter way back when, and Tandy TRS-80s (four lines of text!) were state-of-the-art laptops.
Speech-to-text, now built into so many apps, leverages the wonder of cloud computing: my speech is shunted to the big data center in the sky, which figures out what the heck I’m trying to say and then sends the text on to my blogging platform or word processing software. In an instant.
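To make that round trip concrete, here’s a rough Python sketch of what an app does under the hood. The endpoint, key, and response shape are made-up stand-ins, not any particular vendor’s API:

```python
import requests

def transcribe_in_cloud(audio_path: str, api_key: str) -> str:
    """Ship recorded audio to a cloud speech service; get text back."""
    with open(audio_path, "rb") as f:
        audio_bytes = f.read()

    # The device does almost no work here: the heavy lifting (acoustic
    # and language modeling) happens in the data center.
    response = requests.post(
        "https://speech.example.com/v1/recognize",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        data=audio_bytes,
    )
    response.raise_for_status()
    return response.json()["transcript"]  # assumed response shape
```

Every word I dictate has to survive that network hop both ways.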
It has its drawbacks, though: it doesn’t work well in areas with poor or no cell service, and even where there is coverage, accuracy can be sub-optimal, depending on signal strength.
Smarts, distributed
This is a perfect example of the central question of distributed intelligence: where to put the right compute power for a given task. As semiconductor technology gets more powerful and more power-efficient, the opportunity to take back certain computing duties from the cloud looms large. On edge devices, handling more of the compute locally can improve performance, responsiveness, and data security.
In my voice-to-text world, this means much more accurate text and faster rendering. For an AI-powered assistant like Siri or Alexa, it means more privacy, since your personal data is processed locally rather than in the cloud. In the automotive world, it means additional security and safety, since an autonomous car must be able to think, decide, and act even if it loses connectivity.
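Here’s a toy sketch of that edge-first pattern in Python. Both recognizers are stubbed-out placeholders of my own, and the confidence threshold is arbitrary; the point is the shape of the decision, not the implementation:

```python
def recognize_locally(audio: bytes) -> tuple[str, float]:
    """On-device model: fast, private, works offline (stubbed here)."""
    return "local transcript", 0.9

def recognize_in_cloud(audio: bytes) -> str:
    """Cloud model: potentially more accurate, but needs connectivity (stubbed)."""
    return "cloud transcript"

def transcribe(audio: bytes, online: bool, threshold: float = 0.8) -> str:
    text, confidence = recognize_locally(audio)
    # Escalate to the cloud only when we're connected and the local
    # result looks shaky; otherwise the audio never leaves the device.
    if online and confidence < threshold:
        text = recognize_in_cloud(audio)
    return text

print(transcribe(b"...", online=False))  # offline: still get a result
```

An autonomous car needs the same property in far more serious form: the local path has to be good enough to act on, with the cloud as a refinement rather than a dependency.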
Google’s Federated Learning is a great example of why a distributed AI computing model is best. It trains machine-learning models directly on smartphones, using each device’s data to improve the app experience. This could be done in the cloud, but by deploying ‘AI at the edge’ the updates happen immediately, and all private data remains on the device. Only later are the resulting model updates aggregated anonymously on Google’s servers and used to refine the app for everyone.
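To illustrate the idea (this is not Google’s actual implementation), here’s a toy federated-averaging loop in Python with NumPy. Each simulated phone computes a model update from its own private data and ships back only its weights, which the server averages into a new shared model:

```python
import numpy as np

rng = np.random.default_rng(0)
global_weights = np.zeros(4)  # the shared model everyone starts from

def local_update(weights: np.ndarray, local_data: np.ndarray) -> np.ndarray:
    """One step of on-device training; the raw data never leaves the phone."""
    gradient = local_data.mean(axis=0) - weights  # stand-in for real SGD
    return weights + 0.1 * gradient

# Each 'phone' holds private data the server never sees.
client_datasets = [rng.normal(size=(20, 4)) for _ in range(5)]

# Phones send back only their updated weights...
client_weights = [local_update(global_weights, d) for d in client_datasets]

# ...and the server aggregates them into a better shared model.
global_weights = np.mean(client_weights, axis=0)
print(global_weights)
```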
A first step on this journey to distributed intelligence came earlier this year when ARM unveiled its DynamIQ technology, a significant shift in multi-core microarchitecture that advances the “right processor for the right task” philosophy. It enables configurations of ARM big and LITTLE processors in a single compute cluster that were previously not possible. ARM CPG Vice President and General Manager Nandan Nayampally has written an overview of DynamIQ here.
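In software terms, “right processor for the right task” can be as simple as telling the OS scheduler where a job may run. Here’s a Linux-only Python sketch using the real os.sched_setaffinity call; which core IDs map to big or LITTLE cores varies by chip, so the sets below are made-up examples:

```python
import os

BIG_CORES = {4, 5, 6, 7}     # hypothetical high-performance cores
LITTLE_CORES = {0, 1, 2, 3}  # hypothetical high-efficiency cores

def run_on(cores: set[int]) -> None:
    """Restrict the current process to the given CPU cores (Linux only)."""
    os.sched_setaffinity(0, cores)  # pid 0 means "this process"

run_on(LITTLE_CORES)  # background work can sip power on efficiency cores
# ...later, a burst of ML inference wants the performance cores:
run_on(BIG_CORES)
```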
Now, ARM has taken the distributed intelligence concept another step by announcing the first products based on DynamIQ: the ARM Cortex-A75 and Cortex-A55 processors. To further optimize SoCs for distributed intelligence, ARM also launched its latest GPU, the Mali-G72. Based on the Bifrost architecture, the new graphics processor is designed for, among other things, the new and demanding use cases of on-device machine learning. Nayampally has written about those announcements here.
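To give a flavor of what on-device machine learning looks like from the developer’s side, here’s a minimal inference sketch using TensorFlow Lite’s Python interpreter as one example of an on-device runtime. The model file is a hypothetical placeholder; the pattern is what matters here: load a compact model once, then run it locally with no network round trip:

```python
import numpy as np
import tensorflow as tf

# Hypothetical model file; imagine a small wake-word or vision model.
interpreter = tf.lite.Interpreter(model_path="wake_word_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input shaped to whatever the model expects.
features = np.zeros(input_details[0]["shape"], dtype=np.float32)

interpreter.set_tensor(input_details[0]["index"], features)
interpreter.invoke()  # runs entirely on the device
scores = interpreter.get_tensor(output_details[0]["index"])
print(scores)
```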
As these types of advances make their way into SoCs and edge devices in the years ahead, don’t be surprised to see me alone, driving down some road in the Bay Area, chattering away to myself. I’ll be crafting another story about the next generation of technology thanks to the power of distributed intelligence.