Performance To The People

Sending most of the data to the cloud doesn’t work. Now what?

Ever since the IoT became a household term, the almost universal assumption has been that extremely low-power, simple devices would rule the edge. They would collect data, send it to the cloud, and the cloud would send back useful information.

That’s a great marketing concept for gateways and cloud services, but it’s not scalable. Consumers don’t just want to know when their heartbeat is irregular. They want to understand that irregularity in the context of many other factors, such as stress, temperature, motion and medical history. And if they send all of that information to the cloud, along with billions of other people doing the same, the result will be the greatest data logjam in the history of electronics.

Think about the performance of popular shopping sites. While they typically don’t crash anymore, they do run more slowly around holidays. And that’s true even when searching with a high-speed connection using a powerful computer. Now extrapolate this out to trillions of devices collecting all sorts of data—some useful, most not—and suddenly the amount of data coursing through the Internet becomes astounding. The data from vision sensors alone could bring the Internet to its knees.

The problem isn’t that the Internet will go down (although it could, given that no one is directly responsible for it). The real issue is that I/O requires too much power and is too slow when massive amounts of data are involved. The better approach is to do more processing locally.
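To make that concrete, here is a minimal sketch of what local processing can look like on a constrained device: rather than streaming every raw sample upstream, the device reduces each window of readings to a few summary values before the radio ever turns on. The function names, window size and payload format are illustrative assumptions, not any particular product’s API.

```c
#include <stdint.h>
#include <stdio.h>

#define WINDOW 32

/* Stub uplink -- stands in for whatever radio or bus a real device has. */
static void radio_send(const uint16_t *payload, int count)
{
    printf("uplink: %d values (%zu bytes)\n", count, count * sizeof *payload);
}

static uint16_t window[WINDOW];
static unsigned idx;

/* Called once per raw sample; transmits only a per-window summary. */
static void on_sample(uint16_t sample)
{
    window[idx++ % WINDOW] = sample;
    if (idx % WINDOW != 0)
        return;                          /* wait for a full window */

    uint32_t sum = 0;
    uint16_t lo = UINT16_MAX, hi = 0;
    for (int i = 0; i < WINDOW; i++) {
        sum += window[i];
        if (window[i] < lo) lo = window[i];
        if (window[i] > hi) hi = window[i];
    }

    /* Three values per window instead of 32 raw readings. */
    uint16_t summary[3] = { (uint16_t)(sum / WINDOW), lo, hi };
    radio_send(summary, 3);
}

int main(void)
{
    for (uint16_t s = 0; s < 4 * WINDOW; s++)
        on_sample(s);
    return 0;
}
```

Even this trivial reduction cuts uplink traffic by roughly 10x, and on devices like these the radio, not the arithmetic, is typically the dominant power cost.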

But how much more processing can be done locally depends on how well several engineering challenges are addressed.

First of all, solutions in one market may not apply in another. The IoT may require billions or trillions of devices, but economies of scale, where they exist, will be limited to individual markets, at least initially. A key component of this is understanding the best way to collect data in these markets and the most efficient way to process that data, and then building chips that maximize performance and power efficiency specifically for that data. This is hardware-software co-design on a deeper level. It requires knowledge of different data types and some forethought into how that data ultimately will be parsed and processed, as well as what the end goal is for that processing.
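As a small, hypothetical illustration of that kind of co-design: once a market’s data type and its range are known up front, the encoding, and with it the datapath, can be sized to match. The sketch below assumes a wearable temperature sensor with a known 25-45°C range; the numbers are invented for illustration, not taken from any real specification.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed range for one market's data type (skin temperature).
 * These bounds are illustrative, not from any real specification. */
#define TEMP_MIN_C 25.0f
#define TEMP_MAX_C 45.0f

/* One byte instead of a 4-byte float: ~0.08 C steps over the known
 * range, shrinking storage, bandwidth, and the datapath width the
 * silicon has to carry. */
static uint8_t temp_encode(float celsius)
{
    float t = (celsius - TEMP_MIN_C) / (TEMP_MAX_C - TEMP_MIN_C);
    if (t < 0.0f) t = 0.0f;             /* clamp to the known range */
    if (t > 1.0f) t = 1.0f;
    return (uint8_t)(t * 255.0f + 0.5f);
}

static float temp_decode(uint8_t code)
{
    return TEMP_MIN_C + (code / 255.0f) * (TEMP_MAX_C - TEMP_MIN_C);
}

int main(void)
{
    uint8_t code = temp_encode(36.6f);
    printf("36.6 C -> code %u -> %.2f C\n", code, temp_decode(code));
    return 0;
}
```

One byte per reading instead of a four-byte float is a 4x saving in storage and bandwidth before any compression is applied, and it falls out of understanding the data rather than out of a cleverer circuit.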

Second, and closely related, the hardware, firmware and software need to be not only extremely power-efficient, but increasingly self-sufficient. Many of these edge devices will run on batteries, so being able to partition functionality into appropriately sized hardware is a first step. Adding in energy harvesting technology wherever possible will be essential. While battery improvements have not kept pace with the demand for processing performance, energy harvesting has been progressing using a spectrum of techniques ranging from motion to chemical differentials to ambient radio waves and radiation. So far these have not been commercialized because no single approach provides enough energy, but over the next decade we’re likely to see a number of these techniques combined in a single device. That will change the fundamental precepts of system design.
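The arithmetic behind that prediction is worth running. A back-of-the-envelope sketch, with every power figure assumed purely for illustration: no single source below keeps a device alive on its own, but stacked together they buy a workable duty cycle.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed average yields per harvesting source, in microwatts.
     * Illustrative figures only; real yields vary widely with motion,
     * temperature gradients and the local RF environment. */
    double vibration_uW = 100.0;  /* motion (piezoelectric)          */
    double thermal_uW   = 60.0;   /* thermal/chemical differential   */
    double rf_uW        = 10.0;   /* ambient radio waves             */
    double harvest_uW   = vibration_uW + thermal_uW + rf_uW;

    /* Assumed device draw: 5 mW active, 5 uW asleep. The harvest
     * budget sets the duty cycle the device can sustain indefinitely:
     * duty * active + (1 - duty) * sleep = harvest. */
    double active_uW = 5000.0, sleep_uW = 5.0;
    double duty = (harvest_uW - sleep_uW) / (active_uW - sleep_uW);

    printf("combined harvest %.0f uW -> sustainable duty cycle %.1f%%\n",
           harvest_uW, duty * 100.0);
    return 0;
}
```

The specific numbers matter less than the discipline they impose: average harvested power caps average consumed power, and the system architecture has to be designed backward from that cap.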

Third, as IoT markets begin showing more structure, platforms will begin to develop that maximize performance and minimize power for those markets. This sounds simple enough, but understanding the underlying performance well enough to cobble together customized solutions based on specific market needs represents another fundamental shift in chip design. The concept of mass customization requires a deep understanding of market specifics, of data flow, and of how much of each is subject to change.

Taken as a whole, the goal is lower power for some operations, higher power where necessary, and sufficient performance and functionality everywhere. Achieving that requires a rethinking of the basics of computer science, as well as the architectures that support it. This is no longer about shrinking a mainframe down to a smartphone, with a mainframe waiting in the cloud to process the excess data. It’s about moving full functionality closer to the data, with device architectures that can utilize that data faster, more efficiently and more intelligently, for much longer periods between battery charges.


