The rollout of new edge devices will require both high performance and low power, and achieving both at once is not simple.
As companies begin exploring what it will take to win at the edge, they are running into some daunting challenges.
Designing chips for the edge is far different from designing for the IoT/IIoT. The idea behind the IoT was that simple sensors would relay data through a gateway to the cloud, where it would be processed, and results could be sent back to the device as needed. That works for small amounts of data, such as a temperature reading or a smoke alert, where latency is not an issue. But it doesn't work when the data involves images or streaming video, or when a millisecond can mean the difference between life and death.
Sending everything back and forth to the cloud is time-consuming and energy-inefficient, and it unnecessarily raises security and privacy concerns. A better approach is to process data locally and send to the cloud only what the user is willing to let someone else see.
There are three main challenges here, along with plenty of smaller ones. First, in many cases the amount of raw data initially pulled in needs to be far more limited than in the past. Ingesting everything for processing, particularly images or video, requires much more computation than necessary and puts a burden on resources such as memory and I/O.
The alternative is to import less in the first place, but that requires some intelligence at the sensor level. In effect, to reduce the amount of processing further downstream, more intelligence has to be added closer to the source of the data. Typically it isn't essential to keep all of the data, but it is essential to understand what's being left behind and why, and this is emerging as one of the biggest challenges for AI/ML/DL. Doing that in the footprint of a sensor requires some very advanced electronics.
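In practice that intelligence can start with something as simple as deciding whether new data differs enough from what already has been forwarded. The C sketch below is a minimal, hypothetical illustration of that idea; the frame size, the difference threshold, and the dropped-frame counter are all illustrative assumptions, not any real sensor's interface, and a production sensor front end would be far more sophisticated.

/*
 * Minimal sketch of sensor-level data reduction: a frame is only
 * forwarded downstream when it differs enough from the last frame
 * that was actually sent. All names and thresholds are illustrative.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FRAME_PIXELS   (320 * 240)   /* small monochrome frame       */
#define DIFF_THRESHOLD 1000000UL     /* tuned per application        */

/* Sum of absolute pixel differences between two frames. */
static unsigned long frame_difference(const uint8_t *a, const uint8_t *b)
{
    unsigned long diff = 0;
    for (size_t i = 0; i < FRAME_PIXELS; i++)
        diff += (a[i] > b[i]) ? (a[i] - b[i]) : (b[i] - a[i]);
    return diff;
}

/*
 * Decide whether to forward the new frame. Returns 1 if forwarded.
 * The dropped counter records what is being left behind, so the
 * system can reason about (and report) what it chose not to send.
 */
static int maybe_forward(const uint8_t *new_frame, uint8_t *last_sent,
                         unsigned long *dropped)
{
    unsigned long diff = frame_difference(new_frame, last_sent);

    if (diff < DIFF_THRESHOLD) {
        (*dropped)++;                 /* left behind: too similar to last frame */
        return 0;
    }
    memcpy(last_sent, new_frame, FRAME_PIXELS);
    /* In a real device this would hand the frame to the downstream link. */
    printf("forwarding frame (diff=%lu, dropped so far=%lu)\n", diff, *dropped);
    return 1;
}

int main(void)
{
    static uint8_t last_sent[FRAME_PIXELS];   /* last frame actually sent */
    static uint8_t frame[FRAME_PIXELS];
    unsigned long dropped = 0;

    /* Synthetic frames: a quiet scene, then a change in one region. */
    maybe_forward(frame, last_sent, &dropped);   /* dropped: identical scene */
    memset(frame, 200, FRAME_PIXELS / 4);        /* simulate movement        */
    maybe_forward(frame, last_sent, &dropped);   /* forwarded                */
    return 0;
}

Even a gate this crude cuts downstream traffic dramatically, but it also shows where the difficulty lies: the threshold determines what never leaves the sensor, so choosing it intelligently is exactly the problem described above.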
This leads to the second challenge. Each device needs to be customized, or at least semi-customized, for the job it needs to do. If it's a car sensor identifying objects in the road, the driver or vehicle has to know whether those objects are fixed or moving, and that information needs to be captured with consistent accuracy. The only way to do that is in context. While interpreting movement at the sensor level isn't the same as figuring out whether it's a dog or a person or a stalled car, it does require very fast processing at low power in a highly customized design. That information needs to be correlated with other information collected and processed in other parts of the vehicle to determine whether there's a problem on the road ahead, or whether another sensor has been blinded by dirt or a rock and isn't picking up that signal.
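To make that correlation step concrete, the sketch below is a deliberately simplified, hypothetical example of cross-checking two sensors with overlapping coverage. The structure, field names, thresholds, and confidence values are illustrative assumptions rather than any production fusion algorithm.

/*
 * Minimal sketch of cross-checking two overlapping sensors. A confident
 * detection on one side with a silent, degraded peer is treated as a
 * suspect sensor rather than as evidence that the road is clear.
 */
#include <stdio.h>

struct sensor_report {
    int    object_detected;   /* 1 if this sensor sees an object    */
    double confidence;        /* detection confidence, 0.0 .. 1.0   */
    double signal_quality;    /* self-reported quality, 0.0 .. 1.0  */
};

enum fusion_result { NO_HAZARD, HAZARD_AHEAD, SENSOR_SUSPECT };

static enum fusion_result correlate(const struct sensor_report *a,
                                    const struct sensor_report *b)
{
    int a_sees = a->object_detected && a->confidence > 0.8;
    int b_sees = b->object_detected && b->confidence > 0.8;

    if (a_sees && b_sees)
        return HAZARD_AHEAD;
    if (a_sees && !b_sees && b->signal_quality < 0.3)
        return SENSOR_SUSPECT;   /* peer may be blinded, not a clear road */
    if (b_sees && !a_sees && a->signal_quality < 0.3)
        return SENSOR_SUSPECT;
    if (a_sees || b_sees)
        return HAZARD_AHEAD;     /* a single confident detection still counts */
    return NO_HAZARD;
}

int main(void)
{
    struct sensor_report cam   = { 1, 0.92, 0.95 };  /* camera sees an object   */
    struct sensor_report radar = { 0, 0.00, 0.10 };  /* radar silent, degraded  */

    if (correlate(&cam, &radar) == SENSOR_SUSPECT)
        printf("object reported, peer sensor may be blinded\n");
    return 0;
}

The design choice worth noting is that disagreement between sensors is itself information: the system uses one sensor's confident report to question another sensor's silence, which is the kind of context-dependent behavior that has to be built into these devices.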
It's not enough just to put in the most powerful processors. These sensors need to be designed for highly specific tasks, and they may only do one or two things well. But they need to do those things extremely fast, accurately enough for the needs of the system, and they need to be reliable over time. Moreover, they need to do this using minimal amounts of power, which means not only does the chip have to be customized, but the job the chip is supposed to handle has to be tuned to the chip.
Finally, these devices need to be cost-competitive with less-advanced devices. Whether that cost can be amortized at the system level, or whether it comes down to price squeezing at the chip level, isn't known yet. But basic economics dictate that no one will pay for a fully custom design costing tens of millions of dollars if it will be sold in limited quantities in low-cost devices. A $30 million development effort spread across 100,000 units, for example, adds $300 of cost to every chip before a single die is manufactured.
These are tough parameters to work within. The opening up of the edge is a potential bonanza, but it will require some major shifts in how chips are designed, how partitioning occurs at the system level, and how much can be pre-developed without customization. New use cases will drive different economies of scale at the edge, but how power and performance get prioritized, updated, and customized in these complex systems remains to be seen.