Designing The Next Big Things

The edge is vast, vague, and highly specialized. That’s both good and bad.


The edge is a humongous opportunity for the semiconductor industry. The problem is that, despite its name, it's not a single thing. It will comprise thousands of different chips and systems, and very few will be sold in large volumes.

The edge is the culmination of decades of improvement in power and performance, coupled with the architectural creativity that has exploded since the benefits of scaling began dropping off. That, in turn, has made it possible to develop chips with enough horsepower and memory to distribute processing and intelligence nearly everywhere, including devices that can run on a coin-sized battery. The problem is that now everyone wants that capability for specific applications, but not every application will support the higher development costs. So now the industry needs to figure out a way forward.

There are several main options on the table today. One is to resurrect the old "superchip" concept, which is proven and effective, although not necessarily the most efficient approach. The basic idea here is to build a chip that can serve multiple markets, then either use software to control which portions are activated, or blow fuses to securely disable certain parts. The upside is that all of this can be designed up front. The downside is that distance matters, particularly in larger chips, where the extra time it takes for signals to travel across thin wires can impact performance, power, and thermal dissipation. The key in this case is understanding how to minimize these effects through a well-architected floorplan.
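To make the gating idea concrete, here is a minimal firmware sketch of how a single die might be configured into different SKUs by combining fuse state with a software enable map. This is purely illustrative: the register addresses, bit assignments, and block names are invented for this example and do not correspond to any real part.

```c
#include <stdint.h>

/* Hypothetical memory-mapped registers. Addresses and bit layouts
 * are invented for this sketch, not taken from any real device. */
#define FUSE_BANK_ADDR   0x40001000u  /* one-time-programmable fuse bits */
#define BLOCK_EN_ADDR    0x40001004u  /* software feature-enable register */

#define FUSE_GPU_DISABLED   (1u << 0)
#define FUSE_NPU_DISABLED   (1u << 1)

#define EN_GPU   (1u << 0)
#define EN_NPU   (1u << 1)
#define EN_DSP   (1u << 2)

static inline uint32_t reg_read(uintptr_t addr) {
    return *(volatile uint32_t *)addr;
}

static inline void reg_write(uintptr_t addr, uint32_t val) {
    *(volatile uint32_t *)addr = val;
}

/* Enable only the blocks this SKU is allowed to use: a block runs
 * only if its fuse is intact AND the product configuration wants it.
 * Blown fuses override software, which is what makes the downgrade
 * secure rather than merely a settings change. */
void configure_sku(uint32_t sku_enables) {
    uint32_t fuses = reg_read(FUSE_BANK_ADDR);
    uint32_t enables = sku_enables;

    if (fuses & FUSE_GPU_DISABLED) enables &= ~EN_GPU;
    if (fuses & FUSE_NPU_DISABLED) enables &= ~EN_NPU;

    reg_write(BLOCK_EN_ADDR, enables);
}
```

The same silicon can then ship as, say, a full-featured part (`configure_sku(EN_GPU | EN_NPU | EN_DSP)`) or a cost-reduced one with fuses blown at test time, which is the whole economic point of the superchip approach.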

A second approach is to use a tile-based strategy, such as chiplets. Intel, AMD and Marvell have developed modular architectures that allow them to customize their chips, and most of the foundries are developing schemes to allow them to quickly assemble devices out of pre-tested and characterized hardened IP in a similar way. The challenge with this approach has been the interconnect, as well as developing a marketplace of third-party tiles or chiplets to achieve economies of scale. So far, the early implementations all have been proprietary. There is work underway to standardize those interfaces, which could go a long way toward making semi-custom chips much more affordable.
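One way to picture what interface standardization buys is a common tile descriptor that an assembler can check before committing two chiplets to the same package. The sketch below is hypothetical: the struct fields, link types, and values are placeholders, not drawn from any existing chiplet standard.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical die-to-die link types; a real standard would define these. */
typedef enum { LINK_PARALLEL_BUMP, LINK_SERDES } link_type_t;

/* Descriptor a vendor might ship with a pre-tested, characterized chiplet. */
typedef struct {
    const char  *name;
    link_type_t  link;
    uint32_t     lanes;
    uint32_t     gbps_per_lane;   /* characterized, not theoretical */
} chiplet_t;

/* Two tiles can share a package only if their interconnects match --
 * exactly the compatibility problem a standard interface solves. */
bool compatible(const chiplet_t *a, const chiplet_t *b) {
    return a->link == b->link &&
           a->lanes == b->lanes &&
           a->gbps_per_lane == b->gbps_per_lane;
}

int main(void) {
    chiplet_t cpu = { "cpu-tile", LINK_PARALLEL_BUMP, 64, 16 };
    chiplet_t io  = { "io-tile",  LINK_PARALLEL_BUMP, 64, 16 };

    printf("cpu <-> io: %s\n", compatible(&cpu, &io) ? "ok" : "mismatch");
    return 0;
}
```

With proprietary interfaces, every vendor effectively has its own incompatible descriptor; a shared one is what would let a third-party marketplace of tiles reach economies of scale.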

A third approach is to pre-build platforms, similar to what the Raspberry Pi has done for simple IoT devices. The idea here is that some underlying pieces are common across many devices. But with edge systems, these will be significantly more complex. One size will not fit all. The big issue is being able to integrate multiple sensors and have localized AI/ML to screen out unnecessary data and process, or pre-process, only the useful data. That requires adding enough intelligence into these systems to avoid losing valuable data, while sending as little data as possible along for further processing and storage.
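A toy sketch of that screen-then-send pattern appears below. A simple variance threshold stands in for the localized AI/ML, and the `transmit` function is a placeholder for the uplink; the window size and threshold are arbitrary choices for illustration.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define WINDOW 8

/* Placeholder for the radio/network uplink; a real device would
 * batch, compress, and encrypt before sending anything. */
static void transmit(const double *window, int n) {
    printf("uplink %d samples (first=%.2f)\n", n, window[0]);
}

/* Stand-in for localized AI/ML: flag a window as "useful" when its
 * variance exceeds a tuned threshold. Anything below it is treated
 * as noise and never leaves the device. */
static int is_useful(const double *w, int n, double threshold) {
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += w[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (w[i] - mean) * (w[i] - mean);
    return (var / n) > threshold;
}

int main(void) {
    double window[WINDOW];
    /* Simulated sensor loop: fill a window, screen it locally,
     * and forward only the windows worth processing upstream. */
    for (int batch = 0; batch < 4; batch++) {
        for (int i = 0; i < WINDOW; i++)
            window[i] = sin(batch * WINDOW + i) + (rand() % 100) / 500.0;
        if (is_useful(window, WINDOW, 0.3))
            transmit(window, WINDOW);
    }
    return 0;
}
```

The interesting design decision is where the threshold logic lives: the smarter the on-device screening, the less bandwidth, latency, and cloud storage each device consumes, which is the core argument for localized intelligence at the edge.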

All of these strategies have been proven to work. Some approaches will work better in certain circumstances than others, and there may be room for combining elements from each approach. But the bottom line is that the edge will ramp much more quickly than other new markets simply because much of this isn’t new, and low latency, privacy and security require a localized solution.

Who wins in this space is another matter. That may vary from market to market, and from one region to the next. But it also will depend on who can best leverage their R&D budgets at the system level. This is a system play, not a race to see who can develop the fastest chip, and while performance is essential in some applications, it is less important in others. So one size doesn’t fit all, but not everything needs to be different.


