OPINION

More Knobs, Fewer Markers

There are plenty of opportunities for improved performance at lower power, but not the usual ones.


The next big thing in chip design may be really big — the price tag.

In the past, when things got smaller, so did the cost per transistor. Now chips are getting more expensive to design and manufacture, and the cost per transistor is rising along with the number of transistors per unit of die area, and in many cases even the size of the die itself. That's not exactly a winning economic formula, which is why there is so much research underway and so much money pouring in from investors.

There are several complicating and converging factors, aside from the slowdown in Moore's Law scaling. The first is that more sensors of all types, whether cameras or inertial sensors, are generating far more data than in the past. In fact, there is so much data that it needs to be processed in place, because sending all of it anywhere is far too resource-intensive. The useful portion needs to be cleaned and structured before it is sent on, and the rest needs to be discarded for good.
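As a rough illustration of what that in-place reduction might look like, here is a minimal Python sketch. The sensor format, the window size, and the Summary record are all hypothetical and not drawn from any particular platform; the point is simply that a small, structured record leaves the device while the raw buffer is thrown away.

```python
# Minimal sketch of edge-side data reduction (hypothetical sensor format).
# Raw samples are cleaned and summarized locally; only the compact,
# structured summary is transmitted, and the raw buffer is discarded.

from dataclasses import dataclass
from statistics import mean

@dataclass
class Summary:
    window_start: float   # timestamp of the first sample in the window
    sample_count: int     # how many raw samples were reduced
    mean_accel: float     # representative value kept for upstream analytics
    peak_accel: float     # the outlier information the cloud actually needs

def reduce_window(samples):
    """Clean and structure one window of raw (timestamp, accel) samples."""
    # Drop obviously bad readings (sensor glitches) before summarizing.
    clean = [(t, a) for t, a in samples if abs(a) < 16.0]
    if not clean:
        return None
    return Summary(
        window_start=clean[0][0],
        sample_count=len(clean),
        mean_accel=mean(a for _, a in clean),
        peak_accel=max(abs(a) for _, a in clean),
    )

# Usage: 1,000 raw samples collapse into one record; only that record is sent.
raw = [(i * 0.001, 0.1 * (i % 50)) for i in range(1000)]
print(reduce_window(raw))
```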

Second, entirely new layers of processing and storage need to be developed, because the round trip of sending everything to the cloud and back is far too slow. This is generally referred to as the edge, but there is no clear vision yet for how it will take shape in the coming months and years. The challenge is that requirements for each application and market segment are very different, which means many of these solutions need to be customized, or at least semi-customized.

A third issue is that all of this is changing so quickly that by the time designs are finished, many of them are already obsolete. In the past, most chipmakers planned successive waves of designs based on derivative chips. But there are now so many fundamental changes across markets and in the designs themselves that it is much more difficult to leverage one design into the next generation.

This isn't a cause for panic. There are plenty of solutions to these problems, and lots of ways to lower power and improve performance. The big question is which ones to choose, because at this point there is little consensus about which options are best. Unlike in the past, when the entire semiconductor industry agreed on a single roadmap, today's roadmaps are of more limited value. The industry has been splintering into markets and sub-markets for several years, and there is no indication that trend will change.

From a hardware flexibility standpoint, one approach that seems to be gaining momentum involves chiplets. The idea has been on the drawing board for years, and nearly everyone agrees that a 5nm process is not required for everything. Heterogeneous integration is clearly the way forward, and chiplets are the most flexible way of making that happen, particularly if they adhere to a standard way of connecting them together. If this works as planned, design teams should be able to choose chiplets based on specs, price, and end-market needs, and assemble them into a package relatively quickly.
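To make that concrete, the toy sketch below picks a chiplet from a catalog based on function, power budget, and price. The catalog, field names, and numbers are invented for illustration; no real chiplet marketplace or interconnect standard is implied.

```python
# Toy sketch of spec-driven chiplet selection. Everything here (names,
# fields, numbers) is hypothetical; it only illustrates choosing parts by
# spec and price rather than building everything on the leading-edge node.

from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str
    function: str      # e.g. "cpu", "ai_accel", "io"
    process_nm: int    # the node the die was actually built on
    power_mw: int      # worst-case power draw
    unit_cost: float   # dollars per known-good die

CATALOG = [
    Chiplet("cpu-a", "cpu", 5, 900, 38.0),
    Chiplet("cpu-b", "cpu", 12, 1400, 17.0),
    Chiplet("npu-a", "ai_accel", 7, 1200, 29.0),
    Chiplet("io-a", "io", 22, 300, 4.5),
]

def pick(function, max_power_mw, budget):
    """Return the cheapest chiplet that meets the function and power spec."""
    options = [c for c in CATALOG
               if c.function == function
               and c.power_mw <= max_power_mw
               and c.unit_cost <= budget]
    return min(options, key=lambda c: c.unit_cost) if options else None

# A cost-constrained design can take the 12nm CPU die and pair it with I/O
# built on a mature node, instead of pushing everything to 5nm.
print(pick("cpu", max_power_mw=1500, budget=25.0))
```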

Another big knob to turn is on the algorithm side. It's one thing to co-design hardware and software to optimize both. But it's quite another to rethink the structure of the algorithm itself, using approaches such as domain-specific languages with restricted expressiveness, parallel patterns, and dynamic precision. Stanford University, for example, has been developing Plasticine, a reconfigurable architecture built around parallel patterns.
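To give a sense of what restricted expressiveness buys, the sketch below writes a kernel using only two named parallel patterns. This is plain Python for illustration, not the Plasticine toolchain or any real domain-specific language; the point is that patterns such as map and reduce expose structure that a compiler or reconfigurable fabric can lay out in parallel.

```python
# Illustrative only: a toy "restricted" pattern vocabulary, not a real DSL.
# Expressing the kernel with named parallel patterns instead of arbitrary
# loops is what lets a compiler or reconfigurable fabric exploit the
# parallelism directly.

from functools import reduce as _reduce

def pmap(fn, xs):
    """Element-wise pattern: every application is independent, so it can be
    spread across parallel lanes or pipeline stages."""
    return [fn(x) for x in xs]

def preduce(fn, xs, init):
    """Associative reduction pattern: can be implemented as a tree of
    combiners rather than a sequential accumulation."""
    return _reduce(fn, xs, init)

# A dot product written only with the restricted patterns.
def dot(a, b):
    products = pmap(lambda pair: pair[0] * pair[1], list(zip(a, b)))
    return preduce(lambda acc, x: acc + x, products, 0.0)

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))   # 32.0
```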

While the general rule of thumb is that things can be done faster in hardware than in software, they can be done much faster in both if less processing needs to happen in the first place. And this is where the innovation really needs to occur. Faster logic is always good, but processing less (with existing or future logic) can be faster still and use less energy. Ultimately, it could go a long way toward trimming both area and manufacturing costs, which could make the next big thing quite different.
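One concrete way to process less is to let a cheap, low-precision pass resolve the easy cases and reserve full-precision work for the ambiguous ones. The sketch below is purely illustrative; the quantization scheme, thresholds, and decision boundary are made up.

```python
# Hypothetical illustration of "processing less": a coarse, quantized pass
# resolves the clear-cut cases, and only ambiguous inputs pay for the
# full-precision computation. All numbers here are invented.

def coarse_score(sample):
    # Fixed-point-style quantization: cheap, approximate.
    q = [int(v * 16) for v in sample]
    return sum(x * x for x in q) / 256.0   # approximate sum of squares

def precise_score(sample):
    # Full-precision path: accurate, but far more work on real hardware.
    return sum(v * v for v in sample)

DECISION = 5.0   # full-precision decision boundary (made up)

def classify(sample, margin=3.0):
    s = coarse_score(sample)
    if s < DECISION - margin:
        return "reject"    # coarse result is clearly below the boundary
    if s > DECISION + margin:
        return "accept"    # coarse result is clearly above the boundary
    # Only ambiguous samples trigger the expensive full-precision pass.
    return "accept" if precise_score(sample) > DECISION else "reject"

print(classify([0.05, 0.10, 0.02]))   # "reject", settled by the coarse pass alone
```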


