Is There A Crossover Point For Mainstream Anymore?

A flood of options and custom solutions is taking a toll on economies of scale.

Until 28nm, it was generally assumed that process nodes would go mainstream one or two generations after they were introduced. So by the time leading-edge chips for smartphones and servers were being developed at 16/14nm and 10/7nm, it was assumed that developing a chip at 28nm would be less expensive and less complex, and that the process rule deck would shrink.

That worked for decades. Then leading-edge development hit a wall, the smartphone market flattened out, and new markets such as AI/ML, 5G infrastructure, automotive, and the edge in general began demanding new architectural approaches, because scaling no longer provided enough improvement in performance and power. Instead of an orderly progression to the next few process nodes, we now have half-nodes, nodelets, and a long list of advanced packaging options that work with just about any node or nodelet.

So rather than hardware defining software, or even software defining hardware, data types, data volumes, and end applications are now defining both hardware and software. That has upended the entire semiconductor industry, opening doors for chips in places where they have never been used before, but also putting an emphasis on unique ways of combining those chips with other chips. A processor, no matter how many cores it has, is now just one more compute element in a complex combination of accelerators, memories, and various types of interconnects. Those elements may be connected on a single planar chip, or packaged vertically or horizontally using any of a number of approaches, ranging from fan-outs and system-in-package to memory on interposer on logic (2.5D), pillars on fan-outs, and many other combinations.

There are great benefits to being able to push the boundaries of computing, from both a speed and a power perspective. But they all come at a cost. Economies of scale require repetition of proven methods for putting all of the pieces together quickly, and with confidence they will work as expected throughout their intended lifetimes. To complicate matters further, some leading-edge chips are now being deployed in safety-critical applications, so those lifetimes are measured in 10 to 20 years rather than 2 to 4.

None of this bodes well for squeezing costs out of design, the various manufacturing processes, and final test. Some of the nodelets are as yet unproven, and IP vendors have balked at developing IP for every numbered node down to 2nm (13nm, notably, was skipped) because the payback is uncertain. Without third-party IP, these nodes aren't viable. And without enough volume, it's difficult to mature a process or to find automation tools developed specifically for those nodes.

But it also raises a more far-reaching question: what will be considered a mainstream process in the future? There are several possibilities. One is that the nodelets fade away and the industry continues from 7nm to 5nm, 3nm, 2nm, and whatever number comes after that. The second is that volume manufacturing shifts away from scaling toward packaging schemes in which certain chips have been proven to work. This is what the OSATs are betting on, and where large foundries and IDMs such as Intel are investing heavily in various bridges and packaging processes. The third is that the architectures themselves become the dominant approach to scaling, optimizing both hardware and software, with the emphasis on lower-cost compute elements in some standardized package.

These are very different business models, and tens of billions of dollars of investment are riding on various bets placed across all of these approaches. So far, however, it’s not at all clear which will be the next mainstream approach. And that is keeping a lot of people awake at night.


