Bigger Wafers, Bigger Risk

The move to 450mm wafers is under way, but nagging questions about ROI remain.


At 22/20/16/14nm, the semiconductor industry is experiencing a new twist on Moore’s Law. Smaller, as in smaller feature sizes, is no longer assumed to be cheaper, or at least not for everyone. In fact, for the first time in more than half a century, the cost per transistor could rise in some cases.

Whether this outlook improves as the semiconductor industry gains more experience with finFETs and the newest processes remains to be seen. There will be some economic efficiencies, for sure, but the best guess is they won’t be as large as in the past due to double patterning, more complex design costs, and a variety of other physical and electrical effects.

There were indications of problems ahead back at 90nm, when classical scaling ended. At each new process node before that, a feature shrink was associated with a doubling in performance and a reduction in power. After 90nm, the numbers started dwindling.

But that was only the advance guard. At 20nm, with double patterning, and more recently the introduction of finFETs, the number of design issues that have to be resolved and verified is exploding. It takes more man-hours to produce a chip, more tools, more metal layers, more mask sets, more re-spins, more time—and more money. In fact, it takes lots more money, from the design all the way through to the fab equipment used to make the chips. And even then, the level of confidence that designs will work is going down as complexity goes up.

The solution being floated now is to build a bigger wafer. If you can increase the size of the wafer, you can amortize the cost of double, triple and quadruple patterning across many more dies per wafer. That works fine mathematically, and mathematicians will remind us at every opportunity that math is the only pure science.
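To make that arithmetic concrete, here is a minimal sketch in Python using the widely cited first-order gross-die-per-wafer approximation. The 100mm² die size is purely illustrative, not a real design:

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """First-order approximation: wafer area divided by die area,
    minus an edge-loss term proportional to the wafer circumference."""
    radius = wafer_diameter_mm / 2
    return math.floor(
        math.pi * radius ** 2 / die_area_mm2
        - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    )

DIE_AREA = 100  # mm^2 -- an illustrative SoC die size, not a real design
for diameter in (300, 450):
    print(f"{diameter}mm wafer: ~{gross_dies_per_wafer(diameter, DIE_AREA)} gross dies")
```

Under those assumptions, a 450mm wafer holds roughly 2.3 times as many gross dies as a 300mm wafer (about 1,490 versus 640), slightly better than the 2.25x raw area ratio because edge loss matters proportionally less on the bigger wafer. That is the arithmetic behind the amortization argument.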

Engineering is a science as well, but it doesn’t rely on pure formulas. It’s a collection of incremental advances and systematic approaches to problem solving. When an SoC carries 2 billion transistors, plus multiple power islands and voltages, clock domain crossing issues, electromigration, electromagnetic interference, thermal interference, memory contention, layout issues and insufficiently characterized IP, verifying that it all works is a huge challenge. The functional and physical components of a chip need to be understood and digested by engineers working under extremely tight deadlines with software that may or may not work properly out of the chute.

Add to that uncertainties about yield, test, and random process variation at very deep (as in future single-digit) submicron geometries, plus new stress effects and mask issues with double, triple and quadruple patterning. Suddenly, what looks like a simple math formula isn’t so simple anymore. In fact, it’s not even a mathematical formula. It’s a risk/reward model based on some very uncertain assumptions and some very real risks.
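To see why, fold yield into the equation. The figure that actually matters is cost per good die, and a small sketch with purely hypothetical wafer costs and yields shows how quickly the 450mm advantage can evaporate:

```python
def cost_per_good_die(wafer_cost, gross_dies, yield_fraction):
    """Spread the processed-wafer cost over only the dies that work."""
    return wafer_cost / (gross_dies * yield_fraction)

# All numbers are hypothetical; real wafer costs and yields are closely guarded.
mature_300mm = cost_per_good_die(wafer_cost=5_000, gross_dies=640, yield_fraction=0.85)
early_450mm = cost_per_good_die(wafer_cost=12_000, gross_dies=1_490, yield_fraction=0.50)

print(f"300mm, mature process: ${mature_300mm:.2f} per good die")
print(f"450mm, early ramp:     ${early_450mm:.2f} per good die")
```

With those made-up numbers, the bigger wafer delivers 2.3 times the dies yet costs roughly 75% more per good die until wafer cost and yield mature. Whether and when they mature is exactly the uncertain assumption the model rests on.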

For companies such as Intel and Nvidia, and some of the memory makers, this risk can be controlled with very regular layouts. For companies such as IBM and Apple, it can be absorbed in the cost of the end device. But for SoC makers, particularly those racing for a socket with a complex design, the risk of pushing to the latest node is already high. Moving to a 450mm wafer may or may not improve that equation.

The ROI of moving forward with 450mm wafers will continue to be scrutinized, particularly on the most advanced process nodes. There are some very smart people working on this problem, and they often come up with solutions no one anticipates at the outset. But this is hardly a straightforward formula anymore, and no matter how clean the math may look or how linear the path may seem, reality could still turn out to be much different.


