Uncertainty Ahead

FinFETs hold huge promise, but they also raise questions for SoC makers that have yet to be answered.


If finFETs work as planned, it’s likely they will show up in every complex SoC for decades to come. Adding another dimension to transistors has enormous potential at advanced nodes, and maybe even at older nodes.

3D transistors also could be part of stacked die, and they can be combined with fully depleted SOI—two other options for reducing power. Moreover, it’s likely that whatever GlobalFoundries, TSMC and Samsung agree upon is what most fabless companies will utilize. And while companies may not move to the most advanced nodes as quickly as the foundries and EDA companies would like, they will use the latest technologies for controlling leakage current if they’re commercially proven, cost-effective and readily available as part of a complete solution.

FinFETs still have to prove themselves on all three counts, however. And no matter how large the hype factor, that’s a big challenge.

To begin with, test chips are not commercially available chips, and Intel processors—the first to commercially employ finFETs—are not SoCs. Intel’s processors are very regular structures with very regular layouts, shapes and well-defined rules. That makes for much more predictable yields.

For finFET-based SoCs, yield isn’t so simple to determine. It may be solvable with current tools and approaches. It may not. We don’t know. Nor do we know how lithography will affect yield and cost. EUV, which was slated to replace 193nm immersion lithography as far back as the 45nm node, now appears to have missed the 10nm window. How technologically viable is a quadruple-patterned finFET with a 14nm back-end-of-line process? Will yields fall within expected parameters?

Second, how much will it really cost to develop these chips? We are on the threshold of a new technology that has to contend with the same kinds of power density issues as the planar technology it replaces. While getting heat out of stacked die is a known problem, thermal patterns are far less well documented for tightly packed finFETs. The fins are like mini towers on a substrate that can trap heat, which can affect power budgets. Leakage is certainly reduced at the gate, and there is potential for reducing voltage. However, the same RC issues with interconnects and wires remain, not to mention noise.
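The persistence of those RC issues can be made concrete with a standard first-order estimate. The sketch below uses the classic Elmore approximation for a distributed RC line (delay ≈ 0.5·r·c·L²); the resistance and capacitance figures are rough, hypothetical values chosen purely for illustration, not measured process data.

```python
# Illustrative sketch: first-order interconnect delay via the Elmore
# approximation for a distributed RC line. As wires shrink, resistance
# per unit length rises while capacitance per unit length stays roughly
# flat, so this delay does not improve the way transistor delay does.
def rc_delay(r_per_um: float, c_per_um: float, length_um: float) -> float:
    """Distributed-line RC delay estimate: 0.5 * r * c * L^2."""
    return 0.5 * r_per_um * c_per_um * length_um ** 2

# Hypothetical values: 1 ohm/um, 0.2 fF/um, a 100 um wire.
delay_s = rc_delay(1.0, 0.2e-15, 100.0)
print(f"Estimated wire delay: {delay_s * 1e12:.2f} ps")
```

Because the delay grows with the square of wire length, doubling a wire quadruples its delay, which is one reason interconnect, rather than the transistor itself, increasingly dominates timing at advanced nodes.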

These are solvable issues, but they cost money to solve. Companies like Intel, IBM, Apple and Samsung command a premium for their processors, which means they can absorb the costs for these new technologies. If new chip technology can save data centers millions of dollars in electricity costs per year, it’s worth every penny of investment. Likewise, if those chips will be sold in volumes of hundreds of millions of units, the cost can be amortized more easily. But how about a run of 10,000 chips or even 100,000 chips? The economics are vastly different.
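The volume argument above is simple arithmetic, and a quick sketch makes it vivid. The $150M non-recurring engineering (NRE) figure below is a hypothetical placeholder for an advanced-node design-plus-mask budget, not a quoted industry number.

```python
# Illustrative sketch: amortizing a fixed NRE (design + mask) cost over
# different production volumes. The NRE figure is hypothetical.
def nre_per_unit(nre_cost: float, volume: int) -> float:
    """Return the share of NRE cost carried by each chip at a given volume."""
    return nre_cost / volume

NRE = 150_000_000  # hypothetical advanced-node NRE, in dollars

for volume in (10_000, 100_000, 100_000_000):
    per_chip = nre_per_unit(NRE, volume)
    print(f"{volume:>11,} units -> ${per_chip:>12,.2f} of NRE per chip")
```

At the hypothetical figures used here, a 10,000-unit run carries $15,000 of NRE per chip, while a hundred-million-unit run carries $1.50, which is the whole economic gulf the paragraph describes.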

Third, all of this work is just beginning. Planar transistors have been subject to decades of intensive and incremental engineering work. 3D transistors are brand new. And while they may be a major breakthrough in all respects, it will take time to understand all of their quirks, to automate the design and verification flow, and to improve the manufacturing and cost equation.

This is interesting technology with huge promise, but prime time for its rollout will depend on the market, an individual company’s tolerance for risk, and a lot of unknowns that have yet to be discovered.

—Ed Sperling
