Math Questions

The next couple of process node shifts are all about power. But how much it will cost to get there is a rather unstable variable.


The race is on. GlobalFoundries, TSMC, Samsung, IBM and Intel are all neck-deep in research, test chips, variability studies, lithography and three-dimensional transistor designs.

For the first time, though, the goal has very publicly shifted from performance and area to energy efficiency. Doubling battery life at existing performance levels over the next couple of nodes could mean smartphones that last more than a day between charges—maybe several days—even under intensive computing loads.

This is possible today with a mixture of power management techniques, such as multiple power islands, dynamic voltage and frequency scaling, near-threshold computing, and more efficient software. And those techniques will remain useful for squeezing every last microwatt of efficiency out of a design.
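As a rough, back-of-the-envelope illustration (not from the article) of why a technique like dynamic voltage and frequency scaling pays off: switching power scales roughly with C·V²·f, so lowering supply voltage along with frequency reduces power superlinearly. The sketch below uses purely hypothetical capacitance, voltage and frequency numbers.

```python
# Illustrative only: dynamic (switching) power approximated as
# P = alpha * C * V^2 * f. All values below are hypothetical.

def dynamic_power(cap_farads, voltage, freq_hz, activity=0.2):
    """Approximate switching power in watts."""
    return activity * cap_farads * voltage**2 * freq_hz

nominal = dynamic_power(1e-9, 1.0, 2.0e9)   # full speed at 1.0 V, 2.0 GHz
scaled  = dynamic_power(1e-9, 0.8, 1.5e9)   # DVFS point at 0.8 V, 1.5 GHz

print(f"nominal: {nominal*1e3:.0f} mW, scaled: {scaled*1e3:.0f} mW "
      f"({100 * (1 - scaled/nominal):.0f}% lower)")
```

Dropping frequency alone saves power linearly; it is the accompanying voltage reduction, squared in the formula, that makes the savings disproportionate.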

But not everyone has been working with those advanced power-saving techniques. In fact, many companies see the fastest way to market as letting the process technology providers—namely, the foundries—deliver the power/performance improvements, as they always have in the past. And at 28nm using fully depleted silicon-on-insulator (FD-SOI), and at 20nm and beyond using finFETs—in conjunction with FD-SOI for Common Platform members Samsung, GlobalFoundries and IBM, and without FD-SOI for TSMC—there are big gains in efficiency to be had just through a process migration.

Still, there are new issues to deal with at 20nm/14nm—the so-called 14nm XM technology node—including double patterning, made necessary largely because EUV lithography is now four process nodes late, as well as much more complicated 3D transistor design. But many of the kinks have already been worked out to make this viable, and the processes will likely mature considerably over the next couple of years.

So where will chipmakers get the most bang for their megabucks? Much of this is market-dependent, of course. You don’t create a $50 million chip if you don’t have the volume to support it. And you don’t skimp on a chip that could be the central processor in a billion-device market.

But getting to relevant numbers for each market requires a lot of number crunching and some very complex equations. Is time (and money) better spent on double patterning and finFET-based designs, or on existing processes using FD-SOI and body biasing? Or do you develop more sophisticated power-saving techniques at 40nm or 65nm, knowing you can re-use that expertise as the newer nodes mature?

Unfortunately, there are no simple answers here. It’s a lot of math, and that math rests on a lot of assumptions, such as NRE, verification time, and third-party and internal IP costs. This is benefit/risk assessment at its most complicated—more of a distribution than an equation with a single answer—and you have to wonder whether the people doing the math can ever really understand all the variables involved. Just as SoC design has become so complex that it has surpassed any one person’s ability to understand all of the pieces without some very sophisticated tools, so has figuring out the best set of process and design options.
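To make the kind of number crunching involved a little more concrete, the toy model below amortizes NRE over expected volume and folds in die cost and yield. Every figure in it (NRE, wafer cost, dies per wafer, yield, volume) is a made-up assumption for illustration, not industry data, and a real analysis would also weigh schedule risk, IP availability and verification effort.

```python
# Toy per-unit cost comparison for two hypothetical process choices.
# All numbers are illustrative assumptions, not real foundry data.

def unit_cost(nre_dollars, volume, wafer_cost, dies_per_wafer, yield_frac):
    """Per-chip cost = amortized NRE + cost of one good die."""
    good_dies_per_wafer = dies_per_wafer * yield_frac
    die_cost = wafer_cost / good_dies_per_wafer
    return nre_dollars / volume + die_cost

options = {
    "28nm FD-SOI (mature flow)": unit_cost(
        nre_dollars=40e6, volume=50e6,
        wafer_cost=3_000, dies_per_wafer=600, yield_frac=0.85),
    "14nm finFET (leading edge)": unit_cost(
        nre_dollars=150e6, volume=50e6,
        wafer_cost=7_000, dies_per_wafer=1_000, yield_frac=0.70),
}

for name, cost in options.items():
    print(f"{name}: ${cost:.2f} per chip")
```

Even in this simplified form, the answer flips depending on volume and yield assumptions, which is exactly why the real calculation looks more like a distribution than a single number.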

So far, no tools have been developed to make that job easier. And in an industry where bad guesses can inflict mortal damage on a company, this is clearly a problem that needs to be solved.

—Ed Sperling


