Pick A Number

What the next big process node will be is unclear. It may not even matter.


For the past two years there has been some mumbling that 16/14nm would be short-lived, and that 10nm would be where foundries invest heavily. Now the buzz is that 10nm may be skipped entirely and that the next node will be 7nm. After all, 10nm is really only a half-node.

Or is it? The answer depends on who’s defining 10nm. The 16/14nm node is based on a 20nm back-end-of-line process, unless you happen to be Intel or IBM, which use real 14nm processes with 14nm finFETs. And 7nm may actually be more like 8nm or 9nm with a 14nm process. So 10nm may or may not be 10nm, depending upon the source of that process technology, and even some of those numbers may change slightly. Would anyone really notice if a process slipped by one or two nanometers?

If all of this sounds confusing, it should. While there is a certain amount of righteousness in declarations about who’s adhering to naming conventions, the reality is that after 28nm there is no longer a consistent way for naming nodes—is it line widths, back-end-of-line measurements, front-end-of-line measurements, or something in between? Some of this is marketing, some of this is metrology, and most of this is impossible for the average company to verify.

This may not matter a whole lot in terms of making chips. If a chipmaker can get a power/performance/cost improvement out of moving to the next-generation process technology, and reap the benefits of increased density, the numbers probably don’t matter. And EDA and IP vendors have long since given up on the idea that you can develop tools or IP for one foundry’s process and have them work on another foundry’s process at the same node number.

But that also raises some interesting questions. Given the choice between a real 7nm chip and a 10nm chip, which one will a system vendor choose? And why will they choose it?

It’s clear the semiconductor industry has a classification problem. It started when foundries couldn’t sell 20nm (despite the fact that a couple of major mobile architectures are based on the real 20nm process with planar transistors) because it was too hard to control leakage current. Adding double patterning to the design process was more work than it was worth for most companies if the leakage couldn’t be controlled. So rather than reinvent the process, semiconductor manufacturers added finFETs to control the leakage and called it 16nm or 14nm—except IBM and Intel, of course, which created new processes to go with the finFETs.

What comes next is anyone’s guess, and it may be subject to debate. But it’s obvious that either the naming standards need to be redefined, or the industry needs to move on to some other metric for differentiating one vendor’s chips from another’s, such as compute cycles per watt or some updated version of MIPS/BIPS/Dhrystone/MFLOPS. Node numbers were once a simple measurement of progress, but at this point they’re more confusing than helpful. That should be a sign that something needs to change.


