The biggest challenges in the history of semiconductors have been solved. Now what?
Never before in the long and often turbulent history of the semiconductor industry have so many problems presented themselves at each new process node. And never before have there been so many well-tested options for resolving them.
After possibly the most intensive, extensive and expensive research this industry has ever witnessed, Moore’s Law is now technologically assured down to at least 10nm. After that, we will enter the quantum world, in which measurements will be in Angstroms, techniques such as spintronics and electron tunneling will become the norm, and devices such as carbon nanotubes will be an integral part of transistors.
But for the foreseeable future—and two process nodes is a healthy margin in this industry—technology is available to solve the worst problems imaginable. Even without EUV, there is double, triple and quadruple patterning. Leakage current is under control with FinFETs and SOI. And modeling has become advanced enough—and banged around enough—that designers understand what can be done using different types of models, how they go together, and where the limitations are.
This is expensive stuff, of course, and it will drive a split in the industry—but not necessarily between the haves and the have-nots. There will be some companies marching forward on a single die without blinking—memory and processor makers, for instance—and some companies that will begin stacking chips developed at older process nodes on top of these new chips. There also will be some companies leveraging new advances at older process nodes.
It remains to be seen which approach offers a better ROI and in which markets. SoCs are now at the core of almost all high-volume devices, replacing what used to be a processor on a PCB. New packaging options basically shift everything from one chip back to multiple chips, but with much faster throughput and greater energy efficiency than if it were all on a single congested planar device. The big advantage of stacking is that it allows much more reuse of things like analog IP, and it extends the life of all IP. If it's difficult to develop analog IP at 28nm, just imagine what it will be like at 10nm.
The biggest problem is on the verification side. The now-common practice of pushing what used to be done in hardware into software, because it's too hard to develop in hardware within a given market window, is at best a Band-Aid solution. Already software teams are complaining that they're missing deadlines because they're forced to find deeply embedded bugs in the hardware that are affecting their code. That slows progress in getting chips out the door and puts buggy devices in the hands of consumers. At some point the choice won't be which features need 99% coverage and which need 80% coverage. It will be which chip is likely to work and which one has failed.
Architectures also will have to be adjusted. Increased density causes routing congestion and physical effects such as noise, heat and electromigration, and it can degrade latency, throughput and reliability. The next few nodes will require more advanced skills than ever before in system architecture, system-level design and hardware-software co-design. And they will require more cooperation among more players in the ecosystem through a virtual IDM (integrated device manufacturer) model, if not a full IDM approach.
But the good news is that all of this can be done using research now underway and advances in tools that build logically on existing ones rather than requiring orthogonal new approaches, and there is enough demand for electronics on the horizon to pay for these kinds of advances for many years to come. So even though economic prospects in the short term may look rather mixed, the long-term picture is extremely upbeat for those willing to take on new and very interesting challenges.
—Ed Sperling