It’s technologically possible, but the whole industry may have to change.
The chip industry is determined to manufacture semiconductors at 3/2nm — and maybe even beyond — but it’s unlikely those chips will be the complex all-in-one SoCs that have defined advanced electronics over the past decade or so. Instead, they likely will be one of many tiles in a system, each defining a different function, the most important of which will be highly specialized for a particular application.
The SoC, which has dominated smartphones and server chips since just after the millennium, has been disaggregating bit by bit for the past four years. Apple kicked off the trend in 2016 by incorporating fan-out packaging in its iPhone 7, moving some analog functions off-die and into the package. Now, chipmakers and OEMs are considering which digital functions are critical enough to stay on the same die, and which can be moved off and connected using some high-speed interconnect, such as a thick copper bond between two die or a highly specialized bridge.
The current trend points to highly regular, redundant structures with enough margin built in to repair functionality, much as error-correcting code (ECC) is used to repair faulty bits in memory. The problem will be figuring out whether any of that will be good enough to last for a decade or more in some applications, and whether traditional approaches will need to be adjusted to deal with such tiny features.
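To make the ECC analogy concrete, here is a minimal sketch of a Hamming(7,4) code, which pairs four data bits with three parity bits so that any single flipped bit can be located and repaired in the field. It is illustrative only, not a production ECC or redundancy-repair scheme:

```python
# Minimal illustration of ECC-style repair: a Hamming(7,4) code that
# corrects any single flipped bit, much as ECC or spare rows let a
# memory array "repair" itself in the field. Not production code.

def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                  # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                  # covers codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                  # covers codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute parity; a nonzero syndrome points at the flipped bit."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 0 = clean, else 1-based bit position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1           # repair the flipped bit
    return [c[2], c[4], c[5], c[6]]    # extract the data bits

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                       # inject a single-bit "defect"
assert hamming74_correct(codeword) == [1, 0, 1, 1]
```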
One of the issues has to do with a tradeoff around tolerances. If features are too small for existing probes or inspection tools, then chipmakers will have to turn to exotic but much slower inspection tools, or they will have to build in more margin. But as voltages are lowered and performance improves through a variety of methods, such as shorter distances between processors and memory and much more specialized accelerators, that added margin can offset the gains in both power and performance.
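A back-of-the-envelope calculation shows why that tradeoff stings. First-order dynamic power in CMOS scales with CV²f, so any voltage guard band added to cover inspection uncertainty is paid for quadratically in power. The voltages and the 10% guard band below are illustrative assumptions, not published process figures:

```python
# Rough sketch of the margin tradeoff: dynamic power scales with
# C * V^2 * f, so extra voltage guard band erodes a node's power gains.
# All numbers here are illustrative assumptions.

def dynamic_power(cap, volt, freq):
    """First-order CMOS dynamic power model: P = C * V^2 * f."""
    return cap * volt**2 * freq

nominal = dynamic_power(cap=1.0, volt=0.70, freq=1.0)         # no margin
guarded = dynamic_power(cap=1.0, volt=0.70 * 1.10, freq=1.0)  # +10% V margin

print(f"Power penalty from a 10% voltage guard band: "
      f"{(guarded / nominal - 1) * 100:.0f}%")                # ~21%
```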
A second issue involves power, and it comes in two primary flavors. The first is the ability to get power into a device, which is a problem in complex chips where the transistors are so dense that routing enough current to all of them is difficult. The second has to do with heat, generated both by increased resistance and by the circuits in operation. Much of this needs to be assessed through on-chip and off-chip monitors, but devices also need to be inspected for latent defects that could cause problems over time.
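One plausible way that monitor data could be screened, sketched below with a hypothetical ring-oscillator monitor and made-up numbers, is to baseline each sensor and flag readings that drift well outside their historical spread, a common signature of electromigration or gate-oxide wearout:

```python
# Hedged sketch of screening on-chip monitor data for latent defects:
# track each sensor's baseline and flag readings that drift beyond a
# threshold. The monitor values and limits here are hypothetical.

from statistics import mean, stdev

def flag_drift(history, latest, sigma_limit=3.0):
    """Flag a reading more than sigma_limit std-devs from its baseline."""
    baseline, spread = mean(history), stdev(history)
    return abs(latest - baseline) > sigma_limit * max(spread, 1e-9)

# A ring-oscillator monitor whose frequency slowly degrades with aging.
readings = [100.2, 100.1, 100.3, 100.0, 100.2]    # MHz, healthy baseline
assert not flag_drift(readings, 100.25)           # normal variation
assert flag_drift(readings, 97.5)                 # aging/defect suspect
```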
It’s not clear at this point whether those defects will show up with random testing and inspection, even when coupled with data analytics, or whether they will require more regular inspection and testing. But at 3/2nm, and whatever number comes after that (1.5/1nm, or some angstrom measurement), normal testing and inspection may require entirely new equipment and more time per test, which would add significantly to the overall cost of developing chips at these process nodes.
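Dynamic part average testing (DPAT) is one example of the data analytics already paired with test. The sketch below recomputes pass limits per lot using robust statistics, so a die that passes the static spec limit but sits far from its siblings still gets flagged as a latent-defect risk; the data and limits are illustrative:

```python
# Sketch of dynamic part average testing (DPAT): recompute pass limits
# per lot so statistical outliers are screened out even when they pass
# the static spec limit. Data and limits are illustrative assumptions.

from statistics import median

def dpat_outliers(values, k=6.0):
    """Flag dies outside median +/- k * robust-sigma for this lot.

    Uses the median absolute deviation (MAD) so the outlier itself
    does not inflate the estimated spread.
    """
    med = median(values)
    mad = median(abs(v - med) for v in values)
    sigma = 1.4826 * mad               # MAD-to-sigma scaling, normal data
    return [i for i, v in enumerate(values) if abs(v - med) > k * sigma]

# Leakage currents (uA) for one lot: every die passes a static 20 uA
# spec limit, but die 3 is a statistical outlier worth screening out.
iddq = [4.1, 4.3, 3.9, 12.0, 4.2, 4.0, 4.1, 4.4, 3.8, 4.2]
print(dpat_outliers(iddq))             # -> [3]
```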
All of this is, of course, possible. Whether it is economically feasible is unknown. The chip industry has a clear path to develop chips over the next few nodes. But what also isn’t obvious is how much the industry will have to twist and bend to make all of this possible, and that affects the cost.
Does it really matter if logic is at 1nm or 3nm, particularly if the world pushes toward more specialized accelerators, new architectures and tighter hardware-software integration? Time will tell. But the economics of making that decision are increasingly wrapped around reliability and extended lifetimes for chips, and it’s not clear at this point which is the best route to get there.