Scaling At The Angstrom Level

Shrinking features will continue, but not everywhere and not all the time.
It now appears likely that 2nm will happen, and possibly the next node or two beyond that. What isn’t clear is what those chips will be used for, by whom, and what they ultimately will look like.

The uncertainty isn’t about the technical challenges. The semiconductor industry understands the implications of every step of the manufacturing process down to the sub-nanometer level, including how to create new materials that can survive only within a narrow range of temperatures, or disappear entirely without a trace.

The real problem is cost, and how economies of scale will play out in the future. From the standpoint of being able to design and manufacture chips that work, the industry appears to have some pretty good options for pushing on to the next three or four nodes. From the standpoint of commercial viability, there are some looming unanswered questions.

At the very least, logic chips or chiplets with very regular structures and extremely high density will be possible, and they likely will be required for heavy-compute applications involving AI, machine learning, and deep learning. However, it’s less likely that we will see complex SoCs at 1nm with a mix of processing elements such as analog and security functions, various on-die memory blocks, and multiple I/O configurations.

What’s changed significantly over the past several process geometries is that there are now many more options for improving power and performance than just scaling, and many of those options are silicon-proven. There are at least a half dozen mainstream ways to package chips/chiplets together, with more on the way, and it’s not hard to envision a world in which chip vendors customize solutions relatively quickly based upon price, power, performance, and even regional standards. So while a chip developed for a high-computation server may require the latest 2nm logic density, it may sit alongside a 16nm SerDes, a 28nm power module, and a 40nm security die.
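To make the mixed-node idea concrete, here is a minimal sketch in Python of describing such a heterogeneous package as data. The names and structure are hypothetical, not any real packaging toolchain; the node assignments mirror the example above.

```python
from dataclasses import dataclass

@dataclass
class Die:
    function: str  # what the die does
    node_nm: int   # process node it is fabricated on

# Hypothetical package: 2nm compute logic sitting alongside
# SerDes, power, and security dies on older, proven nodes.
package = [
    Die("compute logic", 2),
    Die("SerDes", 16),
    Die("power module", 28),
    Die("security", 40),
]

# Only the compute die needs the leading-edge node; the rest
# can stay on cheaper, well-characterized processes.
leading_edge = [d.function for d in package if d.node_nm <= 2]
print(leading_edge)  # ['compute logic']
```

A vendor could swap individual entries in such a description to hit a different price/power/performance point without redesigning the whole system.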

The key in all of this is figuring out where the bottlenecks are for performance, and then addressing them individually using the best tools available in the most cost-effective manner. A system will only run as fast as the slowest component in that system, whether that’s an I/O, a memory interface, or an overheated logic block that needs to be shut down before it goes into thermal runaway. In some cases, it may require an entirely different architecture in which processing is done in or closer to memory. In other cases it may be more hardware-software co-design, with the entire design optimized for a system or a package. Those decisions can be made much closer to production time using a chiplet/dielet/tile approach, provided there is a consistent way of characterizing these devices and hooking them together.
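The slowest-component rule can be shown with a toy calculation. The stage names and throughput figures below are purely illustrative, not measurements of any real system:

```python
# Toy bottleneck analysis: end-to-end throughput is capped by the
# slowest stage, so speeding up any other stage buys nothing.
stages = {
    "logic": 500,             # operations per unit time (illustrative)
    "memory_interface": 120,
    "io": 300,
}

# The system runs no faster than its slowest stage.
bottleneck = min(stages, key=stages.get)
system_throughput = stages[bottleneck]
print(bottleneck, system_throughput)  # memory_interface 120

# Doubling logic speed leaves system throughput unchanged;
# only widening the memory interface would help.
stages["logic"] *= 2
assert min(stages.values()) == 120
```

This is why the article argues for attacking each bottleneck individually, whether with a faster interface, a different memory architecture, or a different node, rather than scaling everything uniformly.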

This extends well beyond the chip, too. It’s a system-level approach that can include everything from the PCB to the communication infrastructure between chips or between servers in a rack. The difference is there is no longer just one way to solve these problems. There are now multiple options, and many of them are more flexible and much more granular than what was available in the past. So while scaling will continue, it’s now just one of a number of different options necessary to create the optimal solution for a particular application.

Bottom line: The magnitude of this shift should not be underestimated. It’s a potential game-changer for the entire tech industry.
