Rethinking The Scaling Mantra

New technologies and approaches will radically alter how we view scaling in the future.

What makes a new chip better than a previous version, or a competitor’s version, has been changing for some time. In most cases the key metrics are still performance and power, but what works for one application or use case increasingly differs from what works for another.

Advancements are rarely tied just to process nodes these days. Even the most die-hard proponents of Moore’s Law recognize that the benefits of planar device scaling have been dwindling since 28nm. Intel, AMD and Marvell have shifted their focus to chiplets and high-speed die-to-die interconnects, and all of the major foundries and OSATs have embraced multi-die, multi-dimensional packaging as the path forward.

Unlike in the past, when 30% to 50% PPA improvements were possible at each new node, scaling beyond 7nm provides only a 10% to 20% improvement in performance and power, and even those gains are becoming more expensive to achieve. The price tag for incremental engineering time, IP, and EDA tools at 5nm or 3nm can run as high as a couple hundred million dollars. Manufacturing those chips also requires billions of dollars for new process technology, DFM tools to reverse-engineer what is needed to achieve sufficient yield, and new equipment to keep wafers moving through the fab at sufficient speed.

From a technology standpoint, digital logic will scale beyond 1nm if there is sufficient demand for it. The big question is whether that demand will exist.

Part of the reason for concern involves the splintering of new and existing end markets, which increasingly demand customized solutions. Consumers would rather buy a chip or package that provides 100 times better performance for their particular application than one that has more transistors but worse performance and power. This is why standard benchmarks increasingly don’t apply. On top of that, it’s not always possible to deliver enough power to extremely densely packed logic to utilize all of the transistors on a die, and the continued thinning of materials at Metal0 and Metal1 makes digital structures more susceptible to the kind of noise that analog engineers have been wrestling with for decades.

Fortunately, there are plenty of alternatives in place — possibly too many. But going forward, it’s clear that the real improvements in power and performance will be a combination of new architectures, more hardware-software co-design, and specialized accelerators with high-speed interconnects. And all of that will be tied to variable levels of precision, and weighed against such requirements as endurance, security, resilience, and the ability to bring customized solutions to market quickly.

The commercial rollout of quantum technology will complicate this even further. With quantum computers, the critical metrics are the number of qubits, the lifetime and quality of those qubits, and the accuracy of the computational results. None of this technology is being manufactured at leading-edge nodes.

The question now is whether consumers will begin drifting away from metrics with increasingly narrow applicability, such as the number of general-purpose cores in a processor, the number of transistors packed onto a die, or the process geometry at which a chip was developed. But as the chip industry moves past device scaling as the key differentiator, it also has to figure out what will motivate the next wave of buyers. After 50-plus years of selling a single idea, this should be very interesting to watch.



