Why Scaling Must Continue

But calling this an extension of Moore’s Law is a mistake.


The entire semiconductor industry has come to the realization that the economics of scaling logic are gone. By any metric—price per transistor, price per watt, price per unit area of silicon—shrinking is no longer in the plus column. So why continue?

The answer is more complicated than it first appears. This isn’t just about inertia and continuing to miniaturize what was proven in the past. That formula doesn’t work anymore, and shrinking processors and memory purely to increase the density of those devices is a bad idea. Dynamic power density climbs, leakage and resistance cause thermal effects, thinner dielectrics add noise, and the list goes on. And from an economic standpoint, the cost of manufacturing those devices no longer buys enough of a power/performance/area (PPA) improvement to justify the investment.
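To see why density alone is no longer free, consider a rough back-of-the-envelope model. Dynamic switching power is roughly α·C·V²·f per transistor, and with supply voltage now barely dropping from node to node, packing roughly twice as many transistors into the same area pushes power per square millimeter up rather than holding it flat the way classic Dennard scaling once did. The sketch below is illustrative only; the scaling factors are assumptions, not measured node data.

```python
# Rough, illustrative model of dynamic power density across a node shrink.
# All scaling factors below are assumptions for illustration, not foundry data.

def dynamic_power(alpha, cap, volts, freq):
    """Dynamic switching power per transistor: P = alpha * C * V^2 * f (relative units)."""
    return alpha * cap * volts**2 * freq

# Hypothetical baseline node
alpha, cap, volts, freq = 0.1, 1.0, 0.8, 3.0
density = 1.0                                  # relative transistors per mm^2
k = 1.4                                        # linear shrink factor for one node

p_base = dynamic_power(alpha, cap, volts, freq) * density

# Ideal Dennard shrink: C and V scale by 1/k, f by k, density by k^2
p_dennard = dynamic_power(alpha, cap / k, volts / k, freq * k) * (density * k**2)

# Post-Dennard shrink: C still drops, but V barely moves and f stays roughly flat
p_modern = dynamic_power(alpha, cap / k, volts * 0.97, freq) * (density * k**2)

print(f"baseline power density:  {p_base:.3f}")
print(f"ideal Dennard shrink:    {p_dennard:.3f}")   # roughly flat
print(f"post-Dennard shrink:     {p_modern:.3f}")    # noticeably higher
```

The absolute numbers are meaningless; the trend is the point. When voltage stops scaling, the same density gain shows up as extra watts per square millimeter that have to be cooled.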

But there is an important benefit to continuing to shrink features. It frees up room on a chip for more, and different, processing elements, additional memory types, and more heterogeneous architectures. The growth in data, and the need to pre-process at least some of that data locally (what “local” means will vary by market), requires much greater throughput and faster processing of all data types.

The irony here is that scaling used to be viewed as an alternative to architectural and microarchitectural innovation. Now it is viewed as an enabler. So while architects were left on the sidelines of scaling prior to 7nm, they are now driving it, and for good reason. Some processors handle certain data types better than others, some work better in arrays than as individual processing elements, and it’s up to the architects to make all of it work together.

Clusters of GPUs can process streaming data better than a single GPU, but they are too power-hungry to use everywhere. Clusters of DSPs can do the same for sound, but they’re not very good at classic number crunching. And then there are embedded FPGAs for programmability and security, TPUs for accelerating specific algorithms, and possibly some microcontrollers thrown into the mix. MCUs require less power than a CPU, but a CPU is better for coordinating a variety of functions, such as what gets processed where, what gets prioritized, and how all of this gets orchestrated in time, which is often measured in nanoseconds.
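As a rough illustration of the kind of coordination the CPU ends up doing, the sketch below routes incoming work to a preferred class of accelerator based on data type and priority, and falls back to the CPU when nothing else fits. The device names, task fields, and routing rules are hypothetical, chosen only to make the division of labor concrete.

```python
# Minimal sketch of a host CPU dispatching work to heterogeneous accelerators.
# Device classes, task fields, and routing rules are hypothetical examples.
from dataclasses import dataclass, field
from queue import PriorityQueue

# Assumed first-choice engine for each data type
PREFERRED = {
    "image_stream": "gpu_cluster",   # streaming pixel data
    "audio":        "dsp_cluster",   # signal processing
    "inference":    "tpu",           # matrix-heavy ML kernels
    "control":      "mcu",           # low-power housekeeping
}

@dataclass(order=True)
class Task:
    priority: int                    # lower value = more urgent
    name: str = field(compare=False)
    data_type: str = field(compare=False)

def dispatch(tasks, available):
    """Assign each task to its preferred engine if available, else fall back to the CPU."""
    queue = PriorityQueue()
    for t in tasks:
        queue.put(t)
    schedule = []
    while not queue.empty():
        task = queue.get()
        engine = PREFERRED.get(task.data_type, "cpu")
        if engine not in available:
            engine = "cpu"           # the CPU coordinates and absorbs whatever is left over
        schedule.append((task.name, engine))
    return schedule

if __name__ == "__main__":
    tasks = [
        Task(0, "brake_camera_frame", "image_stream"),
        Task(2, "cabin_voice_command", "audio"),
        Task(1, "object_classifier", "inference"),
        Task(3, "battery_telemetry", "control"),
    ]
    print(dispatch(tasks, available={"gpu_cluster", "dsp_cluster", "tpu", "mcu"}))
```

In a real system the decisions also weigh power budgets, latency deadlines, and data locality, but the shape of the problem (many specialized engines, one coordinator) is the same.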

At 28nm, all of this had to be done across a PCB. At 3nm, it can happen on a single chip, or with multiple chips in a package. There can even be printed sensors on the die or the package, and systems-in-package to accommodate additional functionality.

The problem is that all of this requires physical space, and this is evident in many of the new AI/ML/DL chips being developed today. While performance can be ratcheted up by orders of magnitude with new architectures that pair small memories with processing elements, most of these chips are reticle size or larger. In some cases, chips are being stitched together, which creates its own set of issues.
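One way to see why these architectures pair small memories with processing elements, and why that eats so much area, is to count data movement. The toy model below compares pulling every operand from distant memory against serving most accesses from a processing element’s local scratchpad. The energy-per-access figures are illustrative assumptions, not measurements of any particular chip.

```python
# Toy data-movement model for a tiled AI accelerator: processing elements (PEs)
# with small local SRAM vs. pulling every operand from distant memory.
# Energy-per-access figures below are illustrative assumptions only.

E_LOCAL_SRAM = 1.0     # relative energy per operand fetched from a PE's local memory
E_FAR_MEMORY = 50.0    # relative energy per operand fetched across the chip or off-die

def fetch_energy(num_macs, operands_per_mac, local_hit_rate):
    """Rough operand-fetch energy for a block of multiply-accumulates."""
    fetches = num_macs * operands_per_mac
    local = fetches * local_hit_rate * E_LOCAL_SRAM
    far = fetches * (1 - local_hit_rate) * E_FAR_MEMORY
    return local + far

macs = 1_000_000
print("all operands from far memory:", fetch_energy(macs, 3, local_hit_rate=0.0))
print("95% served by local SRAM:    ", fetch_energy(macs, 3, local_hit_rate=0.95))
```

The ratio, not the absolute numbers, is the point: keeping operands next to the compute is where much of the claimed efficiency comes from, and the local SRAM it requires is a big part of why these chips approach or exceed reticle size.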

Scaling and various advanced packaging approaches are two solutions to the same problem, namely that the amount of data that needs to be processed is skyrocketing, and the winners will be those that can process that data fastest. Where the two approaches diverge is in how they trade off time to market and flexibility against the power and speed of a single ASIC. Regardless of how they get there, both provide a way to increase the density of digital circuitry.

There is a vast gulf between this new mandate for heterogeneous integration and the original intent of scaling as defined by Moore’s Law. It involves a complex mix of functionality from different vendors, and whether it’s on a single die or on multiple die, this is closer to razing a neighborhood and building a smart city than trying to partition existing houses to add smaller rooms.

3 comments

Ram4 says:

Great article

Hong says:

If scaling can make money, it will continue; the market will drive that to happen.
If scaling can’t deliver a return on investment, who will do it?
Financial decisions will stop scaling before physics does.

Tanj Bennett says:

A good topic. It is not so much that scaling has enabled architectural innovation as that the differences in scaling require new approaches. Memory is not scaling nearly as fast as logic, and the end of Dennard scaling means the new, smaller devices come with higher heat flux per square centimeter. The implication is a search for more efficient, algorithm-specific architectures, especially ones that integrate data movement more efficiently into the pipeline. The classic CPU + cache hierarchy gets only a few percent better for each doubling of cache size, so that is a poor use of resources, with no easy way out. To really create valuable speedups, someone needs to design a logic+memory block in an FPGA or ASIC.

This new scaling requires innovators upstream in figuring out the problems worth solving and the algorithms to solve them, and downstream in the power distribution, packaging for signal density, and packaging for heat removal. It is a vertical integration.
