There’s no shortage of demand for more compute power, but the underlying assumptions have to change.
The basic idea that more transistors are better hasn’t changed in more than half a century. In fact, the overriding theme of a number of semiconductor conferences this month is that we will never have enough compute capability or storage capacity.
In the past, when the number of transistors in a given area actually did double every 18 to 24 months, increasing density per square millimeter for the same basic cost was almost taken for granted. But three things have changed fundamentally, and they are forcing changes in the economic model that underlies Moore’s Law.
First, the customization of designs has made it impossible to turn a profit on volume alone. This is why there is so much buzz around chiplets and different packaging approaches. If you can’t sell 1 billion units of the same design, then at least you can sell billions of chips based on the same basic platform. Intel, AMD, Marvell, TSMC, UMC, and GlobalFoundries are all on board with this approach, and anyone competing at advanced nodes, or in safety- or mission-critical markets, is likely to adopt some elements of it. It’s simply too expensive to develop advanced chips entirely from scratch.
Second, many chips are still running out of real estate. This is particularly true for devices with some intelligence built into them, which includes almost everything these days. Even chips that do nothing more than throw away useless data require a significant amount of intelligence to avoid discarding important data along with it. That, in turn, requires more processing elements and memory, which take up space. So either chips have to be stitched together, or they need to be stacked using some version of advanced packaging.
Third, because these chips are larger and more customized, it takes longer to fully inspect and test them. That requires more sophisticated equipment to achieve the resolution necessary to differentiate a defect that will cause a problem from one that is unlikely to, and to find latent defects that likely will cause issues over time. All of this slows throughput in the fab and makes the whole manufacturing process much more complicated. In fact, one of the big drivers behind chiplets is the ability to compartmentalize these kinds of issues, pushing the limits of scaling on some circuits but not others.
Add up all of these challenges and making chips becomes significantly more expensive. When smartphones hit $1,000, there were questions about whether consumers would buy them. They’re now passing $2,000. But rather than cutting features, OEMs such as Apple and Samsung are now demanding that components function reliably for four years rather than two. The same trends can be seen in computers, cars, and many other applications.
Put simply, if costs cannot be reduced every couple of years, the economic model needs to be spread out over longer periods of time. Making chips that are faster and more customized costs more money, but if the lifecycle of those chips can be extended, then the basic economics don’t change. This isn’t exactly Moore’s Law as it was written, but it’s an interesting market adaptation of the basic concept. Why be confined to just three dimensions when you can add a fourth?
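To make the amortization argument concrete, here is a minimal sketch in Python. The unit costs and lifetimes below are hypothetical placeholders, not figures from the article; the point is only that doubling the cost of a chip while doubling its useful life leaves the cost per year of service unchanged.

```python
# A minimal sketch of the "fourth dimension" argument: amortizing a higher
# chip cost over a longer useful life. All figures are hypothetical, chosen
# only to illustrate the arithmetic.

def cost_per_year(unit_cost: float, lifetime_years: float) -> float:
    """Effective cost of a component per year of service."""
    return unit_cost / lifetime_years

# Hypothetical conventional design: cheaper, but expected to last two years.
legacy = cost_per_year(unit_cost=50.0, lifetime_years=2.0)

# Hypothetical advanced, more customized design: costs twice as much,
# but is required to function reliably for four years.
advanced = cost_per_year(unit_cost=100.0, lifetime_years=4.0)

print(f"legacy:   ${legacy:.2f} per year of service")
print(f"advanced: ${advanced:.2f} per year of service")
# Both come out to $25 per year: the per-year economics hold even though
# the up-front cost doubled, which is the market adaptation described above.
```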
Hi Ed!
I just added one phrase in the last sentence of the fourth paragraph to make it complete:
So either chips have to be stitched together, …or stacked monolithically in 3 dimensions in the wafer fab,… or they need to be stacked using some version of advanced packaging.
Transistors are really dirt cheap now; most of the problems lie in getting a design to market fast. As you say, transistors have not changed much in decades, but that’s also true of the tools and methodology. A good dose of AI in the design flow and things will get a lot cheaper.
This used to be called hybrid integrated circuits.
Thank you, Ed.
The technology-enabling factors are not making a revolutionary change. I proposed a new roadmap 2 years ago, and lost my job.
Not sure who wants the new 22nd-century plans, but the fabs, equipment makers, and technologists are not aligned.
Agreed, transistors are cheap. Now, connecting all those transistors and subsystems is turning into the most important challenge in SoC design. It’s amazing to me how much SoC complexity has increased since I joined Arteris 10 years ago (almost to the day!).