Moore’s Law At 50

It’s time for the semiconductor industry to rethink what it’s trying to accomplish and for whom.


Moore’s Law turned 50 this week…but not just because of Gordon Moore. He observed that the number of transistors crammed onto a piece of silicon was doubling every 18 to 24 months and predicted that would continue to be the case. He was right, but it took many thousands of engineers, creating the methodologies and tools to automate design and the equipment to manufacture complex chips, to turn that observation into reality. Otherwise we might still be using single-core chips developed at 1 micron.
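The cadence Moore described is simple compound doubling, which a few lines of arithmetic make concrete. The sketch below is illustrative only: the starting count (roughly that of Intel's 4004 in 1971) and the fixed 24-month period are assumptions for the example, not figures from this article.

```python
# Moore's Law as compound doubling -- an illustrative sketch.
# Assumptions: ~2,300 transistors in 1971 (approx. Intel 4004)
# and a fixed 24-month doubling period.

def transistors(start_count: int, start_year: int, year: int,
                doubling_months: int = 24) -> float:
    """Projected transistor count after repeated doubling."""
    months = (year - start_year) * 12
    return start_count * 2 ** (months / doubling_months)

# Ten doublings over 20 years multiply the count by 1,024.
print(f"{transistors(2300, 1971, 1991):,.0f}")  # prints "2,355,200"
```

The striking part is the exponent: twenty years is only ten doublings, yet the count grows by three orders of magnitude, which is why even a modest slip in the cadence compounds so quickly.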

One interesting facet of Moore’s Law is that it has been interpreted as both an economic and a technological statement. It’s not one or the other. It’s both. If you’re doubling the number of transistors in a fixed time period, you need a good reason to change the device structures as well as the number of devices. Both parts need to work for Moore’s Law to remain viable, and at the moment only the technology side is working.

After the 28nm node, the cost per gate no longer decreases at each new process geometry. And while the number of transistors does indeed double with finFETs at 16/14nm, it’s hard to say that’s really the next process node after 28nm. Leakage was so bad at 20nm that the semiconductor industry—in this case a collaboration of foundries, EDA companies, equipment makers and chipmakers—had to change the planar transistor structure to finFETs to control leakage. So whether that is really 20nm, 16/14nm, or some hybrid is questionable. And whether that is in keeping with Moore’s Law is a matter of debate.

Putting that bit of confusion aside, though, the next question is whether Moore’s Law can continue under its original premise of reducing costs and increasing density. This all appeared straightforward back at 45nm, when EUV lithography was anointed the obvious successor to 193nm immersion. And it was equally clear when interconnects were big enough for electrons to flow through them easily, without crashing into each other or overheating wires too thin to handle them. New materials will help. So will some next-generation lithography choices, improvements in tools, methodology changes as defined by the whole shift-left mantra, and a focus on more up-front choices.

Engineers by nature like challenges, and staying on the Moore’s Law road map has proven one of the biggest collective engineering challenges in history, with one of the best-defined end goals. But sooner or later, even engineers have to come to grips with the fact that there are fewer atoms to work with and that quantum effects will begin to dominate designs in dense, very tight (as in several nanometers) spaces. There will still be progress, but it will be made in multiple other ways than just shrinking features. When that happens—and some argue it already is happening—the PPA (power, performance, area) metrics will have to change.

Take cost, for instance, which is the area component of PPA. While the numbers appear incontrovertible, they need to be put in context. At advanced process nodes they need to be viewed from a much higher level of abstraction. It’s not just about NRE or the individual silos within a chipmaker. It’s about design through manufacturing and out into the field, where reliability is a cost that can be measured over time. Automakers, for example, will pay an extra 10 cents per chip if they don’t have to replace it in three years or force customers to undergo a firmware update at the dealership. Data centers will pay more for servers if there is an offset in thermal/cooling/power costs and improved throughput. And industrial companies will pay more for chips that can be proven to save money. Moore’s Law is all about value, but that value hasn’t always extended as far as the end customer.

Looking at the other part of the PPA equation, performance and power used to be a rather neat tradeoff prior to 65nm. That’s no longer the case. From wearable devices to data centers, concerns vary greatly depending upon use models, applications, battery size, cooling/energy costs, what needs to be processed and how quickly—and they sometimes vary within the same organization from one minute to the next. An online financial trading company may care greatly about the cost of cooling a data center, but when the stock market is gyrating wildly it’s far more concerned about performance.

Rather than just shrinking features every couple of years, the semiconductor industry needs to take a step back and figure out what it’s trying to accomplish, for whom it’s trying to accomplish that, and what is the best way to get there. In some cases, notably in processors (APU, CPU, GPU, MCU, FPGA), it makes enormous sense to keep increasing density. For other applications, maybe not.

As with complex SoCs, no single approach works anymore. And that may be the biggest change to affect Moore’s Law in its 50-year history—for whom is it still relevant and why?
