Moore’s Law 2.0

The observations made famous nearly five decades ago are being called into question again, but this time for very different reasons.


By Ed Sperling
Doubling the number of transistors on a piece of silicon every 18 to 24 months used to be synonymous with engineering progress, but as the semiconductor world migrates from processors to SoCs the fundamental basis of Moore’s Law is losing its meaning.

Even its famous timetable is slipping. For one thing, it’s simply too expensive and difficult to migrate from one node to the next every couple years. The investment required in capital equipment alone has thinned out the ranks of foundries operating at advanced nodes, and repeated delays in bringing EUV to market—at least a commercially viable version of the power source—mean that design costs will increase significantly, as well.

Double patterning already is required at 20nm, and multi-patterning will be required at 14nm and beyond. That explains why many companies are hanging back at 28nm, making the most of that node with materials such as fully depleted silicon-on-insulator (FD-SOI) and techniques such as back or body biasing, and why foundries say they are exploring the option of introducing finFETs at that node.

“28nm is a very important node for us,” said Young Sohn, chief strategy officer at Samsung Electronics. “We expect Moore’s Law to shift from (doubling the number of transistors) every two years to every three or four years.”
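The difference a slower cadence makes compounds quickly. A minimal sketch of the arithmetic behind the shift Sohn describes — the starting transistor count and the 12-year horizon are illustrative assumptions, not figures from this article:

```python
# Hedged sketch: projected transistor counts under different doubling
# cadences. Starting count and horizon are illustrative assumptions.

def transistors(start: float, years: float, doubling_period: float) -> float:
    """Transistor count after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

start = 1e9  # assume a 1-billion-transistor SoC today (illustrative)
for period in (2, 3, 4):
    count = transistors(start, 12, period)
    print(f"doubling every {period} yr -> {count / 1e9:.0f}B transistors in 12 yr")
```

Over a dozen years, a two-year cadence yields six doublings (64x), while a four-year cadence yields only three (8x) — the same exponential law, but a very different industry.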

STMicroelectronics already has put a stake in the ground at 28nm with FD-SOI, and Broadcom has said it will not move forward at the same rate as in the past. On top of that, companies have begun jumping nodes, making it difficult to chart their progress.

“It’s a high-level club,” said Wally Rhines, chairman and CEO of Mentor Graphics. “Some companies stopped pushing that edge at all, while others are pushing everything they can—but those companies need to have a single die that does everything or the power will run out of control for them and so will the cost. They have very special requirements, and they’re competing in markets where the stakes are quite high and so are the rewards. But that’s a different world from where most companies play.”

Still, that’s only part of the equation. As more functionality is added onto chips, the contention for memory, buses and the increased RC delay from thinner wires and interconnects means that just doubling the number of transistors doesn’t necessarily have the desired effect anymore. In fact, the amount of heat generated from the resistance in thinner wires is enough to affect everything from signal integrity to the overall device power budget, which extends even beyond the SoC.
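The interconnect pressure follows from the first-order wire-delay model: delay is roughly R·C, and a wire's resistance is R = ρL / (W·H), so shrinking width and thickness drives resistance — and I²R heating — up sharply. A minimal sketch under assumed, illustrative dimensions (the capacitance-per-length figure is an assumption, not process data):

```python
# Hedged sketch: first-order RC wire-delay model. All dimensions and the
# capacitance-per-length figure are illustrative assumptions, not process data.

RHO_CU = 1.7e-8  # bulk copper resistivity, ohm*m (rises further in narrow wires)

def wire_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    """R = rho * L / A for a wire with a rectangular cross-section."""
    return rho * length_m / (width_m * thickness_m)

CAP_PER_M = 2e-10  # assumed wire capacitance, F/m (~0.2 fF/um)

for width_nm in (90, 45, 22):
    w = t = width_nm * 1e-9          # assume a square cross-section
    length = 1e-3                    # 1 mm wire
    r = wire_resistance(length, w, t)
    delay = r * CAP_PER_M * length   # tau = R * C
    print(f"{width_nm:3d}nm-wide wire: R = {r / 1e3:5.1f} kohm, RC = {delay * 1e9:.2f} ns")
```

Halving both width and thickness quadruples resistance for the same length, which is why RC delay and resistive heat dominate at thinner geometries even as transistors keep shrinking.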

“Just because you have more transistors doesn’t necessarily mean you have better performance or lower power,” said Qi Wang, technical marketing group director at Cadence. “The real issues are functionality, computation and power. The transistor is only a means to an end.”

What really matters
The current thinking in the design world is that Moore’s Law is little more than a reference point, and a vague one at that. If more real estate is freed up every couple years—or even every four years—then more can be done in that space. Area is one of the three primary metrics in SoC design, along with power and performance.

“Moore’s Law really is about economics,” said Chris Rowen, CTO of Tensilica. “Just because there are two times as many transistors doesn’t mean anything. What people always have done is transmogrify more transistors into more of what they care about, which is application performance and functionality. You don’t just want more processors. You want more throughput and faster execution, and that’s not a trivial thing to achieve. And with memory systems, you want shorter latency and access to larger memories.”

How all of this is architected depends on the application. A GPU, for example, may focus on out-of-order execution with dynamic scheduling, while an image processor would focus more on wide parallelism, getting more done in a single pass. And adding cache to supplement main storage keeps the data that needs to be recalled most quickly closest to the processor.

Better utilization of space
Freeing up silicon real estate is one of the reasons that so much functionality has migrated from separate chips on a PCB into a single system on chip. The classic example of this is the smart phone SoC, which contains everything from video and audio subsystems to a phone, a GPS and a general-purpose processor for mail and search.

The goal in the future is to add even more functionality, but not just by adding in more transistors.

“The next big target likely will be artificial intelligence,” said Cary Chin, director of marketing for low-power solutions at Synopsys. “If you look at the iPhone, Siri is a first step in that direction. But getting it to really work well will require more compute power, more storage, and much more efficiency. The number of transistors can’t measure that. Back in 1965, when Moore’s Law was first introduced, you could only measure circuits based on the number of transistors. There wasn’t enough processing power then, but over the years we’ve had more and more interaction with our devices. We’re seeing that with digital video today, and more AI is coming. That puts us up against the question of what the human part of this equation is.”

The next 10 years
What’s interesting about Moore’s Law is that it originally was supposed to be an indication of what would happen over the course of a decade. It has been extended every decade since then, and the current view is that, from a technology standpoint, there is a clear path down to at least 6 or 7nm.

It’s uncertain what will actually be manufactured using that technology, how it will be made, or who will be able to afford to design and build those chips. An alternative approach is to stack die to gain some of the same benefits, or simply to improve what’s offered at existing nodes. But one thing is certain: Moore’s Law will be talked about for decades to come, even if the metric itself becomes less directly relevant.