The Future Of Moore’s Law

Experts at the table, part 1: Time between nodes is growing longer and the cost of getting there is rising. Is it time to rethink what we’re measuring or to rev up innovation?

Semiconductor Engineering sat down to discuss the future of Moore’s Law with Jan Rabaey, Donald O. Pederson distinguished professor at UC Berkeley; Lucio Lanza, managing director of Lanza techVentures; Subramani Kengeri, vice president of advanced technology architecture at GlobalFoundries; Charlie Cheng, CEO of Kilopass Technology; Mike Gianfagna, vice president of marketing at eSilicon; and Ron Moore, vice president of marketing for the physical IP division at ARM. What follows are excerpts of that conversation.

SE: It’s harder to see an economic benefit from scaling in accordance with Moore’s Law after 20nm. What changes as a result of that?

Lanza: Gordon Moore said it would not end as long as you can see two more steps ahead. You never can see 10. I wouldn’t say Moore’s Law is ending, but the challenge of getting to the next node is more difficult to meet. If we assume that’s the case, the time between one node and the next might increase. When innovation slows down, full utilization happens. That can’t continue, but it helps in the short term. So that’s one change. But another way of looking at it is that, in the past, there were always a number of variables—density, speed, number of transistors. What has stayed constant is that the semiconductor content was impacted by those variables, and now that impact is being extended. Every time we move to the next node there is another challenge, and there is always a design challenge. That design challenge is going to be there.

Moore: The only thing that’s hurting us is the time. If you look at Moore’s Law as an observation rather than a mandate, we will get there. Our foundry partners are taking a little bit longer. They’ll put their processes out there, but then they’ll do midlife upgrades. But the value proposition is there, and it takes that second wave to get to the right cost point with the benefits of scaling, but eventually it gets there. So Moore’s Law might not be satisfying tier one or the early adopters, but it’s certainly getting there for the industry.

Kengeri: The original law was that every two years you would be able to reduce the cost by roughly 50%. The innovations in the last few generations have been great, and people have come to expect that while the area will decrease by 50%, maybe the cost will increase 20% to get there, with a net savings of 30%. That’s great, and that’s the way it has been for a long time. It definitely has slowed down, though. If you don’t care about cost, you can always get there. But the problem is, can you get the die shrink at the right cost? That is what is slowing down a little bit. And there is a huge impact because of that. Most people aren’t seeing it yet. It takes time. Moore’s Law percolates into every part of the value chain. It takes time for companies to understand the impact and for the analysts to calculate their estimates. With some of the advanced chips, what you don’t always see is that the chips are being sent back to the equipment makers for better tooling and metrology. That’s a cycle. All those things get slowed down. We look at it as PPCS—power, performance, cost and schedule. We’ve added the S recently. It has to happen in a timeline. Cost is what we’re talking about now, and that is slowing down and will have a lot more impact.
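As a rough, back-of-the-envelope reading of the numbers Kengeri quotes (the 50%, 20% and 30% figures are his illustrative examples, not foundry data), the cost side of the law can be sketched as a cost-per-function calculation:

def cost_per_function(area_scale, wafer_cost_scale):
    # area_scale: die area for the same function at the new node, relative
    #             to the old node (0.5 means a 50% shrink)
    # wafer_cost_scale: processed-wafer cost per unit area at the new node,
    #             relative to the old node (1.2 means a 20% cost adder)
    return area_scale * wafer_cost_scale

baseline = cost_per_function(1.0, 1.0)   # previous node
new_node = cost_per_function(0.5, 1.2)   # 50% shrink with a 20% cost adder
print(new_node)                          # 0.6 -> roughly 40% cheaper per function

Whether you subtract the percentages (50 - 20 = 30) or multiply the scale factors (0.5 x 1.2 = 0.6, about a 40% saving), the direction is the same: the larger the cost adder per node, the less of the shrink actually reaches the bottom line.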

Cheng: Whether you want to characterize it as Moore’s Law being dead or slowing down, there are a number of reasons for that. With the cost of the fab escalating, risk is very high, and those working on the manufacturing side are no longer interested in risk. As a result, we see a lot of material science and equipment innovations, but those are hitting the wall, too. There isn’t as much innovation in semiconductors, either. CMOS transistors have not changed. SRAM has not changed. The way all circuits are derived above that has not changed, because it costs too much for the fab managers to say they’re going to spend $7 billion on one dependency that has not been done before. SRAM has not changed in 30 years, and everything else has to change to make up for Moore’s Law.

Rabaey: There are a couple of factors that are involved with projections of Moore’s Law. One is how many transistors you can put on a die. That will change all the time. It will keep on going. We will see a lot more transistors. You can stack them up in 3D. You have more transistors, and you’ll be able to keep them growing at the same rate. The other challenge is that the basic devices we’re playing around with are not behaving as well anymore. But that was never part of Moore’s Law. That was Dennard scaling, which said you make the device smaller to get there. That has slowed down over the past generations. This is where innovation has to happen. It has to allow more functions, whatever that means over time. But no matter what, we will keep rolling forward. We were dominated by one industry, which was the processor industry. But there are so many devices out there these days that have different needs, and this is where the innovation will go and where we will keep scaling function. You’re going to get more functionality for the same cost.
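For readers who want the distinction Rabaey draws spelled out: classical Dennard scaling says that if you shrink a device’s linear dimensions and supply voltage by a factor k, switching gets faster and power density stays roughly constant. A minimal sketch of those textbook relations (the idealized model, which is exactly what has stopped holding):

def dennard_scale(k):
    # Ideal (textbook) Dennard scaling for a linear shrink factor k > 1.
    return {
        "dimensions":     1 / k,      # gate length, width, oxide thickness
        "voltage":        1 / k,      # supply voltage tracks dimensions
        "capacitance":    1 / k,
        "gate_delay":     1 / k,      # so frequency can rise by k
        "power_per_gate": 1 / k**2,   # P ~ C * V^2 * f = (1/k)(1/k^2)(k)
        "area_per_gate":  1 / k**2,
        "power_density":  1.0,        # power / area unchanged -- the key promise
    }

print(dennard_scale(1.4))   # roughly one node step (~0.7x linear shrink)

Once voltage stops scaling with dimensions, power density climbs with every shrink, which is why, as Rabaey suggests, the innovation has to move from the device itself to what you do with the transistors.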

Gianfagna: Our point of view is there are at least a couple more nodes out there. Moore’s Law has been about scaling of geometry and complexity at equivalent cost. Given photolithographic semiconductor manufacturing, does it have to be the same photolithographic process? If you look at progress with EUV, maybe it doesn’t. But if you think about carbon nanotubes and all kinds of other technologies for building transistors, we can continue to deliver the same complexity. It may not be in the same form or with the same materials, and it’s going to be really expensive, but if you don’t make the shift and you hit the wall, that’s worse. Another thing to think about: is raw performance really what matters anymore? There’s another dimension here, which is parallelism. That’s far more important. How do we deal with parallelism and throughput across massively parallel architectures? That’s a whole new set of innovations that we’re going to see.
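One standard way to frame the parallelism question Gianfagna raises (not something he cites, just the classic reference point) is Amdahl’s law: the speedup from n parallel units is capped by whatever fraction of the work stays serial.

def amdahl_speedup(p, n):
    # Ideal speedup on n processors when a fraction p of the work (0..1)
    # can be parallelized and the rest stays serial.
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"parallel fraction {p:.2f}: {amdahl_speedup(p, 1024):.1f}x on 1024 cores")
# 0.50 -> ~2x, 0.90 -> ~10x, 0.99 -> ~91x

Even with 1,024 cores, a 10% serial fraction caps the speedup near 10x, which is why the question is as much about architectures and algorithms as about raw transistor counts.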

SE: That’s an issue that’s been around since the 1960s. So far, no one has solved it other than for databases and embarrassingly parallel applications. Are we now at a point where we can do it?

Rabaey: There are a lot of applications in the data center being done in parallel today. The real problem is that architects go in kicking and screaming and say it can’t be done, but it can be done. There is massive concurrency out there and you need to go for it.

Kengeri: Area shrink has been monetized and it’s very visible. But can you monetize power reduction and performance improvement, or cost per function or cost per user experience? If you can, then it will continue. If the whole ecosystem can monetize those, then you have more leverage.


