Experts at the table, part 1: Where Moore’s Law will continue to work, where it won’t, and what the likely result will be.
Semiconductor Engineering sat down to discuss the value of feature shrinks and what comes next with Steve Eplett, design technology and automation manager at Open-Silicon; Patrick Soheili, vice president and general manager of IP Solutions at eSilicon; Brandon Wang, engineering group director at Cadence; John Ferguson, product manager for DRC applications at Mentor Graphics; and Kevin Kranen, director of strategic alliances at Synopsys. What follows are excerpts of that conversation.
SE: Where do we start encountering serious issues with Moore’s Law?
Eplett: Nobody is going to say Moore’s Law is over. It’s never going to break, but it is going to keep bending. The digital abstraction is getting more and more complicated. Our customers are starting to talk about NRE, which is really constraining design flows. NRE is a measure of the effort necessary to do a design, plus the cost of the mask set and the IP. Some applications, particularly with 2.5D, are aimed at cutting down the NRE and mask costs. Performance is not a bottleneck. Power is. But it’s really NRE that is defining the business in the future.
Soheili: I agree, but the problem is bigger than that. NRE is less than 10% of the total cost. If you optimize NRE down to 8% or 7%, it still doesn’t address the whole need, which includes software and verification and all the layers you have to put in place. It’s integration and the cost of executing the chip or module. It’s how you assemble a system and how you manage the whole infrastructure. All of those things come into play. Depending on the application, the customer, the budget, how much stake you have in the end product, and how much innovation you want to put in to solve these issues, Moore’s Law may have ended for a lot of people. And then there are others for whom it is never going to stop or even slow down. But there have to be other ways of executing to solve these issues.
SE: These are mostly the mobile, high-volume chips?
Soheili: Yes, plus extreme networking. You might see less of an issue from a bandwidth perspective, but it’s both the power and the bandwidth being addressed at the core.
Ferguson: I came in around 0.25 micron. At every node people said, ‘The next one is going to be hard, but I think I can do it. And after that, no way…we’re done.’ We keep on marching. We’re working on 10nm, which is in development today, and 7nm will be a challenge but we’ll do it. The question is what else it will bring. Historically it’s not just about shrinking. We’ve seen other improvements along the way, such as larger wafer sizes and SOI. But as we go to these new processes, are we getting more than just a shrink? If we’re not getting the same kind of improvements in power and cost reduction, then the issue will be how many people will go to these new nodes. It’s not economically feasible for everyone. Certain people will need them and always will, but we’re not going to see a wholesale shrinking of features across the industry.
Kranen: There’s a line of sight for the most advanced customers to 7nm and maybe to 5nm. That’s really nomenclature for performance and power, as opposed to the actual geometries themselves. But that also raises the question of whether Moore’s Law is a technology statement or an economic statement. From a technology perspective, we can get there. Maybe we’ll go to carbon nanotubes and silicon wires, but that will depend on how much demand there is for those features. On the economic side, there are some natural constituencies for continuing the curve. There are the low-power and super-high-volume markets like mobile, as well as the big iron guys who want to go for integration and networking. The big question is whether those constituencies will continue to drive the ball forward. There are a few more nodes at least. There also will be sweet spots for other packaging, but 95% of the market will continue to ride basic silicon as opposed to other unique package strategies.
Wang: It’s not just about cost. It’s also about power. The NRE increase will be manageable as long as the volume remains steady, which is the premise for ‘the bigger get bigger.’ But what about the rest of this industry—not just the top five? That’s where things get interesting. Some are still looking for low power. Some are still looking for complicated chips. Networking is a good example. There’s also a new segment called IoT. Does that require an SoC, or is it better off with a simpler alternative? We see a lot of possibilities on the horizon and different ways of scaling.
SE: One of the things that made Moore’s Law affordable was the ability to build derivatives. That becomes harder at advanced nodes, and the economics are different, right?
Eplett: We’re looking at die re-use. We really do see a business where you have a die that wasn’t tuned exactly for one application. When I talk about NRE, it’s what it costs you to get to the first die. We are trying to come up with die that are not necessarily targeted at specific customers, but which can be used across a few customers. It’s not a package.
Soheili: We do a lot of IP development at 16nm and 14nm and beyond, and the process changes are significant. Even at 28nm, going from HPM to HPC is significant. We’re going to see flavors of 14nm, too, from LPE to LPP, and there are variations out of ST and Samsung. These are sophisticated changes, and with all the layout and circuit design and tightening of the rules, it’s more complicated than it used to be. When you get to our main business, which is ASICs, those ECOs or derivatives are even more significant because of closing timing, doing layout, and dealing with all the integrity issues and power domains. Changes are no longer as easy as they used to be, and they have a big impact on schedule and cost. The implications are tremendous. I agree that getting to a derivative is not easy, but there are ways around that. If you design a chip properly, think about the next generation of architectures, and try to future-proof it a little bit, that forces companies to think through a couple more generations of derivatives where they might not have done that in the past.
Kranen: There are two different classes of derivatives. There are derivatives in the same node. With those, you can add some kind of ECO capabilities. You’ve got the same IP. You don’t have to port new IP. But once you cross that process line, even if it’s a minor process change, then you really have to do a rip up, and that almost feels like a new chip. That’s the real challenge. Can you stay within the exact same process or do you have to migrate it to get what you want?
Ferguson: One of the big differences, especially starting at 20nm and moving forward, is that there is far less commonality across foundry offerings. Before that, essentially it was the same technology everywhere. At 20nm, the process approaches the foundries use are vastly different. You can’t just say you’re going to take this part and move it to a different foundry. You have to redesign large chunks of it.
Kranen: IP plays a big role. If you have USB or HDMI, you have a core set of IP. It may not be there in another process.
Eplett: The IP defines which process it has to go on.