Experts At The Table: Obstacles In Low-Power Design

Second of three parts: Stacked die; why and when companies move to the next process node; node skipping; who goes first; the intricacies of mixed-signal design.


By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss low-power design with with Leah Clark, associate technical director at Broadcom; Richard Trihy, director of design enablement at GlobalFoundries; Venki Venkatesh, engineering director at Atrenta; and Qi Wang, technical marketing group director at Cadence. What follows are excerpts of that conversation.

LPHP: What effect will stacked die have on EDA?
Wang: Different areas of EDA will converge. The packaging problem will become an SoC problem. Traditionally SoC engineers don’t worry about the packaging. With 2.5D and 3D, they have to understand the problems and learn the tools. People will learn.

LPHP: Packaging becomes a foundry issue as well, right?
Trihy: The chip goes in the package, but certainly the interposer technology and the TSVs do open up some other possibilities. It decreases time to market because you can have a chip that’s already taped out and already in production. One of the chips can go on top of the interposer, and then you only have to design something smaller to add unique value. It’s not clear what impact this will have on power. It will certainly be easier to have voltage domains, but we already do that on a single chip.

LPHP: Does all of this have to do with the progression of Moore’s Law, too? Is it critical that we move to the next node?
Clark: We don’t move to the next node because of Moore’s Law. We move to the next node because it gives us more performance or increased density of transistors on a die. As long as you’re not bleeding edge, the most used node is going to be the most economical. We don’t think of it in terms of Moore’s Law. That’s applying a curve to a process that’s happening, rather than the process following the curve.

LPHP: But power is a new wrinkle in this as we move down to 20nm and 14nm, right?
Clark: And other effects.

LPHP: But are we still moving in that direction?
Clark: In our experience, some of the node transitions were shocking. I don’t want to talk about specific nodes, but when we went from node A to B you could run the same tools and use the same corners and it worked. When we went from B to C you couldn’t just do fast and slow anymore. Now you had to use seven corners and include all these little terms we used to leave off our equations, which meant runs took five times as long and used five times as much compute power. It was a much bigger challenge than many of us thought.
Wang: A lot of times it’s a curve of diminishing returns. To get 10% of the benefit you have to expend 90% of the effort. Moore’s Law was never driven by technology. It was driven by economics and the human desire for a better life.
Clark: It makes a good story, though.
Wang: To get to 20nm, 14nm and 10nm it’s much more expensive. There are more design rules, more extraction challenges, and more corners. But who knows? Someone may invent new technology. 3D may not be the only answer. It may be 2D devices with 3D kinds of gains in performance and power. For now, people will stay at 28nm as long as possible.
Clark: Yes, until it gets too painful.
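To put rough numbers on the corner explosion Clark describes, here is a minimal Python sketch. The corner names and counts are invented for illustration, not any foundry’s actual signoff set; real flows prune the full cross-product down to a handful of corners (the seven Clark mentions, for example), but the multiplication of runs and compute is the same effect.

```python
from itertools import product

# At the older node, "fast" and "slow" were enough for signoff.
old_node_corners = ["fast", "slow"]

# At the newer node you sweep more axes. These names and counts are
# illustrative assumptions only.
process     = ["ss", "tt", "ff"]
voltage     = ["vmin", "vnom", "vmax"]
temperature = ["-40C", "25C", "125C"]
extraction  = ["cworst", "cbest", "typical"]

new_node_corners = list(product(process, voltage, temperature, extraction))

print(len(old_node_corners))   # 2 signoff runs
print(len(new_node_corners))   # 81 combinations before pruning down to the
                               # handful (e.g., seven) that actually get signed off
```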

LPHP: We’ve heard about some hiccups on the road to 20nm, though.
Wang: There seems to be no rush to go to 20nm. Some companies are waiting and will go to 16nm or 14nm directly.
Venkatesh: But we’ve been hearing this for a long time. At 65nm people said 45nm was a terrible node. People are always apprehensive about going to the next node. They face the challenges and move to the next node anyway.
Clark: We also have companies like Intel, which forge the path for us. They have the luxury of huge R&D budgets.
Venkatesh: And it’s always driven by competition. If somebody sees the benefit, they do it first. And after a while, everybody follows. I see people moving to lower nodes. Maybe at 10nm, with quantum effects, it will stall. But as long as someone moves, after a while everyone else will, too.
Clark: There are also benefits to sitting back and letting someone else forge the trail. You learn from their mistakes and the foundries catch up. So you’re not forging a trail, but you’re also not lagging.
Trihy: There’s always the risk you’ll wait too long.
Clark: That’s true, and you have to continue developing your libraries and your rule decks. But you’re not creating a product. You’re doing test chips.
Trihy: 20nm is a production node for us now. But what’s unique about 14nm is the finFETs. The bar for getting from 20nm to 14nm is just changing the transistor. There is a lot of potential gain in power and performance. We may see faster adoption than you think.
Venkatesh: Now you have huge fabs, which are able to deal with these problems. When everyone had their own fab it wasn’t so easy.
Trihy: We work with all the major vendors. The recognition on our part is that we have to lead our customers down the path. But we’re also trying to demonstrate that this isn’t as hard as you might think.
Wang: But there’s still a physics limit. There’s a cost associated with an advanced node. When you find the next iPhone-type application, the cost can be amortized. The applications of semiconductors play a role in the economics of moving to the next node. If you want to make money you have to sell 50 million chips. But with the Internet of Things, everything will be a chip. There may be exponential growth in semiconductor consumption.

LPHP: There’s a lot more mixed-signal content in SoCs these days, and that mixed-signal content isn’t always power-efficient. What impact does that have?
Venkatesh: You need proper modeling, proper verification and proper handling during place and route. You have to co-manage it along with your digital circuitry.
Clark: We get our analog delivered as a black box with a model representing it. The analog guys’ specialty isn’t Liberty modeling. We end up having to create their Liberty model for them so we can have a good power interaction with the analog blocks. That’s the wrong way to do it. We need the analog guys to tell us what their power does, not for us to tell them what their power does. That’s a problem on a chip with 500 little PHYs.

LPHP: Do the models you’re getting understand power?
Clark: Not the ones we’re getting now. There’s definitely a learning curve.
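A hedged sketch of the workaround Clark describes: the digital team stamping out a bare-bones Liberty stub for an analog macro it only sees as a black box. The helper, cell name, pins and leakage value below are all placeholders; a real .lib for a PHY would carry far more (operating conditions, timing arcs, internal power tables), which is exactly the information only the analog team can supply.

```python
# Hypothetical helper: emit a placeholder Liberty stub for an analog black box
# so SoC-level power tools at least see something for the block.
LIB_TEMPLATE = """\
library ({lib_name}) {{
  delay_model : table_lookup;
  voltage_unit : "1V";
  leakage_power_unit : "1mW";

  cell ({cell_name}) {{
    is_macro_cell : true;
    pg_pin (VDDA) {{ pg_type : primary_power;  voltage_name : VDDA; }}
    pg_pin (VSSA) {{ pg_type : primary_ground; voltage_name : VSSA; }}
    leakage_power () {{ value : {leakage_mw}; related_pg_pin : VDDA; }}
    pin (EN) {{ direction : input; capacitance : 0.005; related_power_pin : VDDA; }}
  }}
}}
"""

def write_stub(lib_name: str, cell_name: str, leakage_mw: float, path: str) -> None:
    """Write a minimal .lib stub; all values are guesses the analog team should own."""
    with open(path, "w") as f:
        f.write(LIB_TEMPLATE.format(lib_name=lib_name,
                                     cell_name=cell_name,
                                     leakage_mw=leakage_mw))

if __name__ == "__main__":
    # One of the "500 little PHYs": invented name and leakage number.
    write_stub("phy_stub_lib", "serdes_phy_stub", leakage_mw=0.5,
               path="serdes_phy_stub.lib")
```

The stub keeps the digital flow moving, but it only encodes what the digital team guesses; Clark’s point is that the real numbers have to come from the analog side.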

LPHP: Is that true for digital, as well?
Clark: It depends. Certain things are well known, such as a standard cell library. That’s clearly documented. But is it Liberty or macro models? What format is it in? What are you doing with it? Are you doing AVS (adaptive voltage scaling) or are you just using voltage islands?
Wang: The mixed-signal model is a big challenge. But there are two types of models. One is power intent. The other is the power consumption model. A lot of times you have a block on the SoC where you have to deal with dynamic power and IR drop analysis. You need to create an abstract model of the power and the power grid and use that for the SoC IR drop analysis. In mixed signal, the traditional way of dealing with this is that analog is a black box to digital and digital is a black box to analog. You can still do it this way, but with increasing complexity it’s getting harder to continue using this approach. On the functional verification side, you can verify your mixed-signal block standalone, but once you get to the SoC and add all the controls and software, most people can’t guarantee 100% coverage of their mixed-signal blocks. That’s why SoC verification becomes a problem. How do you create behavioral models for your analog/mixed-signal block? You need to solve it with digital software, not SPICE software. It’s becoming more and more of a challenge. Combine that with power management, and new methodologies and new tools will have to come along.
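A minimal sketch of the power-consumption abstraction Wang mentions for IR drop: reduce a block to a per-mode current demand and a single equivalent grid resistance, then check the worst-case drop at the SoC level. Real flows work with extracted grid netlists and dynamic waveforms; the one-resistor reduction and every number here are illustrative assumptions.

```python
# Illustrative block-level power abstraction for static IR-drop budgeting.
from dataclasses import dataclass

@dataclass
class BlockPowerModel:
    name: str
    current_by_mode_a: dict   # average supply current per operating mode, in amps
    r_internal_ohm: float     # block's own power grid reduced to one equivalent resistance

def worst_case_drop(block: BlockPowerModel, r_soc_grid_ohm: float) -> float:
    """Static voltage drop from the regulator to the block's devices in its hungriest mode."""
    i_max = max(block.current_by_mode_a.values())
    return i_max * (r_soc_grid_ohm + block.r_internal_ohm)

# Placeholder block: a SerDes PHY with three operating modes (made-up values).
phy = BlockPowerModel("serdes_phy",
                      {"sleep": 0.002, "idle": 0.05, "active": 0.35},
                      r_internal_ohm=0.02)

drop = worst_case_drop(phy, r_soc_grid_ohm=0.03)
print(f"worst-case static drop: {drop * 1000:.1f} mV "
      f"({drop / 0.9:.1%} of a 0.9 V supply)")
```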
Clark: If you think about analog models, analog blocks don’t turn on and off. So if you’re talking about voltage islands, you can’t just throw your analog in a voltage island with your digital. You have to control it completely separately. It may have a start-up time in microseconds or milliseconds. Digital you just turn on and off. How do you measure the power of that in the system?
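A small sketch of Clark’s measurement question: unlike a digital island that switches off and on essentially for free at this level of abstraction, an analog block spends real time and energy ramping up, so power-cycling it only pays off when idle periods are long enough. The state powers and wake-up time below are invented for illustration.

```python
# Illustrative energy accounting for power-cycling an analog block with a
# long start-up ramp. All numbers are assumptions for the sketch.
P_OFF_W  = 0.0005   # leakage while powered down
P_WAKE_W = 0.030    # average power during the start-up ramp
P_ON_W   = 0.020    # steady-state power once the block is up
T_WAKE_S = 200e-6   # start-up time (Clark's microseconds-to-milliseconds range)

def energy_per_cycle(t_idle_s: float, t_active_s: float, power_cycle: bool) -> float:
    """Energy for one idle-then-active period, with or without switching the block off."""
    if power_cycle:
        return P_OFF_W * t_idle_s + P_WAKE_W * T_WAKE_S + P_ON_W * t_active_s
    return P_ON_W * (t_idle_s + t_active_s)   # leave it on the whole time

for t_idle in (100e-6, 1e-3, 10e-3):
    cycled  = energy_per_cycle(t_idle, t_active_s=1e-3, power_cycle=True)
    kept_on = energy_per_cycle(t_idle, t_active_s=1e-3, power_cycle=False)
    print(f"idle {t_idle * 1e3:5.1f} ms: cycle = {cycled * 1e6:6.2f} uJ, "
          f"keep on = {kept_on * 1e6:6.2f} uJ")
```

With these made-up numbers, cycling the block costs more energy than leaving it on for the shortest idle period and only wins once the idle stretch dwarfs the wake-up ramp, which is the kind of system-level accounting Clark is asking for.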
Trihy: Another phenomenon we may start to see with interposers and 3D-ICs is that if you design analog functionality on a 65nm or 45nm chip, it can stay there. The digital blocks can go on 20nm or 14nm. We could have a scenario where once you’ve designed the analog and proven it, you can re-use it.
Clark: The models won’t have to change. If we could leverage the prior node, where the models are proven and the complicated stuff has been taped out through several generations, and just do standard cells on the newer node, that simplifies things.
Venkatesh: That is a very good application of 3D-IC. The memories and the analog are in different layers. That’s the first thing everyone is trying to do.


