Experts At The Table: Low-Power Verification

Last of three parts: What standards are needed; writing testbenches for power; making software more energy-efficient; breaking down silos between design and verification; restructuring design teams and creating new methodologies.


By Ed Sperling
Low-Power Engineering sat down to discuss the problems of identifying and verifying power issues with Barry Pangrle, solutions architect for low-power design at Mentor Graphics; Krishna Balachandran, director of low-power verification marketing at Synopsys; Kalar Rajendiran, senior director of marketing at eSilicon; Will Ruby, senior director of technical sales and support at Apache Design; and Lauro Rizzatti, general manager of EVE-USA. What follows are excerpts of that conversation.

LPE: What standards do we need that we don’t have?
Ruby: My vote would be for a power-consumption model. That’s absolutely critical. At the gate level, standard-cell libraries have built-in power models. Abstracting up to RTL, we have found a way of using the same standard-cell library. But once you go above RTL, there is nothing in terms of power-consumption models.
Pangrle: SAIF (Switching Activity Interchange Format) has been included with UPF. The standard is really set up for power intent, but when you look at the back of the spec, there’s this SAIF information as well. There has been a little bit of talk on that committee about what we should be doing with it. If you’re going to have any type of standard to specify the energy coming out of a given part, that’s going to be tied to some activity, so you have to have some way of specifying that, as well. There is potential for tying those together in the future, whether it’s there or somewhere else.
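To make the activity data Pangrle mentions concrete, here is a minimal backward-SAIF fragment of the kind a simulator writes out for power estimation. The design, instance, and net names are hypothetical; T0 and T1 record time spent low and high, and TC records the toggle count that drives dynamic-power calculations:

```
(SAIFILE
  (SAIFVERSION "2.0")
  (DIRECTION "backward")
  (DESIGN top)
  (TIMESCALE 1 ns)
  (DURATION 1000000)
  (INSTANCE top
    (NET
      (clk  (T0 500000) (T1 500000) (TC 2000))
      (dbus (T0 800000) (T1 200000) (TC 150))
    )
  )
)
```

It is this kind of per-net activity that would have to be standardized alongside power intent before a part’s energy could be specified in a portable way.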
Rajendiran: By definition, a standard means multiple parties need to talk. Models have come out of that with standards like UPF. You can specify the power intent, but that’s a very high level. Where it gets implemented, when you’re splitting microwatts and nanowatts, it’s necessary but not sufficient. You need a standard, but you need good tools and someone who can combine the high-level intent with all the nitty gritty low-level details of the variants and power and know how these things will be put together and verified. I don’t think we’ll ever have a situation where you push a button and determine whether function is correct and timing is correct and power is correct. If we do, that means we have stagnated. We will always have something pushing us.
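Rajendiran’s point that UPF captures intent at a very high level can be illustrated with a short sketch. The following is a minimal, hypothetical UPF fragment (all domain, port, net, and instance names are invented for illustration) that declares a switchable power domain with output isolation; note that nothing in it says anything about actual microwatts or nanowatts, which is exactly the gap he describes:

```tcl
# Hypothetical names throughout (PD_CPU, u_cpu, VDD, VDD_CPU, VSS, cpu_shutoff).
create_power_domain PD_CPU -elements {u_cpu}

create_supply_port VDD
create_supply_net  VDD     -domain PD_CPU
create_supply_net  VDD_CPU -domain PD_CPU
create_supply_net  VSS     -domain PD_CPU
connect_supply_net VDD -ports {VDD}

# Switchable supply: VDD_CPU is gated off when cpu_shutoff is asserted.
create_power_switch sw_cpu -domain PD_CPU \
  -input_supply_port  {in  VDD} \
  -output_supply_port {out VDD_CPU} \
  -control_port       {ctrl cpu_shutoff} \
  -on_state           {on_state in {!cpu_shutoff}}

set_domain_supply_net PD_CPU \
  -primary_power_net VDD_CPU -primary_ground_net VSS

# Clamp the domain's outputs to 0 while it is powered down.
set_isolation iso_cpu -domain PD_CPU \
  -isolation_power_net VDD -clamp_value 0 -applies_to outputs
```

Verifying that this intent is implemented correctly, and that it actually saves the intended power, still requires the tools and low-level analysis Rajendiran describes.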

LPE: But we’re still at the point where we don’t even know how to write testbenches for power, right?
Balachandran: Yes. When a customer starts with their first low-power design, they’re struggling to figure out what to do to exercise all the low-power functionality in the design. If the customer is more experienced, they already have a way to do that. They’re learning how to do a low-power design from a test perspective. Creating that first test plan and testbench for a low-power design is not a trivial task. It’s a huge methodology change. They’ve got to invest resources in terms of learning what it takes to do it and allocating resources to do it. That challenge is still not solved.
Ruby: If you look at it from a hardware perspective you’re absolutely right. But if we can somehow leverage the applications running on this hardware and transform them to testbenches, that would be another element of this solution.
Rizzatti: But is it the embedded software doing that? The application is the embedded software. You don’t have to invent anything.
Pangrle: It’s been a challenging transition for the hardware guys to deal with all the complexity brought about by power. If you look at the software, it’s almost always been purely about functionality. Either the software works or it doesn’t. Now you’re asking the software guys to look at the way they write their code because it can have a big impact on the performance of a device.
Rajendiran: A developer writes a piece of code and even provides a way to test it. The developer knows the guts of the software, so he comes up with a test plan. We used to call that white-box testing. Then we also used to pick up the most junior guy and tell him to do a black-box test. The reason we did that is the developer knows a certain path to exercise. If you give it to some random guy he’ll exercise it in ways the developer never thought about. It’s probably impossible for the developer to write a test plan for power, though. You probably need a combination of test plans. The better companies have people writing power verification plans and the developer writes the functionality part.
Balachandran: I’ve seen the same thing. The designer and the verification engineer are still working independently. The design engineer is not the verification expert and isn’t the architect of the low-power system. The verification engineer comes from a software background and has expertise in writing a test environment without much of the low-power hardware knowledge. The low-power architect has very little idea about how to write the software and how to write the testbenches to test his design. It’s almost as if you need a third person who knows enough about both of them to be an intermediary in order to close the gap. That’s the challenge companies are facing. You either have a verification expert, who is not a low-power expert, or you have a low-power design expert who doesn’t have enough knowledge of the verification concepts. Large companies are investing a lot in methodology. They have low-power methodology teams in place, and they’re tasked with coming up with the proper flow to get this working. Some companies are spending as much as one or two years to come up with the right methodology for their environment before they put a flow in place for their design teams.
Ruby: What if you can say, ‘Look, I’m designing this chip for a cell phone and I can verify it runs this operating system and 500 applications?’ If I forget about testbenches and power and I can just run the applications on my hardware, to me that would be golden. So can you run an application to analyze your power grid? That’s a big question.
Balachandran: That’s a great idea, but the apps don’t come out before the product is launched in many cases. How do you test for that before you get the product out in silicon and ship it? That’s where the challenge is.
Ruby: You test it the same way as our customers do. You test it with what you’ve got at the time.
Balachandran: That requires a very big effort on the part of the software developers for an application and the system providers, working well in advance. Only a couple of companies in the world have that kind of clout.
Ruby: Look at the consolidation in the semiconductor industry. That’s exactly where we’re going.
Balachandran: But even those companies that are the system builders are sourcing their ICs from different companies. This has to go down the chain where the IC companies have those requirements and specs for apps in mind from the beginning, because if you talk to the typical IC maker they don’t care about apps as much.
Ruby: They should.
Balachandran: Maybe they should, but it’s a big change in the entire ecosystem. It’s possible, but not easy to accomplish.

LPE: What happens with stacked die? Does that change the verification process because there are new issues with power in stacks?
Pangrle: That’s more on the physical side. On the functional side it’s just a bigger system. On the physical side, the interaction of heat dissipation comes into play. If you’ve got a single die you can stick a heat sink on top or do some active cooling. If you start stacking something in the middle it doesn’t have a direct path out to that heat spreader anymore.
Rajendiran: When you think about flip chip, it took 30 years before it came into common use. The reason is that we have vectors pulling us in different directions. On one hand, the mobile and consumer space is pushing us toward the cheapest and lowest-power solutions. That’s a completely different direction than some of the biggest companies are taking. The biggest benefit people are looking to gain with stacked die isn’t low power. It’s that the cost of moving to 28nm or 22nm is too high. Already-verified chips at 180nm are perfectly fine because that portion doesn’t need to run that fast. If you combine 180nm, 130nm, and 65nm, and maybe do the logic that ties them together in 90nm, time to market is the key.

LPE: But when you verify that, three good chips don’t necessarily make one good stacked die, right?
Rajendiran: No, and verification will continue to be a challenge. We need standards and models, and the verification will only be as good as the model. The overall full-chip model has to make some assumptions about timing and power, and that has to come from the IP supplier and manufacturers.


