Experts At The Table: The Trouble With Low-Power Verification

Last of three parts: Determining power intent; keeping up with complexity, but sweating along the way; tool versioning problems; software; merging function and power.


By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss low-power verification with Leah Clark, associate technical director at Broadcom; Erich Marschner, product marketing manager at Mentor Graphics; Cary Chin, director of marketing for low-power solutions at Synopsys; and Venki Venkatesh, senior director of engineering at Atrenta. What follows are excerpts of that conversation.

LPHP: How do you determine the power intent in an SoC?
Marschner: You can apply different power intent for different uses of that system.
Clark: But you could encode in RTL what things are related to each other.
Marschner: You can do that today in UPF 2.0 in the RTL. It’s the stuff that isn’t still changing that you want to encode in the RTL. Once it becomes stable then it’s reasonable for you to put in the RTL. If it’s not stable, then don’t put it there.
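As a rough sketch of the kind of stable power intent Marschner is describing, consider the UPF 2.0 fragment below. All names (PD_CPU, u_cpu, u_pmu/cpu_pwr_en, and so on) are hypothetical, and a real file would carry considerably more detail:

    # Minimal UPF sketch: one switchable power domain with isolation.
    create_power_domain PD_CPU -elements {u_cpu}

    # Top-level supplies and the domain's switched supply.
    create_supply_port VDD
    create_supply_port VSS
    create_supply_net  VDD     -domain PD_CPU
    create_supply_net  VSS     -domain PD_CPU
    create_supply_net  VDD_CPU -domain PD_CPU
    connect_supply_net VDD -ports {VDD}
    connect_supply_net VSS -ports {VSS}
    set_domain_supply_net PD_CPU \
        -primary_power_net VDD_CPU -primary_ground_net VSS

    # Power switch: gates VDD into the domain under PMU control.
    create_power_switch sw_cpu -domain PD_CPU \
        -input_supply_port  {vin  VDD} \
        -output_supply_port {vout VDD_CPU} \
        -control_port       {ctrl u_pmu/cpu_pwr_en} \
        -on_state           {ON vin {ctrl}}

    # Clamp the domain's outputs low while it is powered down.
    set_isolation iso_cpu -domain PD_CPU \
        -isolation_power_net VDD -clamp_value 0 -applies_to outputs
    set_isolation_control iso_cpu -domain PD_CPU \
        -isolation_signal u_pmu/cpu_iso_en -isolation_sense high \
        -location self

Once relationships like these stop changing from one use of the design to the next, they are the kind of intent Marschner suggests committing alongside the RTL.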

LPHP: Is it getting easier to verify low power or harder?
Chin: Every design is getting more complicated. We’re raising the bar just as fast as, if not faster than, what we can deal with. It’s a relative call whether we’re catching up or not.
Marschner: If you go back in time, we couldn’t do any of this at all. It seems to be getting easier faster than it’s getting harder.
Venkatesh: It’s gotten a little easier because we now have good ways to specify power intent. But it’s also getting more complicated, because at smaller process nodes new kinds of tests are coming in, and with very large designs there are a huge number of power domains. The methodology is still evolving, and people are cobbling together their own methodologies, some optimally and some not so optimally. They’re really struggling with it. The other thing about power verification is that it’s a ‘must have.’ You cannot verify partially. You can optimize partially, but the verification has to be complete before the chip gets out. It’s a big area of pain for everyone.
Marschner: [Mentor verification fellow] Harry Foster presented some statistics on power verification and noted that 10% of the respondents to a survey said they didn’t verify power at all. So maybe you can sometimes get by without verifying power, but it’s really risky.
Venkatesh: It’s really risky because it can cause chip failure. Sometimes you pay a heavy price.
Chin: But if you look at it at a high level and compare the iPhone 5 with the first iPhone, they basically have the same level of battery technology inside. That has changed very little. But if you look at the amount of stuff that’s inside these devices, we are keeping up with Moore’s Law. We’ve done amazing things over the last five years. At the consumer level you can see it and touch it. It’s amazing what we can do today versus five years ago.
Venkatesh: We are able to do a lot of things, but people are really sweating it out.
Chin: And they have been for five years. Every generation has the same problem, and it will continue that way until we are really done with power. But we won’t be done for a long time. Even today you can think of many more things that we could do.
Clark: We’re designing a 700-million-transistor chip. It’s huge. We have stuff coming from every time zone and every functional group. So it’s great that we can do all of these things, but how do we actually get people to do them?
Venkatesh: As a company you have to institute methodologies and processes. You don’t use an IP unless it’s properly packaged with all the views.
Clark: When we started on our current technology, we specified what all the IP needs to look like and what all the views are. That was eight months ago. We’re stuck at that place until we move to the next technology node and repackage all of our IP.
Clark: This is all great stuff, but this is the future. Right now we have challenges for which we have answers, but we can’t implement the answers yet.
Marschner: This is a problem for EDA vendors, too. We have added all these new capabilities into our new release, but we have customers still using versions of our tools that are two or three years old. They can’t move because of training issues, or because they’re locked into a project. This is a natural part of EDA. We move so fast that we’re dragged back by our own shadow.
Venkatesh: Not all the tools are at the same level. That’s a big challenge for customers.
Clark: Even within the same EDA company there is the same problem you have with your customers. You’re spread across two years of technology.
Marschner: The alternative is to wait.
Clark: No, we don’t want anyone to wait, but we also don’t want the challenges we’re facing today to get lost in the charge toward what’s needed tomorrow. We need solutions that will help us bring things together today. Global power from an architectural perspective is where we want to go, but we’re not there now.
Marschner: What we do today comes out in a release later on. The problem is that until enough things are done today, you can’t get to the next things. There are things out today that you can use. The question is whether there are enough of them to provide critical mass.
Clark: But waiting for an EDA company to reach critical mass is also where you get all your homegrown methodologies, which creates divergence.
Marschner: Education is one thing we can do today to help people build a flow that works.

LPHP: How does software enter into this picture?
Venkatesh: Software is a power hog. Personally, I’m not aware of any tools for this, but companies are looking into it. When they create applications, they try to minimize the use of power-hungry instructions.
Chin: You can look at software instructions and what comes out of compilers and try to determine and predict what will happen. But even on the software front we’d like to have a higher-level architectural view of what’s going on. That’s one of the reasons we’re heading toward a merging of function and power. In some cases it’s power even more than function, because power is a global resource. It’s not like timing, where a failure impacts one little piece. There is a global energy supply you’re using, so even if something goes off in a corner of a chip it may impact the entire chip. So the software guys have to get to that, as well. They have to create a model that’s detailed enough to tell them what’s going on. I can’t even tell what’s happening on my laptop these days, because the operating system is so complex and there are so many things happening at the same time. A whole mobile device is even more complicated, so complicated that it’s hard to even tell which instructions are executing. They’re being executed in parallel and out of order.
Venkatesh: The very first step is power profiling, which is done while the application is running, first at the macro level and then at the micro level. From there it moves through analysis, followed by test. That’s what’s done now.
Marschner: At the SoC level there’s more concern about resources, and the operating system turns on and off different things. That requires a much higher level of integration before you can even look at what’s happening and determine power consumption and what’s getting locked up. That tends to lead fairly quickly to emulation and hardware-software verification. You need to look at the interactions between components.
Clark: We do power analysis of the layer 0, 1, 2 type of software. We don’t do software. Our customers do software.

LPHP: If all we’re gaining is two hours of battery life, is all of this time and money spent moving forward worth it?
Marschner: Absolutely, because all engineering is incremental. You can’t make huge leaps all by yourself.
Clark: But you can change your priorities. If you need to tape out on a given day and you have 20 test cases, 18 of them functional and 2 of them power, you’re going to tape out before you do the power ones unless they’re critical. If your battery life is 4 hours instead of 4.5 hours, that doesn’t mean the chip won’t work. It may affect your ability to sell the chip, or how you label the package.
Marschner: It could affect your market share, too.
Clark: Yes, but our strategy is that we don’t tape out a chip only once. We tape it out and sample it. So it does affect your overall product strategy.
Marschner: That whole idea of taping out more than once, in itself, is an incremental strategy. We do think in increments.
Clark: Within a given product, we will tape out something fast and dirty to get it into our customers’ hands. Then they can develop software, which we can analyze for switching activity. After looking at which features we may or may not keep, we can go back and finish the things we took shortcuts on, such as making sure the chip operates over the full operating range or that all the level shifters are in. We do prioritize these things.
Chin: That doesn’t imply power is less important. You can say the same thing about timing. The key is that you need to be able to continue that product development path.
Clark: There is a phasing in of these different checks.
Venkatesh: These calls about how much power you want to save are made much earlier in the process. At the architectural level you may decide that you need massive power savings, so you add in more power domains and DVFS (dynamic voltage and frequency scaling). Then you go forward. That call is made very early.
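To make that concrete: architectural decisions of this sort often end up captured as a table of legal supply states. A minimal UPF sketch, with purely illustrative voltages and net names (VDD as an always-on rail, VDD_CPU as a switched, voltage-scaled rail), might look like this:

    # Hypothetical power state table for DVFS plus shutoff.
    create_pst soc_pst -supplies {VDD VDD_CPU}
    add_pst_state RUN_FAST -pst soc_pst -state {1.05 1.05}
    add_pst_state RUN_SLOW -pst soc_pst -state {1.05 0.90}
    add_pst_state SLEEP    -pst soc_pst -state {1.05 off}

Each added rail and row multiplies the combinations that must be verified for correct isolation and level shifting, which is one reason the call has to be made so early.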
Marschner: The flip side is we’re all averse to risk. You try to make substantial changes up front, but you don’t want to go too far.
Venkatesh: I agree. But the point is that power management has to be done way ahead.
Clark: Yes and no. You can put all the hooks in there, but you can also have a signal that comes in and disables them if you have to. That helps you make these incremental steps. When it works, great. When it doesn’t, you can do it better the next time.
Venkatesh: Then your verification methodology gets more complicated.
Clark: Yes, and the problem is that it’s really easy to say, ‘It’s too difficult, so turn it off.’


