Experts At The Table: Low-Power Verification

First of three parts: Functional vs. physical verification; power boosts design complexity; explosion in power domains; creating a good test plan; how to know when it’s done; the impact of IP.

Low-Power Engineering sat down to discuss the problems of identifying and verifying power issues with Barry Pangrle, solutions architect for low-power design at Mentor Graphics; Krishna Balachandran, director of low-power verification marketing at Synopsys; Kalar Rajendiran, senior director of marketing at eSilicon; Will Ruby, senior director of technical sales and support at Apache Design; and Lauro Rizzatti, general manager of EVE-USA. What follows are excerpts of that conversation.

LPE: What’s the big challenge with verifying power in an SoC?
Ruby: Power has a couple of different components. One is how the low-power techniques impact functionality. If you talk about things like power gating, power supply shutoff, multiple supply voltages and so on, this is where you need to understand certain rules for turning power supplies on and off. You need to be able to insert retention cells to retain state and functionality. That’s one major aspect. The other side is that you have to look at the power consumption itself. How do you verify that you are on target, if you have a target, and that you are not exceeding a specification? And how do you ensure the design has efficiency built in?
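What "retain state" has to mean in practice can be pictured with a toy model. The Python sketch below (all names invented, not any vendor's tool) captures the property a retention check must prove: the value saved before shutoff is the value present after restore.

```python
# Toy model of a retention register: live state is lost across a power
# cycle, so correctness rests entirely on save/restore sequencing.
class RetentionReg:
    def __init__(self, value: int = 0) -> None:
        self.value = value
        self._shadow: int | None = None

    def save(self) -> None:            # must happen before power-down
        self._shadow = self.value

    def power_cycle(self) -> None:     # supply removed: live value is junk
        self.value = 0xDEADBEEF

    def restore(self) -> None:         # must happen after power-up
        assert self._shadow is not None, "restore without a prior save"
        self.value = self._shadow

reg = RetentionReg(0x2A)
reg.save(); reg.power_cycle(); reg.restore()
assert reg.value == 0x2A               # state survived the power cycle
```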
Rajendiran: This is all about trying to verify that the intent you stated at the beginning has been implemented—and when the chip comes out, making sure it is functioning that way. In the old days, verification simply meant functional and timing verification. Now, just on the functional side, it has become so complex that just getting it out the door is a challenge. It’s the same with software. No one even thinks about verifying all of it. That’s the practical problem. The person who is verifying the power states doesn’t have the time to put in the right hooks. We have the Unified Power Format to help, but we still don’t have standardization as to how you verify the states. Tools rely a lot on naming conventions, but even though there are fewer companies, there is still no compatibility in reading all of those things. Tools are always playing catch-up, too. The ideal solution will be a combination of great tools and planning. In addition, you can have the best tools, but if you put them in the wrong hands you don’t get results.
Pangrle: There’s a functional part and a physical part of verification. A lot of what is going on in the industry right now, especially with the power formats and the convergence around UPF and the IEEE 1801 working group, has been to keep the power intent separate from what has been the standard part of functional verification. It allows people to use their standard flow, take it to RTL, and still design RTL blocks that can be used in different design scenarios with different power management. You don’t have to hard-code isolation, level shifting, or retention registers into those blocks. You can still design your block the same way, and if in one design you’re going to power down that block, that’s okay, because the intent information is in a separate format and you can bring that in. From that standpoint, there has been good collaboration between EDA companies and their customers. From the standpoint of putting it all together and being able to support the tools, one of the things we’re seeing is that as EDA companies work with designers, there are times when something is a little different and each vendor has created its own support. That’s where it gets tougher to move designs from one company’s set of tools to another. It also brings up some new questions. On the physical side, if you’re powering blocks up and down it has a real impact on your power grid and whether it’s going to function. Just because it looks as if it should work logically, that doesn’t mean you won’t run into other issues when you get your chip back from the foundry. And in terms of the complexity of testing, you can do standard ATPG, but when you go through the dynamics of running different voltages and frequencies and bringing things up and taking them down, to what extent are you actually going to test that?
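The separation Pangrle describes can be pictured with a small sketch. The Python below is not UPF, just a hypothetical model of a "sidecar" power-intent record: the RTL block itself stays untouched, and each design supplies its own record saying how that block is powered.

```python
# Not UPF: a hypothetical model of power intent kept apart from the RTL.
from dataclasses import dataclass

@dataclass
class PowerDomain:
    name: str
    rtl_blocks: list[str]               # unchanged RTL instances in the domain
    switchable: bool = False            # may the supply be shut off?
    isolation_clamp: str | None = None  # value clamped on outputs when off
    retention: bool = False             # insert state-retention registers?

# The same RTL block, reused under two different power-management scenarios:
always_on = PowerDomain("PD_CPU", ["u_cpu"])
shut_off  = PowerDomain("PD_CPU", ["u_cpu"], switchable=True,
                        isolation_clamp="0", retention=True)
```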
Balachandran: Verification is complex enough without low power, stretching resources from both a verification productivity standpoint and an IP cost standpoint. When you add low power into the mix, it makes things much worse. The complexity of low-power designs has been going up slowly but steadily. Some companies on the cutting edge, particularly in the mobile market, started adopting low-power designs about five or six years ago. They were the frontrunners of the whole low-power wave. They put the initial pressure on low-power verification, because now you have to start thinking about verification differently. You have to start thinking about voltages, multiple supplies, and whether things are going to work in all those conditions. Clock gating is the most basic technique, and almost every company you talk with has been doing clock gating. Now that has expanded into more sophisticated techniques to curb the power, and with that comes the burden to verify properly. All it takes is one unverified state or transition or sequence for the design to completely lock up and not function at all.

LPE: How bad is this problem?
Balachandran: It’s becoming more widespread. There are government regulations and green initiatives. Everything is going green. There are demands on specifications, and even on power for devices connected to the wall. That requires chipmakers to make their designs much more power-efficient. Customers typically start with four or five power domains. Some of that verification can be done with static techniques or with some rudimentary simulation. But it’s becoming more complex, and this complexity is increasing for the mainstream market, not just the mobile market. The number of power domains is exploding. We’ve seen designs with 50 power domains, which is potentially 2^50 power states. It’s pretty much impossible to verify all of them. So you need to come up with a really good test plan. When people are confronted with low-power designs for the first time, they have no clue how to write a testbench for low power. Often they need a lot of methodology help, in addition to having the right tools in place, to figure out what they’re going to do, how they’re going to go about doing it, and how they know when they’re done. Then, what is the measure of confidence they have at the end to figure out if they’re really done?
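To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python (illustrative numbers only) of why exhaustive verification collapses so fast: every independently switchable domain at least doubles the static state space.

```python
# Illustrative only: the static power-state space of N independently
# controlled domains grows as states_per_domain ** N, before transition
# orderings and voltage levels are even considered.

def power_state_count(num_domains: int, states_per_domain: int = 2) -> int:
    """Combinations of static power states for independent domains."""
    return states_per_domain ** num_domains

for n in (4, 16, 50):
    print(f"{n} domains -> {power_state_count(n):,} static states")

# 50 domains -> 1,125,899,906,842,624 states. Exhaustive simulation is out;
# a test plan has to target the legal, reachable states and the critical
# transitions between them.
```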
Rizzatti: From the perspective of emulation, this technology has been used for functional verification. Ten years ago, power management was essentially a gated clock. You turned part of the chip off and on and saved energy there. Around 2001-2002, designs had 10 or 20 of these, which were called derived clocks. Today we have customers with 100,000 derived clocks. There’s an explosion. But that’s only one problem. Over the past five years, and especially in the past one or two, there are all these new techniques for turning voltages on and off. We had one customer with well over 100 power domains. The whole industry is changing. Power management is a nightmare, and it makes SoC verification orders of magnitude more difficult.

LPE: With a disaggregated supply chain and more IP re-use, does that make it more difficult to verify the design? Not all of the IP is fully characterized for power.
Balachandran: UPF, or IEEE 1801, and CPF have ways to model the power intent of IP. The issue isn’t so much the ability to specify the power intent of IP. Talking to all the major customers, everybody is either integrating internal IP or using third-party IP. Some of the IP blocks have their own power management, too. The legal ways to integrate the IP into the SoC have to be communicated to the integrator of that SoC. That information has to be passed along. The power format is not the right way to pass that information. So the industry has to work out a way—together—to solve this problem. The IP companies, the EDA companies and the whole ecosystem have to work on this to communicate how the IP can legitimately be integrated from a power perspective, and to tell the IP integrator when they are doing something wrong. If IP is coming from a third party and you have no idea what is going on inside that IP in terms of its functionality, how the power is implemented, or the ways you can put it together in a block, then you can shoot yourself in the foot pretty quickly. This is a problem that needs to be solved. One potential solution is to create assertions for an IP block. The IP developer doesn’t know how the IP is going to be used, but they do know what is legal and what is not. They can create assertions for that and ship them with the IP. Then, when the integrator puts it into the SoC and runs the verification, they are able to figure out whether they’ve done it properly. If not, they can have a dialog with the IP company. It’s a way of communicating the data sheet of the IP to the next-level integrator. This is one way of solving the problem. It requires close collaboration between IP partners, EDA companies and design services companies.
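The flavor of assertion Balachandran has in mind can be sketched as a simple sequencing checker. In a real flow this would typically be SystemVerilog assertions shipped with the IP; the Python below (all names hypothetical) just encodes one common legality rule: isolate outputs before removing power, and restore power before releasing isolation.

```python
# Hypothetical sketch of an IP vendor's power-sequencing legality check.
class PowerSequenceChecker:
    def __init__(self) -> None:
        self.powered = True
        self.isolated = False

    def set_isolation(self, on: bool) -> None:
        if not on and not self.powered:
            raise AssertionError("isolation released while IP is unpowered")
        self.isolated = on

    def set_power(self, on: bool) -> None:
        if not on and not self.isolated:
            raise AssertionError("power removed before outputs were isolated")
        self.powered = on

chk = PowerSequenceChecker()
chk.set_isolation(True)    # legal: isolate first...
chk.set_power(False)       # ...then shut off the supply
chk.set_power(True)        # power back up...
chk.set_isolation(False)   # ...then release isolation
```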
Rajendiran: More often than not, people don’t do that. There are many ways that tools can help, too. If an expert designed the IP block, he can provide some input and then a tool can insert assertions back into the RTL. Ideally you want to keep them separate in a companion file. That’s one approach. But the problem is more complex than that when it comes to low-power verification. IP is one issue. There is physical IP, where you can’t do much because it’s already hard-coded. There’s also soft IP. Each of these classes has its own challenges. With soft IP, a lot of activity only happens at the gate level. Depending on how the RTL gets synthesized and mapped, you can have a perfectly functioning solution when you use a particular library in a particular foundry, and the same thing may not work somewhere else. You need deep knowledge about this stuff. You need collaboration among the tools, the integrator and the IP developer to make sure you at least get the product out to market on time.
Ruby: There is another dimension of IP—the power intent side, which is the functional verification aspect. That’s absolutely essential to ensure the functionality. Time and time again, what I’ve come across is the need for some way to describe the power consumption behavior of IP as well. It could be technology-dependent or technology-independent. It could be models that describe consumption as a function of clock frequency or data rates. From my customers’ perspective, this is also becoming essential in the power verification area because they’re not just worried about functional intent. They’re also worried about hitting their power specs. They need models for the IP coming in. If they plug IP into their design and run their clock at a certain frequency, what power consumption can they expect? That’s another very important element of this verification challenge.
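The kind of model Ruby is asking for can be as simple as a parameterized equation. A minimal sketch, with invented characterization numbers: dynamic power scales as activity * C_eff * Vdd^2 * f, on top of a static leakage floor.

```python
# Sketch of a frequency/activity-dependent IP power model. The parameter
# values are invented; a real model would come from IP characterization.
def ip_power_mw(freq_mhz: float, activity: float,
                c_eff_pf: float = 120.0, vdd_v: float = 0.9,
                leakage_mw: float = 1.5) -> float:
    """Dynamic power = activity * C_eff * Vdd^2 * f, plus static leakage."""
    dynamic_w = activity * (c_eff_pf * 1e-12) * vdd_v**2 * (freq_mhz * 1e6)
    return dynamic_w * 1e3 + leakage_mw

print(f"{ip_power_mw(500, 0.20):.2f} mW at 500 MHz, 20% switching activity")
# -> 11.22 mW with these assumed parameters
```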


