What’s Working For Power Verification

The options are many, but the challenges continue to mount when it comes to power verification.


Getting power verification right, or at least good enough, is a source of frustration for many design teams. The fact that there is no one right way to accomplish it only compounds the challenge.

Fortunately, there are a number of options that are working to varying degrees, starting with static verification, according to Bernard Murphy, CTO of Atrenta. “Static verification — by which I mean truly static, not formal — is completely reliable for what it covers. There are no simulation dependencies or constraint dependencies, so you can fully verify that power intent, as described in a UPF, completely matches implementation, all the way down to post-layout.”
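
To make that concrete, here is a minimal, purely illustrative sketch (in Python, not any vendor's tool) of what a static power-intent check does: compare the domains and isolation strategies a UPF file declares against what actually exists in the implementation, with no testbench or constraints involved. All domain names and data structures here are hypothetical.

```python
# Conceptual sketch of a static power-intent check (hypothetical, simplified).
# It compares UPF-style declarations against a netlist summary without any
# simulation: every switchable-to-on crossing must have an isolation cell.

# Hypothetical power intent, as a UPF file would declare it
power_intent = {
    "domains": {"PD_TOP": "always_on", "PD_CPU": "switchable"},
    "isolation": {("PD_CPU", "PD_TOP")},   # isolate signals leaving PD_CPU
}

# Hypothetical netlist summary extracted after synthesis or layout
netlist = {
    "crossings": {("PD_CPU", "PD_TOP"), ("PD_TOP", "PD_CPU")},
    "isolation_cells": {("PD_CPU", "PD_TOP")},
}

def check_isolation(intent, impl):
    """Flag switchable-to-on crossings that lack an isolation cell."""
    errors = []
    for src, dst in impl["crossings"]:
        needs_iso = intent["domains"].get(src) == "switchable"
        if needs_iso and (src, dst) not in impl["isolation_cells"]:
            errors.append(f"missing isolation on {src} -> {dst}")
    return errors

print(check_isolation(power_intent, netlist) or "power intent matches implementation")
```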

What static doesn’t cover is correct mission-mode operation of the system when some IPs are powered off or running in a low-voltage/low-speed mode, or transition behaviors such as switching between these modes and correct recovery from retention registers. “For these scenarios, options rely on simulation-based or formal techniques,” Murphy said. “At a very local level, formal is better because it is more complete, but it cannot handle cross-system behaviors because formal chokes on large designs. You can work around this by black-boxing most of the design, but then you have to make (unproven) assumptions about what you can safely ignore.”
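
As a hedged illustration of the kind of transition scenario Murphy mentions, the sketch below models a retention register in plain Python and checks the one property simulation or formal would target: the value saved before a domain powers down must come back after it powers up. The class and values are invented for this example.

```python
# Hypothetical simulation-style check of retention behavior: the value held
# before a domain powers down must reappear after the domain powers back up.

class RetentionReg:
    def __init__(self):
        self.q = 0
        self.shadow = None   # retention latch, stays powered

    def save(self):
        self.shadow = self.q

    def power_off(self):
        self.q = None        # main register state is lost

    def restore(self):
        self.q = self.shadow

def check_retention(reg, value):
    reg.q = value
    reg.save()
    reg.power_off()
    reg.restore()
    assert reg.q == value, "retention recovery failed"

check_retention(RetentionReg(), 0xA5)
print("retention scenario passed")
```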

Further, estimation at the SoC level is still very hard. Estimation methods need simulation dumps on all nodes at all times, which is very expensive, even on emulators, and drastically limits the time window that can be modeled. Murphy noted that there is work being done on IEEE P2416 to develop a standard for power modeling at the IP level, which may help by reducing the need for modeling all of the circuit at a very detailed level, but this effort is just starting. So estimation can help to check small windows, but no further.
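
For illustration only, and not the P2416 format itself, the sketch below shows the idea behind IP-level power modeling: rather than dumping activity on every node, each IP contributes a per-mode power number, so a system-level estimate reduces to summing mode residencies over a window. All figures are made up.

```python
# Illustrative state-based IP power model (not the P2416 format itself):
# instead of dumping every node's activity, each IP reports power per mode,
# so a system-level estimate reduces to summing mode residencies.

# Hypothetical per-mode power in milliwatts for one IP block
ip_power_mw = {"off": 0.0, "retention": 0.2, "idle": 3.0, "active": 45.0}

# Hypothetical mode residency from a short emulation window, in milliseconds
residency_ms = {"off": 120.0, "retention": 40.0, "idle": 25.0, "active": 15.0}

energy_uj = sum(ip_power_mw[m] * residency_ms[m] for m in residency_ms)  # mW * ms = uJ
window_ms = sum(residency_ms.values())
print(f"average power over {window_ms:.0f} ms window: {energy_uj / window_ms:.2f} mW")
```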

With a correct by construction approach, also referred to as design for verifiable power management, he said that verification still must take place. But if the power management architecture is constrained, there is a better chance of getting to a more complete verification. “The obvious way to do this is to have a central controller for all power management – clock gating, power and voltage control and so on. If power control decisions are made entirely within this network (or from software through this network), it can be feasible to formally verify that the network never switches into unexpected conditions. This works if, and only if, the network to be analyzed is of a size that is manageable in formal. If dependencies also creep down inside the detailed structure of IPs, it again becomes unmanageable.”
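
A rough sketch of why a small, centralized controller is tractable for formal analysis: with only a handful of power states, every reachable state can be enumerated exhaustively and compared against the legal power state table. The states and transitions below are hypothetical.

```python
# Sketch of why a small, central power controller is tractable for formal
# methods: with a handful of states, every reachable state can be enumerated
# and checked against the legal power state table. States are hypothetical.

legal_states = {"ALL_ON", "CPU_OFF", "SLEEP"}
transitions = {
    "ALL_ON": {"CPU_OFF"},
    "CPU_OFF": {"ALL_ON", "SLEEP"},
    "SLEEP": {"ALL_ON"},
}

def reachable(start):
    """Breadth-first reachability over the controller's transition relation."""
    seen, frontier = {start}, [start]
    while frontier:
        nxt = [s for cur in frontier for s in transitions.get(cur, ())
               if s not in seen]
        seen.update(nxt)
        frontier = nxt
    return seen

unexpected = reachable("ALL_ON") - legal_states
print("unexpected reachable states:", unexpected or "none")
```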

It is possible to try to work around this with partition control. “You can have the global controller manage at a global level, but also have local controllers within IP/subsystems. Then you separately prove correctness for each individual IP/subsystem, and for the global level you also use whatever constraints emerge from the local proofs,” he explained.

However, design teams evidently don’t have full confidence that such proofs are reliable, because they are still building in software-controllable bail-out or CYA options to allow users to disable gating or control in the field if real use cases turn up unexpected corner cases, Murphy observed.

Krishna Balachandran, product marketing director for low power at Cadence, agreed. “Functional verification usually relies on simulation or emulation, and that requires a testbench. That requires the exercising of different cases that the design might be in, and it was already a problem without power. How do you make sure that you have covered all the possible states that a design would be in, and you verified everything you could?”

Simulation is ubiquitous but it is not exhaustive, and neither is emulation, he continued. “Emulation is just much faster than simulation, but both of those are dependent on the humans being able to guide the tests in a proper part of the design space. Power just made it a whole lot worse, and this is not a new problem. Ten years ago versus now, the power definition of a chip has become more complex, i.e., the power complexity is going up, which means that there are potentially many states of the design — the logic states and the power states — that could interact with each other in not so obvious ways that would then cause the system to malfunction. If you don’t check for those, and you don’t think about checking for those, you may not have covered them. You’re relying on whatever stimulus generation techniques you’re using to hit that part of the design space and have it covered.”
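
A small sketch of the coverage explosion Balachandran describes: crossing functional states with power states multiplies the bins a testbench has to hit, and it is easy to see how many crossings a given regression leaves untouched. The states and the "hit" set below are invented.

```python
# Sketch of the coverage problem described above: crossing functional states
# with power states multiplies the bins a testbench must hit. The states and
# the set of bins actually exercised are hypothetical.

from itertools import product

functional_states = ["idle", "dma_burst", "cache_flush", "interrupt"]
power_states = ["ALL_ON", "CPU_OFF", "MEM_RETENTION", "SLEEP"]

all_bins = set(product(functional_states, power_states))

# Bins a hypothetical regression actually exercised
hit_bins = {("idle", "ALL_ON"), ("dma_burst", "ALL_ON"),
            ("idle", "SLEEP"), ("interrupt", "CPU_OFF")}

missed = sorted(all_bins - hit_bins)
print(f"covered {len(hit_bins)}/{len(all_bins)} power x logic crossings")
for bin_ in missed[:3]:
    print("  uncovered:", bin_)
```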

Therein lies the problem. How does the verification team ensure everything is covered? Most would agree this is a very tough and open-ended problem.

To add to this, Balachandran said, some parts of it are being handled with formal verification, which is exhaustive compared to simulation or emulation techniques. “Formal has always been popular in the industry, but formal has its limitations. Formal does not run on a full SoC level. And I don’t mean formal equivalency, which can run on a full chip design and has always been working on a full chip design. That equivalency just compares your gate level rendition of your design versus the RTL. But how do you know your RTL itself is correct? If you run formal at that level, then again you have to write some assertions, you have to write some constraints, and those don’t scale to a chip level. Those are pretty much a block-level technology. Some of those formal techniques are being used for low power, as well.”

Not either/or — both
While it might be easy for some to look at power verification as either correct by construction or estimation, David Hsu, director of product marketing for low power, static and formal verification at Synopsys, stressed that the answer is actually both, and that these activities go beyond just being connected. “When you’re building the design, clearly it’s to some power budget that is probably already laid down in stone. [As one customer put it,] ‘If you don’t meet the power budget, you don’t have a product.’”

Once at that point, Hsu said the design team has a range of choices in the implementation methodology, which is where the correct by construction concept comes into play. “You have to look at that as what the tool can do to help designers implement the structures and techniques necessary to meet that power budget.” The power format plays into this, but it’s up to the implementation tools to put the proper structures in place. This moves the design into the power verification side of the world, where the intent must be understood, as well. “The verification side of the world, along with the implementation side of the world, has to be much more cognizant and understanding of what the designers are really wanting, and both create those things. And then somehow, if we’re talking about RTL, you need to understand not just the structures that might be synthesized but also what the behavior is going to look like. You don’t want to be doing any of this verification at gate level. We’re talking about enormous designs, and so a lot of this stuff really has to be done at RTL. Then, as you shift the focus onto the estimation side, which is completely complementary — that’s how you’re validating.”

“Verification means that you have done what you need to do according to the spec,” he continued. “And validation means that you’ve actually met your product requirements. Estimation falls under that category. It’s not in any way a replacement for verification, it’s a complement.”

From the estimation perspective, much of this work today is done post-silicon. “But now design teams are looking at the design from a system-level perspective, saying, ‘When I boot up the phone, what is that workload going to do to my power profile under all the modes of operation of the phone or the tablet?’ What they have seen is that if you actually have the silicon and you put it onto some sort of system-level board (not the actual phone, but it simulates the actual device), they’re able to run enough of the workload. This is all great, but it’s really late in the process. They want to do it pre-silicon,” Hsu said.

While the technology for power verification can be leveraged in impressive ways, it is impossible to disconnect it from the people who are putting that technology to use. “The problems come from the people,” noted Steve Carlson, group director of marketing at Cadence. “The automation that we’ve put into place now is pretty good, but the specification part is still left to people. You have to create your CPF/UPF power format files. People may be working at a subsystem level and have to integrate at the SoC level. And you can end up with 10,000 lines of pretty complicated stuff that you go and verify. You can verify it, but there are a lot of mistakes that get made, and it’s the human element that causes them. That’s the biggest failing I’ve seen in the overall flow today.”
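
The flavor of mistake Carlson is describing is often mechanical, which means simple consistency passes can catch some of it. The sketch below flags strategies that reference undefined domains and domains with no strategy at all; the commands are simplified stand-ins for power-format syntax, not literal UPF/CPF.

```python
# Sketch of a mechanical consistency pass over hand-written power intent, the
# kind of check that catches simple human errors. The commands below are
# simplified stand-ins, not literal UPF/CPF syntax.

upf_lines = """
create_power_domain PD_TOP
create_power_domain PD_GPU
create_power_domain PD_DSP
set_isolation iso_gpu -domain PD_GPU
set_retention ret_dsp -domain PD_DSP
set_isolation iso_modem -domain PD_MODEM
""".strip().splitlines()

defined, referenced = set(), set()
for line in upf_lines:
    tokens = line.split()
    if tokens[0] == "create_power_domain":
        defined.add(tokens[1])
    elif "-domain" in tokens:
        referenced.add(tokens[tokens.index("-domain") + 1])

print("strategies on undefined domains:", referenced - defined or "none")
# Top-level domain excluded; it typically needs no isolation or retention
print("domains with no strategy at all:", defined - referenced - {"PD_TOP"} or "none")
```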

What’s not working
While much is working today for power verification, one connected practice that is not working is the use of spreadsheets, noted Paul Traynar, software architect at Ansys/Apache. “For those designers who do have power as a priority, it typically starts off with things like spreadsheets. They’ll have a power budget and then, either based on previous experience with their design they are doing some kind of scaling, or they are incorporating that into a more complex design, so they’ll try to get some idea about the average power by just creating spreadsheets. The spreadsheets include typical cells that they might be using, or that they think the synthesis tool might use to implement what they’ve got. They may have some larger macros, for which they can extract power from a databook. People probably tend to start in that sort of area. They’ll try and estimate based upon spreadsheet stuff.”

“Very quickly they will find that if power is an important part of the whole signoff for the design, then they really do need some sort of power analysis tool, because with spreadsheets you can’t incorporate things like net switching power, which is a significant amount of power for a design. If you’re at RTL, you can’t accurately estimate clock trees in a spreadsheet, for example, because there are no clock trees yet. You’ve got no clock buffers, you’ve got no idea what the clock tree is going to look like, so you can’t really estimate it other than just kind of guess. On top of that, if you’re trying to do any kind of power optimization with clock gating, there’s no chance to get any decent handle on how much power you will save by doing clock gating,” he added.
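
Traynar’s point can be put in arithmetic terms. A spreadsheet can total databook power per cell, but the dynamic term, roughly P = a * C * V^2 * f, needs switched capacitance and activity that do not exist before clock trees and nets are built. The numbers below are invented purely to show the scale of what a spreadsheet misses.

```python
# Arithmetic sketch of the spreadsheet limitation described above. A
# spreadsheet can total databook numbers per cell, but dynamic switching
# power (P = a * C * V^2 * f) needs switched capacitance and activity that
# are unknown pre-layout. All numbers are made up for illustration.

cell_counts = {"NAND2": 120_000, "DFF": 40_000, "SRAM_MACRO": 8}
databook_uW = {"NAND2": 0.05, "DFF": 0.4, "SRAM_MACRO": 1500.0}   # per instance

spreadsheet_mW = sum(cell_counts[c] * databook_uW[c] for c in cell_counts) / 1000
print(f"spreadsheet estimate: {spreadsheet_mW:.1f} mW")

# What the spreadsheet cannot see: net and clock-tree switching power
alpha, v, f_hz = 0.15, 0.8, 500e6      # activity factor, volts, hertz
switched_cap_F = 2.0e-9                # total switched capacitance, unknown pre-layout
dynamic_mW = alpha * switched_cap_F * v**2 * f_hz * 1000
print(f"net/clock switching power missing from the spreadsheet: {dynamic_mW:.1f} mW")
```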

Whichever power verification methodology is adopted, a number of technologies will be used that leverage an ever-increasing amount of design data from throughout the design flow.



2 comments

Anonymous M/S verification eng says:

Most teams are just beginning to understand the impact of analog M/S on power management verification for modern SoCs. Dealing with many levels of power management on the digital side is already a tough problem. Introducing analog to achieve voltage scaling and things like DVFS makes the problem even tougher to deal with. Power-aware mixed-signal verification is the toughest challenge out there, with very little support from the EDA industry so far. Folks who get it right rely mostly on homegrown methodologies; it’s still a “black art”…

Ann Steffora Mutschler says:

Thanks for your comment. I agree there is still much work to be done, and I’m interested to see how EDA handles power aware mixed-signal verification.
