Test Becomes Power-Aware

Many of the same verification techniques apply, but power management adds complexity, and errors can creep in where there are gaps.

Power-aware test plans are changing, becoming far more extensive than the minimalist plans that were common just a few years ago.

In the past, engineering teams would determine whether they could power their design up and power it down, and then they’d declare it done. “Sometimes they would find they could power it up and power it down once, but they couldn’t power it up a second time because they’d forgotten to retain some state that was essential or because the power-up signals were coming from domains that were powered down and needed to be powered up,” said Erich Marschner, verification architect at Mentor Graphics. “The earliest kind of testing we saw was very, very simple — way too limited to really be demonstrating anything.”
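The retention bug Marschner describes can be illustrated with a toy model, entirely hypothetical, of a power domain whose registers lose state on power-down unless retention was explicitly saved. A single power-up/power-down cycle appears to pass, which is exactly how the missing retention step goes unnoticed:

```python
# Toy model (hypothetical, for illustration only) of a power domain whose
# register state is lost on power-down unless retention was saved first.
class PowerDomain:
    def __init__(self):
        self.powered = False
        self.regs = {}          # live register state
        self._retained = None   # saved copy, if retention was requested

    def save_retention(self):
        self._retained = dict(self.regs)

    def power_down(self):
        self.powered = False
        self.regs = {}          # state is lost unless it was retained

    def power_up(self):
        self.powered = True
        if self._retained is not None:
            self.regs = dict(self._retained)

# One up/down cycle "works", masking the forgotten retention step:
dom = PowerDomain()
dom.power_up()
dom.regs["boot_cfg"] = 0xA5     # essential state written after boot
dom.power_down()                # no save_retention() was called
dom.power_up()
assert "boot_cfg" not in dom.regs   # state silently lost -- the bug
```

A test that only checks "did it power back up?" passes here; only a test that checks the retained state after the second power-up catches the error.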

That has improved, but even today it’s not uncommon to confine power-aware test to a small fraction of the overall verification environment. “When you think about it, all of the tests have to run in the power-aware mode ultimately because the real system is going to be power-aware, so it’s a little strange for people to be running 5% to 10% of their test cases, or to have only 5% to 10% as many power-aware tests as there are regular, functional tests,” Marschner said.

Robert Ruiz, senior product marketing manager for test at Synopsys, agreed: “At one time, the way power-aware test plans were created, the engineering team would generate a standard set of patterns, and they’d run that. If it didn’t work, they’d debug it and figure out why — maybe there was some IR drop — and then redirect the ATPG with whatever means they had. It wasn’t really planning, it was just ad hoc reaction.”

Fast forward to today, when much of the methodology in use starts by considering what the functional spec says before there is any design intent, including whether there is a power budget and a maximum power operating limit in the design. “One approach has been to say, ‘This is what the part will work at,’” Ruiz said. “In other cases what’s happened is using some rail analysis to make a determination for developing a less costly test program — we know there are going to be power implications so we’re going to design our power grids so they can handle the extra power beyond the functional budget.”

Fortunately, some engineering teams have begun realizing that this is a risk area, Marschner observed. “They are potentially not catching all of the problems that they might catch by just running a few tests, so they’ve started thinking about running power-aware simulation for the entire regression suite without really thinking about whether that’s going to help or not. One of the questions is, ‘Did your normal, functional test even catch power errors?’ Sure, it might, but if you don’t design a test suite that is focused on the kind of issues that power management can create, you’re not likely to catch all of the errors. You might catch more, but probably not all. Even the idea of running the entire functional regression suite in a power-aware mode, while it is an improvement, isn’t really the best situation.”

Another area with room for improvement is conceiving of the classes of problems that errors in power management can cause, and then modeling those error classes, Marschner asserted. It also is important to make sure the right monitoring mechanisms are in place to detect those errors, and that the right stimulus is being used to trigger them wherever they can occur. “It’s a classic coverage-driven verification problem, but now for power-aware.”

Developing power-aware plans
Specifically, power-aware test plans come down to extending the general concept of a test plan for coverage-driven verification to power-aware verification. The test plan should contain all of the test points essential to demonstrate that power management works in all of the different scenarios and corner cases that can occur. The monitoring mechanisms and coverage data collection must be in place to gather evidence that things are working correctly, the right checks must be in place to detect errors when they are not, and the right stimulus must be used to get into all of those situations and thereby achieve the coverage needed. It’s a collection of all of those things, Marschner noted.

Krishna Balachandran, product marketing director for low power at Cadence, added that power-aware test plans require careful consideration of the power architecture for the design under test, which is no different from the challenge of creating test plans for non-power-related functional verification.

“Test plans are all about achieving high coverage,” he said. “Coverage of all power modes in which the design might find itself is important to get a good sense of confidence that the design is functional under all power modes. Additionally, power transitions must be accounted for and included in power-aware test plans. There are efforts to create and reuse test plans by extending UVM (Universal Verification Methodology) to low power. The idea is to use the power intent to generate appropriate tests that adequately cover power states and transitions that the design has been architected to go through. However, UVM low power goes beyond generation of the necessary power-aware tests and includes monitors and checkers that can also help with collecting and viewing coverage information to answer the critical question of how well the test plan has been executed.”
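The state- and transition-coverage question Balachandran raises can be sketched outside any UVM framework. In this minimal Python tracker, the state names and the legal-transition table are hypothetical stand-ins for what the power intent (e.g., a UPF description) would actually define:

```python
# Minimal sketch of power-state/transition coverage collection.
# State names and the legal-transition table below are hypothetical,
# standing in for what the design's power intent would define.
LEGAL = {
    ("ON", "SLEEP"), ("SLEEP", "ON"),
    ("ON", "OFF"), ("OFF", "ON"),
    ("SLEEP", "OFF"),
}

class PowerCoverage:
    def __init__(self, legal):
        self.legal = legal
        self.states_hit = set()
        self.trans_hit = set()

    def sample(self, prev, curr):
        # A checker would flag transitions the architecture never allowed.
        assert (prev, curr) in self.legal, f"illegal transition {prev}->{curr}"
        self.states_hit.update((prev, curr))
        self.trans_hit.add((prev, curr))

    def report(self):
        all_states = {s for pair in self.legal for s in pair}
        return (len(self.states_hit) / len(all_states),
                len(self.trans_hit) / len(self.legal))

cov = PowerCoverage(LEGAL)
for prev, curr in [("ON", "SLEEP"), ("SLEEP", "ON"), ("ON", "OFF")]:
    cov.sample(prev, curr)
state_cov, trans_cov = cov.report()   # all 3 states hit, 3 of 5 transitions
```

The report makes the gap concrete: a suite can touch every power mode yet still leave legal transitions (here, OFF-to-ON and SLEEP-to-OFF) unexercised.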

The different aspects of power-aware test are addressed by different EDA tools and technologies, said Guillaume Boillet, technical marketing manager at Atrenta. These can be broken into structural tools, such as static checkers, and functional tools, such as dynamic simulators and formal checkers.

“In the ‘old days,’ when complexity was still manageable, verification engineers used a combination of static checks to ensure all low-power elements were in place, but this had some gaps, in particular due to lack of a complete power intent standard,” Boillet said. “Simulation was also used to try to cover every low-power scenario. Now that static checkers and power intent standards are more mature, design teams implement vastly more complicated power partitioning schemes that are impossible to fully cover by simulation.”

To be sure, IC test plans are becoming more complex with the use of advanced low-power design techniques. ICs are typically designed to operate reliably in functional mode, and all the tradeoffs on power, performance and area are made with the functional requirements in mind. Test plans, however, are required to provide the best coverage for test while minimizing tester time, noted Arvind Shanmugavel, senior director of applications engineering at ANSYS. “With the peak test mode power now exceeding 20 to 30 times the nominal functional power, power requirement during test mode has become one of the critical aspects of design for today’s low power ICs.”

He suggested that test plans typically consider the following aspects during the test architecture planning:
• The low-power techniques used in the design, such as power gating, clock gating and voltage islands.
• Peak power consumption during test.
• Power noise coupling due to high di/dt.

In addition, Shanmugavel said, power-aware test architectures typically use scan chain compression logic and scan chain partitioning to minimize toggle activity; scan chain ordering and advanced ATPG algorithms for minimizing power; and clock tree modification to minimize di/dt during implementation to minimize the impact of power.
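One of the toggle-minimizing ATPG ideas Shanmugavel mentions can be illustrated with a simplified sketch (real ATPG fill heuristics are far more sophisticated): comparing how don't-care bits in a scan pattern are filled. Random fill maximizes transitions during scan shift, while repeat-last ("adjacent") fill keeps runs of identical bits and so cuts shift power:

```python
import random

def shift_toggles(pattern):
    """Count bit transitions as the pattern shifts through a scan chain."""
    return sum(1 for a, b in zip(pattern, pattern[1:]) if a != b)

def fill(pattern, strategy, rng=None):
    """Replace don't-care bits ('X'): 'random' fill vs 'adjacent' fill,
    which repeats the last assigned bit value."""
    out, last = [], "0"
    for bit in pattern:
        if bit == "X":
            bit = rng.choice("01") if strategy == "random" else last
        out.append(bit)
        last = bit
    return "".join(out)

# A typical ATPG pattern: a few care bits, mostly don't-cares (illustrative).
pat = "1" + "X" * 30 + "0" + "X" * 30 + "1"
rand_fill = fill(pat, "random", random.Random(0))
adj_fill = fill(pat, "adjacent")
# Adjacent fill yields far fewer transitions, hence less shift power.
assert shift_toggles(adj_fill) < shift_toggles(rand_fill)
```

Here adjacent fill leaves only two transitions in the whole pattern (at the two care-bit boundaries), while random fill toggles roughly every other bit, which is why fill strategy has such a direct effect on test-mode power.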

So what’s the best starting point? A power-aware test plan should begin at the design phase, which is where test and design become increasingly integrated, Ruiz concluded.