Best Practices

Strategies for power control modeling in verification…and what to watch out for.

By Tom Fitzpatrick
Active power control management for low-power designs has become a hot topic, especially with the latest update to the Unified Power Format standard.

Version 2.1 of UPF was approved by the IEEE on March 6, 2013. UPF provides the ability to specify power control for different parts of a design separately from the RTL itself. The advent of low-power design has greatly increased the complexity of functional verification, requiring the verification team not only to verify a design’s functionality, but to do so under various power conditions. A number of EDA tools are available to help with the task of verifying power control functionality.

First, let’s briefly review what’s involved in low-power design. Smaller geometries require additional active power control techniques beyond clock gating. Often the design is partitioned into power domains, whose states are controlled by power controllers. All power domains can be in an On state, in which power to the domain is switched on; some power domains can also be in an Off state, in which power to the domain is switched off. Some domains may have additional states, such as Sleep states, in which power is maintained to some extent to preserve state, but configured to minimize leakage.

Changing the state of a domain requires a sequence of control signals (clock gate, isolation, retention, power gate, etc.). Each power state of the overall system corresponds to a particular combination of domain power states. Thus, changing from one system-level power state to another may require several changes in states of individual domains (see Figure 1).

From a verification standpoint, it then becomes necessary to verify that the power controller can correctly manipulate the individual power domains and correctly move from one system-level state to another. As the design progresses, several methods can be employed to verify the power control.
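To make the idea of a system-level power state table concrete, the mapping from system states to domain states can be sketched in SystemVerilog as a small lookup structure. The domain and state names below are purely illustrative; they are not taken from the article's Figure 1.

```systemverilog
// Sketch only: encoding a system-level power state table.
// Domain names (cpu, mem, periph) and state names are hypothetical.
typedef enum {DOM_ON, DOM_OFF, DOM_SLEEP} domain_state_e;

typedef struct {
  domain_state_e cpu;
  domain_state_e mem;
  domain_state_e periph;
} sys_power_state_t;

// Each system-level power state is one row of the table:
// a particular combination of individual domain states.
localparam sys_power_state_t SYS_RUN   = '{DOM_ON,    DOM_ON,    DOM_ON};
localparam sys_power_state_t SYS_IDLE  = '{DOM_ON,    DOM_SLEEP, DOM_ON};
localparam sys_power_state_t SYS_SLEEP = '{DOM_SLEEP, DOM_SLEEP, DOM_OFF};
```

Moving from SYS_RUN to SYS_SLEEP, for example, requires state changes in all three domains, each of which must follow its own control-signal sequence.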

Figure 1: System-level power state table.

For each domain, there is a set of power supply signals (VSS, VDD, etc.) as well as power switches that are controlled by the outputs of a finite-state machine (FSM). The FSM inputs are either driven as inputs to the block or as bits in a power control register (PCR). In either event, the FSM itself must go through several transitions in order to generate the proper sequence of control signals required to effect each state transition for a given power domain (see Figure 2).
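The power-down half of such an FSM can be sketched as follows. This is a minimal, hypothetical sequencer (signal and state names are invented, and the power-up path is elided); a real controller would also handle acknowledge handshakes and the reverse sequence.

```systemverilog
// Sketch only: per-domain power-down sequencer. One FSM transition
// per step of the control protocol: gate clock -> isolate -> save
// state -> remove power. Names are hypothetical.
typedef enum logic [2:0] {RUN, GATE_CLK, ISOLATE, SAVE_STATE, PWR_OFF} pstate_e;

module pwr_ctrl_fsm (
  input  logic clk, rst_n,
  input  logic pwr_down_req,   // from a PCR bit or a block-level input
  output logic clk_en, iso_en, ret_save, pwr_en
);
  pstate_e state;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) state <= RUN;
    else unique case (state)
      RUN:        if (pwr_down_req)  state <= GATE_CLK;
      GATE_CLK:                      state <= ISOLATE;
      ISOLATE:                       state <= SAVE_STATE;
      SAVE_STATE:                    state <= PWR_OFF;
      PWR_OFF:    if (!pwr_down_req) state <= RUN; // power-up path elided
    endcase
  end

  assign clk_en   = (state == RUN);
  assign iso_en   = (state inside {ISOLATE, SAVE_STATE, PWR_OFF});
  assign ret_save = (state == SAVE_STATE);
  assign pwr_en   = (state != PWR_OFF);
endmodule
```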

Figure 2: Power control FSM

One way to organize the control would be to have a system-level FSM that can directly control all of the domains. A more scalable solution is for each IP block to have its own local power control FSM for its domains, usually requiring a system-level FSM to interact with each local FSM. In verifying the FSM, the first step is to use automatic formal analysis to ensure that all states can be reached and to perform other safety-related checks. To ensure correctness of the FSM, assertions can be written about the correct behavior of the control signals (see Figure 3), as well as for the correct sequencing of multiple domain states as the system transitions from one power state to another. These assertions can then be verified formally or in simulation.
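Assertions of this kind typically encode the required ordering of the control signals. Since Figure 3's exact protocol is not reproduced here, the properties below assume a common convention: isolation must be active before power is removed, state must be saved before power-off, and isolation is released only after power returns. Signal names are hypothetical.

```systemverilog
// Sketch only: protocol assertions for power control signals,
// assuming a typical isolate-before-power-gate ordering.
module pwr_protocol_checks (
  input logic clk, rst_n,
  input logic iso_en, ret_save, pwr_en
);
  // Power may only be removed while the domain is isolated.
  assert property (@(posedge clk) disable iff (!rst_n)
    $fell(pwr_en) |-> iso_en);

  // State must have been saved in the cycle before power is removed.
  assert property (@(posedge clk) disable iff (!rst_n)
    $fell(pwr_en) |-> $past(ret_save));

  // Isolation is released only after power is back on.
  assert property (@(posedge clk) disable iff (!rst_n)
    $fell(iso_en) |-> pwr_en);
endmodule
```

The same properties can be proven exhaustively with a formal tool or checked dynamically in simulation, as the article notes.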

Figure 3: Typical protocol for power control signals.

This analysis is a good first step in verifying that the power control subsystem itself is self-consistent and meets the requirements for each specific domain. However, we still need to verify the overall behavior of the system, which calls for higher-level tests to ensure that each power domain behaves correctly in the system when given the correct control sequences. So how do we model those control sequences?

In most SoC designs, power states are actually controlled by software running on the processor that writes to the PCR of each block (or set of blocks). If we’re trying to verify correct behavior earlier in the process, say at the block level, we can use a UVM testbench to model the power control for verification purposes. Whether the power in the block is controlled by input signals or via registers, UVM allows us to mimic the system-level behaviors to control the domain’s power state transitions.

If the power signals are driven as block-level inputs, then the power control signals can be thought of as another interface to the block. In such a case, the power control signals can be driven according to the required protocol by a UVM agent that receives power control transactions from a power control sequence. For multiple domains, each can have its own power-control agent to drive its particular protocol. Transaction layering can create a system-level sequence that can execute lower-level power control transactions on each domain to cause system-level power state transitions.
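A power-control agent of this kind is driven by ordinary UVM sequences. The sketch below shows a hypothetical transaction type and a sequence that powers a domain down and back up; all class and field names are invented, and the driver (not shown) would translate each item into the signal-level protocol.

```systemverilog
// Sketch only: UVM power-control transaction and sequence.
// Assumes the standard UVM library; names are hypothetical.
import uvm_pkg::*;
`include "uvm_macros.svh"

class pwr_txn extends uvm_sequence_item;
  `uvm_object_utils(pwr_txn)
  rand enum {PWR_UP, PWR_DOWN, SLEEP} op;
  function new(string name = "pwr_txn"); super.new(name); endfunction
endclass

class pwr_down_up_seq extends uvm_sequence #(pwr_txn);
  `uvm_object_utils(pwr_down_up_seq)
  function new(string name = "pwr_down_up_seq"); super.new(name); endfunction
  task body();
    pwr_txn t;
    `uvm_do_with(t, { op == PWR_DOWN; })  // driver runs the power-down protocol
    `uvm_do_with(t, { op == PWR_UP;   })  // then powers the domain back up
  endtask
endclass
```

A system-level virtual sequence can then run instances of such sequences on several domain agents to model a system-level power state transition.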

If instead the power is controlled via registers, the UVM testbench can employ the register layer to access the PCR as it would any other register in the block to accomplish the power state transition. This method is recommended because it allows power control transactions (which are now just a specific type of register access) to be interleaved with normal data transactions to the block. This gets us much closer to our goal of verifying that the system performs properly as we integrate the power state transitions into the overall behavior of the system.
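With the register layer, a power state transition reduces to a field write on the register model. The sketch below assumes a hypothetical register model with a PCR register containing a DOMAIN_STATE field; the model class, field names, and encodings are all invented for illustration.

```systemverilog
// Sketch only: power transitions as register-layer accesses,
// interleaved with normal traffic. All names are hypothetical.
import uvm_pkg::*;
`include "uvm_macros.svh"

class pcr_power_seq extends uvm_sequence;
  `uvm_object_utils(pcr_power_seq)
  block_reg_model regmodel;  // assumed handle to the block's register model
  function new(string name = "pcr_power_seq"); super.new(name); endfunction
  task body();
    uvm_status_e status;
    // Request power-down of the domain via its PCR field
    // (2'b00 = Off in this invented encoding).
    regmodel.PCR.DOMAIN_STATE.write(status, 2'b00);
    // ... normal data transactions to other domains can run here ...
    // Power the domain back up (2'b01 = On).
    regmodel.PCR.DOMAIN_STATE.write(status, 2'b01);
  endtask
endclass
```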

Note that in either of the two cases described above, the UVM testbench simply controls the power state transitions as it would any other functionality. It is not necessary, and in fact counterproductive, to attempt to manage the power state transitions through the use of the run-time phasing mechanism in UVM. The only run-time phase that is required is the run_phase. This is because it is particularly important to verify that any domain can be powered down or up while other domains are continuing to run normally. Using run-time phasing to model the power states will result either in incomplete verification, or in a hopelessly complex phase schedule and related infrastructure.

Once you are ready to assemble your SoC, you can easily move the UVM sequence-based power control to that level as well. If your processor model isn’t yet available, or if you choose not to include it for initial integration testing, you can simply use a UVM agent as the master on your bus/fabric and have it issue PCR commands to the different domains as necessary. When you ultimately include your processor model, these sequences will be replaced by actual software instructions executed by the CPU.

This transition is best handled if the commands can be expressed at a higher level of abstraction than simple UVM sequences. Mentor’s tool enabling such abstraction is Questa inFact, which models the power control, along with other CPU-driven traffic, as a series of concise graphs that are mapped either to UVM sequences or to actual software commands for the CPU. Questa inFact’s software-driven verification capability allows the graphs to communicate, ensuring maximum coverage in minimum execution time across all power states and all functionality, including the software’s ability to interact with external stimuli provided to the peripherals in the SoC model.

Of course, the ultimate test is to run actual application-level software, including drivers and the operating system, on the SoC model, and this requires the model to execute in an emulator. Mentor’s Veloce TBX emulator lets you keep your testbench infrastructure while executing your model in the emulator, providing orders-of-magnitude faster execution of actual software, without sacrificing debug or coverage analysis capabilities.

Any of the techniques I’ve described here will help you get a better handle on the verification of your low-power design, but the important thing is to realize that you will have to adjust your thinking a little bit to account for the additional complexity.

—Tom Fitzpatrick is a verification evangelist at Mentor Graphics.