How Software-Driven Tests Support Concurrent Power/Performance Analysis

More complex power management techniques are impacting system performance in ways that are increasingly difficult to predict.


There’s always been an intimate relationship between performance and power—and it’s one that is acutely affected by architecture. Architectural innovation can yield orders of magnitude improvements in performance/power metrics. For example, multi-core and heterogeneous-core systems with purpose-specific hardware accelerators have grown in popularity. These configurations are one way in which the design community has innovated to meet demands for both high performance and energy efficiency.

There is a trade-off involved and a delicate balance required to achieve the right performance/power mix. Designers need to understand how every incremental refinement and optimization will impact that mix. Making the wrong choice can be disastrous, as recent high-profile product failures have shown. That’s why concurrent power/performance analysis is critical for today’s SoC designs.

Measuring performance parameters

When it comes to performance, design teams are most interested in throughput, latency, and clock frequency. Designers probably have the best understanding of clock frequency, tapping into static timing tools to measure this parameter. Accuracy boils down to the level of detail derived from the final physical implementation. At advanced nodes, however, analysis methods take on more importance. Here, the methods could involve path-based tracing with statistical variation and full waveform propagation.

Measuring throughput and latency depends very much on the architecture. To achieve the best accuracy, designers need to carefully develop the right scenarios to examine. Software virtual prototypes aren’t always the answer here. Many have found that, particularly for complex SoCs, emulation delivers both the speed and the accuracy needed to properly examine a variety of scenarios. To develop these scenarios, it’s often most effective to use known traces through execution of system software. At the same time, it’s not always practical to have the full software stack up and running while a design is still in progress. Software-driven tests are a good option here, as they can cover a broad spectrum of the execution trace space. Cadence’s Perspec System Verifier is an example of a tool that can automate system-level, coverage-driven test development.
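To illustrate the idea behind coverage-driven scenario generation (this is a minimal sketch, not Perspec’s actual input format—the core names, power states, and legality constraint below are hypothetical), a test generator can enumerate the cross-product of system resources and workloads, filter out illegal combinations, and measure how much of the scenario space the resulting tests exercise:

```python
import itertools

# Hypothetical system model -- names and the legality rule are for
# illustration only, not any tool's actual API.
CORES = ["big0", "big1", "little0"]
POWER_STATES = ["active", "retention", "off"]
WORKLOADS = ["dma_copy", "crypto", "video_decode"]

def generate_scenarios():
    """Enumerate (core, power_state, workload) tuples, keeping only
    legal combinations: a workload can only run on an active core."""
    return [
        (core, pstate, wl)
        for core, pstate, wl in itertools.product(CORES, POWER_STATES, WORKLOADS)
        if pstate == "active"
    ]

def coverage(scenarios):
    """Fraction of possible (core, workload) pairs exercised."""
    hit = {(core, wl) for core, _, wl in scenarios}
    return len(hit) / (len(CORES) * len(WORKLOADS))
```

A real tool adds constraint solving, randomization, and mapping of each abstract scenario to executable test code on the target cores, but the enumerate-constrain-measure loop is the same basic shape.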

Understanding power

With regard to power consumption, most SoC designers focus on managing static (leakage) and dynamic (switching) power. Multi-threshold libraries, back-bias, and voltage domains provide three techniques to manage leakage power. Since leakage has an exponential dependency on operating temperature, which can vary quite a bit over time and location on the die, many designers are incorporating thermal analysis into their timing and power signoff processes.
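The exponential temperature dependence mentioned above can be sketched with a simplified subthreshold-leakage model (illustrative only, not a signoff model—the threshold voltage, its temperature slope, and the slope factor below are assumed typical values): as temperature rises, the thermal voltage kT/q grows and the threshold voltage drops, and both effects push leakage up exponentially.

```python
import math

K_B_EV = 8.617e-5      # Boltzmann constant, eV/K
VTH_25C = 0.35         # assumed threshold voltage at 25 C, volts
VTH_TEMP_SLOPE = 1e-3  # assumed Vth drop per kelvin, volts/K
N_FACTOR = 1.5         # assumed subthreshold slope factor
T_REF = 298.15         # 25 C in kelvin

def relative_leakage(t_kelvin):
    """Leakage relative to a reference current I0, using the
    simplified model I ~ I0 * exp(-Vth / (n * kT/q))."""
    v_thermal = K_B_EV * t_kelvin                        # kT/q, volts
    v_th = VTH_25C - VTH_TEMP_SLOPE * (t_kelvin - T_REF) # Vth falls with T
    return math.exp(-v_th / (N_FACTOR * v_thermal))

# Leakage at 125 C relative to 25 C -- with these assumed parameters,
# roughly one to two orders of magnitude higher
ratio = relative_leakage(398.15) / relative_leakage(298.15)
```

This is why a single worst-case temperature corner can badly over- or under-estimate leakage, and why thermal analysis is increasingly folded into power signoff.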

Dynamic power can be trickier to understand, mainly because of the switching aspect. How the final power data will be used determines the switching scenario to run, the purpose of the analysis, and the format of the results. See Table 1 for guidelines that can help you map power type, purpose, format, and tools.
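The scenario-dependence of dynamic power follows directly from the classic CMOS switching-power relation, P = αCV²f: supply voltage, effective capacitance, and clock frequency are fixed by the implementation, but the activity factor α comes from the workload being run. A small sketch (the numeric values below are assumed for illustration):

```python
def dynamic_power_watts(alpha, c_eff_farads, v_dd, f_hz):
    """Classic CMOS switching-power estimate: P = alpha * C * Vdd^2 * f.
    alpha is the activity factor (fraction of the capacitance switched
    per cycle); c_eff_farads is the effective switched capacitance."""
    return alpha * c_eff_farads * v_dd ** 2 * f_hz

# Assumed example: 15% activity, 1 nF effective capacitance,
# 0.8 V supply, 1 GHz clock
p = dynamic_power_watts(0.15, 1e-9, 0.8, 1e9)  # ~0.096 W
```

Because α can easily vary severalfold between an idle loop and a worst-case stress pattern, the choice of switching scenario dominates the accuracy of any dynamic power estimate.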


The role of functional verification

Functional verification has emerged as a foundation for many other types of analyses. After all, incorporating power/performance analysis as an independent effort in an SoC development schedule can be cost-prohibitive. By instead investing in functional verification, design teams are able to amortize the cost of verification as well as bring together their device configuration, test setup, and reporting practices. Figure 1 depicts Cadence’s low-power design flow, which demonstrates the advantages of integrating flows for design, implementation, verification, and analysis.


Figure 1: Cadence Low-Power Design Flow

SoC designers are utilizing more complex power management techniques in their designs. These techniques, in turn, are impacting system performance in ways that are more complicated to predict. As a result, design teams are performing concurrent power/performance analysis via extensions in their standard functional verification/regression process. At the SoC level, software-driven hardware testing has emerged as a very effective way to evaluate performance and power under real user scenarios.