What Does “Low Power Optimization” Mean To You?

Different power concerns lead to different solutions.

As I was researching some new low power capabilities, I asked this question of nearly every designer I met: “How important is low power optimization?” It turns out that it’s a pretty useless question because of course it’s important to just about everyone. After all, reducing power improves reliability and reduces design costs. And for chips destined for certain applications, such as mobile or IoT, low power is a primary design constraint.

A more useful question is: “What does low power optimization mean to you?” Then you get some useful answers. And then the EDA community can offer a solution that is relevant to your specific design task.

I found that the answer to that question usually falls into one of three buckets, described below.

Engineers have different power concerns depending on where their work lives in the overall flow. For example, a system architect has different concerns than a signoff engineer, even if both are optimizing for power. The end application may also affect how and when power optimization is applied. In a mobile application, for example, extending battery life is a primary design goal and will often be considered earlier in the design cycle than it would be in, say, an automotive application.

Below, we will look at the three most popular answers to the question in a little more detail.

“I need to create the lowest power design possible.”
When battery life is a selling point of the end product (such as a mobile phone or other portable device), this tends to be the answer. It is also an especially difficult problem. On one hand, the rule of thumb is that 80% of the power-saving opportunity is locked in by the time the RTL is coded; put another way, there can be a 5X (or more) difference in power between a power-optimized architecture and a non-power-optimized one.

On the other hand, engineers do not have the same gut instinct for the power implications of high-level design decisions that they do for the performance and area implications. To make matters worse, it is difficult at best to estimate power impact accurately before the RTL exists.

The solution to this quandary is not to rely on pre-RTL power estimates at all. Instead, use high-level synthesis to create multiple RTL implementations from a single high-level description, then simulate the RTL to get accurate, vector-based power estimates. From there, use that data to quantitatively evaluate the different implementations and refine them until you have your “final” RTL for implementation.

Solution:

  1. Create a high-level C/C++/SystemC behavioral model (see the sketch after this list)
  2. Use high-level synthesis (HLS) to generate multiple RTL implementations
  3. Quantitatively evaluate for power, performance, area, and routability
  4. Use feedback to refine architectures and iterate
  5. Continue to leverage power optimizations throughout the implementation flow
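
To make steps 1 and 2 a little more concrete, here is a minimal sketch of the kind of untimed C++ behavioral model an HLS tool starts from; the FIR filter, function name, and tap count are illustrative choices, not taken from any particular tool or design. From this single description, an HLS tool can generate several different RTL microarchitectures (fully unrolled, resource-shared onto a few multipliers, or pipelined) by applying tool-specific directives rather than by rewriting the source.

```cpp
// Illustrative untimed C++ behavioral model of a 16-tap FIR filter.
// An HLS tool can schedule this one description into several RTL
// microarchitectures (e.g., one multiplier reused over 16 cycles, or 16
// multipliers in parallel) via tool-specific directives; the names and
// tap count here are arbitrary.
#include <array>
#include <cstdint>

constexpr int TAPS = 16;

int32_t fir(const std::array<int16_t, TAPS>& coeff, int16_t sample) {
    // Delay line; HLS maps this function-static state to registers.
    static std::array<int16_t, TAPS> shift_reg{};

    // Shift in the new sample.
    for (int i = TAPS - 1; i > 0; --i) {
        shift_reg[i] = shift_reg[i - 1];
    }
    shift_reg[0] = sample;

    // Multiply-accumulate; the HLS scheduler decides how many multipliers
    // to instantiate and how many clock cycles to spend on this loop.
    int32_t acc = 0;
    for (int i = 0; i < TAPS; ++i) {
        acc += static_cast<int32_t>(coeff[i]) * shift_reg[i];
    }
    return acc;
}
```

Because every candidate architecture is generated from the same source, the vector-based power estimates gathered in step 3 can be compared apples to apples, and the winning trade-off is chosen with data rather than gut feel.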

“I need to minimize power wherever I can.”
Even for applications in which power is not the primary concern, it is still important. Designers must ensure that every available power saving is captured as the design moves through the implementation flow. Power can be optimized at every stage of that flow, from architecture through signoff.

This highlights a couple of hidden requirements. First, to be effective, optimizations must feed forward through the implementation flow; that is, one step (or tool) cannot undo what the previous step did, nor can it apply different assumptions about power, timing, or area trade-offs. Second, the design views and power analysis must be consistent throughout the implementation flow; otherwise, you can’t evaluate or correlate results from one tool to the next.

Solution:

  1. Leverage power optimization capabilities of every design stage:
    • HLS: clock-gating, memory accesses, FSM structure
    • Logic synthesis: clock-gating, glitching, MBCI, DFT, leakage
    • Place-and-route: power intent- and activity-driven floor planning and placement
    • Multi-mode multi-corner optimization throughout the flow
  2. Use RTL and/or gate-level power analysis to find additional optimization opportunities (see the sketch after this list)
  3. Formally verify that low power optimizations preserve functionality and that the implementation matches the power intent
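
As a rough illustration of the vector-based analysis in item 2 (a sketch of the underlying arithmetic, not any tool's actual engine), the snippet below applies the standard dynamic power relation P = α · C · V² · f, plus leakage, to a list of nets. The Net structure and every number in it are assumptions for illustration; production tools derive toggle rates from simulation activity (e.g., VCD or SAIF files) and capacitance and leakage values from library and parasitic data.

```cpp
// Minimal sketch of vector-based power estimation. Production tools read
// switching activity from simulation (VCD/SAIF) and capacitance/leakage
// from library and parasitic data; the Net struct and the numbers below
// are purely illustrative.
#include <cstdio>
#include <vector>

struct Net {
    double toggle_rate;    // alpha: average toggles per clock cycle (from simulation)
    double capacitance_f;  // switched capacitance, in farads
    double leakage_w;      // static leakage attributed to the driving cell, in watts
};

// P_total = sum_i(alpha_i * C_i * Vdd^2 * f) + sum_i(leakage_i)
double estimate_power(const std::vector<Net>& nets, double vdd, double freq_hz) {
    double dynamic = 0.0, leakage = 0.0;
    for (const Net& n : nets) {
        dynamic += n.toggle_rate * n.capacitance_f * vdd * vdd * freq_hz;
        leakage += n.leakage_w;
    }
    return dynamic + leakage;
}

int main() {
    const std::vector<Net> nets = {
        {0.20, 5e-15, 2e-9},   // lightly switching net
        {0.90, 12e-15, 3e-9},  // busy net: a large dynamic power contributor
    };
    std::printf("Estimated total power: %.3e W\n", estimate_power(nets, 0.8, 1.0e9));
    return 0;
}
```

Nets with a high α·C product dominate the dynamic power, which is where activity-reducing optimizations such as clock gating pay off most.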

“I need this chip to work!”
Reliable working silicon is the goal of every hardware engineer, but ensuring it falls to the signoff engineers. They care very much about power consumption, even if the design itself has no explicit power constraints. IR drop must be analyzed and optimized in the context of the overall timing constraints to guarantee stability. Power densities must be analyzed and optimized to reduce the risk of electromigration.

Any late iterations during power closure will put the overall project schedule at risk, especially if the iteration requires significant changes to the netlist or layout. Worse, inaccurate co-analysis of power and timing across the system can cause an even more costly silicon re-spin.

A better approach is to consider power early—even if not optimizing for it—using a consistent power analysis engine throughout. This allows any critical issues to be identified as early as possible.

Solution:

  1. Consistent static and dynamic power analysis throughout the implementation flow identifies issues as early as possible
  2. EMIR (electromigration/IR drop) signoff integrated with timing signoff and implementation ensures fewer iterations and more predictable closure (see the sketch after this list), including the following considerations:
    • Power rail width optimization
    • Power switch optimization
    • Decoupling capacitor optimization
    • Multi-mode multi-corner (MMMC) analysis
  3. Electrical-thermal co-simulation of the chip, package, and PCB
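
To see why item 2 ties EMIR and timing together, consider the deliberately simplified sketch below: a cell that sees a lower effective supply runs slower, so a path that closes timing at nominal Vdd can fail once the local rail drop is applied. The linear delay sensitivity and all of the numbers are assumptions for illustration only; real signoff solves the full power-grid network and feeds per-instance voltages back into static timing analysis.

```cpp
// Deliberately simplified sketch of power/timing co-analysis: derate each
// cell delay by its local IR drop and re-check the path against the clock
// period. The linear sensitivity and all numbers are assumptions for
// illustration, not characterized data.
#include <cstdio>
#include <vector>

struct CellArc {
    double nominal_delay_ns;  // delay characterized at nominal Vdd
    double ir_drop_v;         // local rail drop seen by this instance, in volts
};

int main() {
    const double vdd = 0.80;         // nominal supply (V)
    const double sensitivity = 2.0;  // assumed: ~2% more delay per 1% supply drop
    const double clock_period_ns = 1.0;

    const std::vector<CellArc> path = {
        {0.32, 0.08}, {0.36, 0.09}, {0.28, 0.07},
    };

    double nominal = 0.0, derated = 0.0;
    for (const CellArc& c : path) {
        nominal += c.nominal_delay_ns;
        derated += c.nominal_delay_ns * (1.0 + sensitivity * c.ir_drop_v / vdd);
    }

    std::printf("Path delay: %.3f ns nominal, %.3f ns with IR drop (limit %.3f ns)\n",
                nominal, derated, clock_period_ns);
    if (nominal <= clock_period_ns && derated > clock_period_ns) {
        std::printf("Path passes at nominal Vdd but fails once rail drop is applied.\n");
    }
    return 0;
}
```

It also shows why late fixes are so disruptive: widening rails, adding power switches, or inserting decoupling capacitors after routing perturbs both the grid and the surrounding placement and routing, so timing must be re-verified.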

Common requirements
Remember that, whatever your answer, all engineers and all design targets share some common requirements:

  • Designers must employ a consistent approach to power analysis throughout the implementation flow. Consistent analysis enables predictable results and signoff. Moreover, the analysis must be accurate enough to support sound design decisions at any point in the flow.
  • Power intent must be applied and refined throughout the entire design flow, and the consistency of power intent and its implementation in silicon must be verified.
  • Finally, the flow should support power optimizations at all levels, from high-level synthesis through signoff. These optimizations should feed forward, meaning that work done by one tool is not undone or redone by the next.

In the end, a complete system design enablement strategy provides a predictable path to low power optimization, no matter how you define it.


