Early and accurate power analysis is critical to enabling successful products for target markets.
By Chetandeep Singh and Ravi Tangirala
Smaller geometry nodes offer cost and performance advantages that encourage their adoption. Yet they present a new set of challenges for IC manufacturers: Though transistors are smaller, they leak more current.
This is an important issue as the demand for high-performance, battery-operated systems-on-chip (SoCs) in communication and computing shifts the focus from traditional constraints (such as area, performance, cost, and reliability) to power consumption. Faster gaming and better graphics at the cost of much shorter battery life do not sit well with end consumers. But this concern is not limited to the mobile and tablet markets; everybody needs to reduce power. Low power design and optimizing power consumption have become key challenges for all chip designers.
There are primarily two types of power dissipation in an SoC: static and dynamic. As the name suggests, dynamic power dissipation results from the charging and discharging of the capacitive elements in an SoC. Static power, on the other hand, is dissipated through the various leakage paths between the supply and ground. Power analysis uses two key metrics for measuring the power consumption of an SoC: average power and peak power. Average power over a certain set of SoC functions is commonly used to estimate overall power consumption. Peak power analysis serves multiple purposes, such as identifying activity hot spots in the SoC and understanding the thermal and current requirements of the IC.
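As a toy illustration of the difference between the two metrics, both can be computed from a per-cycle power trace. The trace values below are invented for illustration; a real flow would derive them from simulation or emulation activity data.

```python
# Hypothetical per-cycle power samples (in mW) for a short test window.
trace_mw = [1.2, 1.5, 9.8, 1.3, 1.1, 8.7, 1.4]

# Average power drives the overall energy/battery budget.
average_power = sum(trace_mw) / len(trace_mw)

# Peak power drives supply-network sizing and thermal limits.
peak_power = max(trace_mw)

print(f"average = {average_power:.2f} mW, peak = {peak_power:.2f} mW")
```

Note how a handful of high-activity cycles dominates the peak while barely moving the average, which is why the two metrics are analyzed separately.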
Dynamic power dissipation is directly proportional to the frequency of operation. Advanced technology nodes use lower gate thresholds to improve performance and lower supply voltages to keep dynamic power low. But this also leads to higher gate leakage, which in turn leads to higher static power dissipation — a major factor for standby power, as expressed below.
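The statements above follow the standard first-order CMOS power model, where $\alpha$ is the switching activity factor, $C$ the switched capacitance, $V_{dd}$ the supply voltage, $f$ the clock frequency, $V_{th}$ the threshold voltage, $V_T$ the thermal voltage, and $n$ a process-dependent factor:

```latex
P_{total} = P_{dynamic} + P_{static}, \qquad
P_{dynamic} \approx \alpha \, C \, V_{dd}^{2} \, f
```

```latex
P_{static} = V_{dd} \, I_{leak}, \qquad
I_{leak} \propto e^{-V_{th}/(n V_{T})}
```

The quadratic $V_{dd}$ term explains why supply voltages are lowered at advanced nodes, and the exponential dependence on $V_{th}$ explains why lowering thresholds for speed inflates leakage.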
The power chart below shows a relative increase in power consumption over the next couple of decades as nodes advance.
Static power does not depend on frequency but is influenced by the gate threshold, supply voltage, temperature, and process geometry. It is dissipated in several ways, such as reverse-biased diode leakage from the diffusion layers and the substrate. The largest share of static power dissipation, however, comes from source-to-drain sub-threshold leakage current. This is a consequence of reducing threshold voltages to switch the gate faster, gaining higher performance at the cost of higher leakage current.
So, how do we compensate for that? Let’s talk about low power design first.
Power gating remains one of the most effective techniques for reducing leakage power. It involves switching off a part of the design, called a power domain, when its functionality is not required. Retention logic on state elements can be used to bring a switched-off domain back to its original state; for example, when a laptop is put into sleep mode and woken up again. A circuit can contain several domains, each operating at a different voltage level depending on the speed it needs to run at. Level shifters may be required when signals cross between domains operating at different voltage levels, and isolation cells can be used to clamp the output signals of a powered-off domain to a deterministic value (generally 0 or 1) or to their previous state.
These power-domain techniques require early and thorough functional verification. Typically, however, this has been done only at the gate level, which imposes limitations such as slow simulation speeds, difficult debugging, and the additional engineering resources needed to resolve functional issues. Verification at the RTL can yield more fruitful results, but directly inserting power control logic into the RTL has its own disadvantages: the RTL must be re-verified after it is updated with power information, coding style is affected when designers instantiate retention registers or explicitly route power control signals, and so on.
Hence there has been a growing need to separate the RTL from the power intent specification, and a concerted focus on adopting a standard for power intent that can be accepted by every tool in the design flow, at any abstraction level. Written in Tcl, the Unified Power Format (UPF) from Accellera addresses this need, capturing low power design intent for use in simulation, synthesis, and routing.
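A minimal sketch of what such a power intent specification looks like is shown below. The domain, net, port, and instance names (`PD_CPU`, `VDD_CPU`, `u_cpu`, `cpu_pwr_en`) are invented for illustration; a production UPF file would also declare supply ports, connect supply nets, and add the control commands for isolation and retention.

```tcl
# Hypothetical UPF fragment: a switchable CPU domain with
# isolation and retention. Names are illustrative only.
create_power_domain PD_CPU -elements {u_cpu}
create_supply_net VDD_CPU -domain PD_CPU

# Power switch gating the domain supply under cpu_pwr_en control
create_power_switch sw_cpu -domain PD_CPU \
    -input_supply_port  {vin  VDD} \
    -output_supply_port {vout VDD_CPU} \
    -control_port       {ctrl cpu_pwr_en} \
    -on_state           {on vin {cpu_pwr_en}}

# Clamp outputs of the powered-off domain to 0
set_isolation iso_cpu -domain PD_CPU \
    -isolation_power_net VDD -clamp_value 0 -applies_to outputs

# Retain state elements so the domain can resume where it left off
set_retention ret_cpu -domain PD_CPU -retention_power_net VDD
```

Because this intent lives outside the RTL, the same specification can drive simulation, synthesis, and place-and-route without touching the functional source.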
Another reason that power analysis needs to begin early in the design flow is that power, in general, is not determined by hardware alone; software is an important contributor as well. "Angry Birds" running on a mobile platform drains the battery far faster than answering phone calls. Hence the need to do system-level power verification and management with real-world stimuli, including application and embedded software, is gathering pace. RTL simulation does not offer the level of accuracy and completeness required for verifying low power intent in full chip designs (which sometimes reach billions of gates), because of the limited number of cycles a simulation run can churn out in the context of real-life stimuli.
For these reasons, emulation, which has traditionally been used for accelerating functional verification, is now being heavily used for low power verification because it has the speed and capacity to verify the power management techniques that have been implemented in the full-chip design.
Multi-domain power management technique
Now, let’s move on to power estimation.
Chip developers realized that single-core processor clock frequencies could only rise so far while keeping power dissipation in check. Then came multi-core processors and parallel computing, which offered equivalent performance at lower frequencies than single-threaded, single-core processors. With lower frequencies came lower power dissipation, and scaling with smaller geometries has facilitated the rise of multi-core designs.
But this creates a problem of a different kind: both the system-level and application software need to be written in a multi-threaded way to exploit the target hardware. This is where combined software-hardware power analysis becomes extremely important for determining how efficiently each core is running.
There have been various advances in the estimation domain. For dynamic power calculation, the Switching Activity Interchange Format (SAIF) has become a de facto standard for calculating average and peak power. A SAIF file contains the toggle activity of the design's nets, captured over a particular test, and serves as input to power estimation tools. Because of the cycles-per-second limitations of software simulation, EDA vendors have made major efforts to introduce SAIF generation into emulation, making software/hardware power verification possible in a practical time frame. The heat is also on IP vendors to provide raw power data for industry-standard benchmarks based on long sequences of video frames or transmission packets, and they are fully engaged in power verification to keep power dissipation trending in the right direction in newer generations of IP.
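Conceptually, a power estimation tool combines the toggle counts from a SAIF file with per-net capacitances from a library or parasitics database. The sketch below shows that combination using the first-order dynamic power model; the net names, toggle counts, capacitances, and supply values are hypothetical, and a real tool would read them from the SAIF file and the design database rather than hard-coding them.

```python
# Sketch: dynamic power from SAIF-style toggle counts.
VDD = 0.9          # supply voltage in volts (illustrative)
DURATION = 1e-6    # capture window of the SAIF data, in seconds

# net name -> (toggle count over the window, effective capacitance in farads)
nets = {
    "clk":      (2000, 5e-15),
    "data_bus": (300,  20e-15),
    "ctrl":     (50,   8e-15),
}

def dynamic_power(nets, vdd, duration):
    """Sum per-net switching power over the capture window."""
    total = 0.0
    for toggles, cap in nets.values():
        # Each toggle dissipates roughly 0.5 * C * Vdd^2 of energy;
        # dividing total energy by the window length gives power.
        total += 0.5 * cap * vdd ** 2 * toggles / duration
    return total  # watts

print(f"estimated dynamic power: {dynamic_power(nets, VDD, DURATION) * 1e3:.4f} mW")
```

The same structure scales to millions of nets, which is why toggle data from emulation, rather than slow gate-level simulation, makes full-chip software/hardware power estimation practical.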
General power estimation flows followed across the industry are shown in figure 3: a gate-level netlist-based power estimation flow (top) and an RTL-based power estimation flow (bottom).
As Moore's Law suggests, process nodes will continue to advance, and the balancing act between power, performance, and die size will continue to determine success. Early and accurate power analysis will enable successful products for the target market.
Happy power analyzing.
—Chetandeep Singh is technical marketing engineer at Mentor Graphics; Ravi Tangirala is customer success manager.