Extending Power Analysis To The Emulation Of Complex SoCs

EDA vendors need to provide designers with an option to trade off power estimation granularity against processing time during different phases of development.


Using hardware emulation to estimate SoC power consumption delivers significant value. Emulators are capable of long runs on large designs, making it practical to emulate an RTOS boot sequence or graphics processing of multiple frames. Estimating power consumption of these advanced functions executing across the complete SoC provides valuable insight into the chip’s power draw and its impact on mobile device battery life.

However, the very aspect that makes this analysis valuable creates a challenge for power analysis tools: the sheer size of the vector sets and gate counts these tools must handle. While a large simulation run may involve a few million vectors and 10M gates, emulators process a million vectors per second on designs exceeding 100M gates. Power estimation tools developed for simulation-size datasets struggle with the massive output of emulation runs. It’s not uncommon to see two-day power estimation runs for a two-second emulation of a 50M-gate design.

Runtime depends heavily on the temporal resolution the user requires. Average power for the end-to-end emulation run, derived from a SAIF file, takes little time to calculate, since average power requires only the total toggle count for each net. SAIF file size is independent of emulation runtime and varies only with the number of nets in the design. A temporal plot of power consumption, by contrast, requires a timestamp for every transition on every net in the design, which amounts to a massive volume of data for large designs running millions of clocks. A typical 50M-gate design emulated for one second at 1 MHz produces a 25 GB FSDB file, which can take a power estimation tool 12 hours to analyze. Extrapolate that to the one billion clocks consumed during a Linux boot and power analysis would take over a year to complete, assuming it finished at all. Now, when is your tapeout deadline?
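To make those numbers concrete, here is a back-of-envelope sketch in Python of the two views described above: average power from whole-run toggle counts, and the linear extrapolation of per-transition analysis time. Every electrical parameter (net count, capacitance, voltage, activity, target clock) is an illustrative assumption, not a measurement from any tool or SoC.

# Back-of-envelope model of emulation-based power analysis.
# Every parameter below is an illustrative assumption, not a
# measurement from any particular tool or SoC.

NETS            = 50_000_000   # ~1 net per gate for a 50M-gate design
F_TARGET_HZ     = 1.0e9        # assumed target silicon clock (1 GHz)
VDD             = 0.8          # assumed supply voltage (V)
AVG_CAP_F       = 1.0e-15      # assumed average switched capacitance per net (F)
TOGGLES_PER_CLK = 0.10         # assumed average toggles per net per clock

# The SAIF view: average dynamic power needs only total toggle counts.
# Per net: P = 0.5 * C * Vdd^2 * (toggles per clock) * f_target.
p_avg_watts = 0.5 * AVG_CAP_F * VDD**2 * TOGGLES_PER_CLK * F_TARGET_HZ * NETS
print(f"Estimated average dynamic power: {p_avg_watts:.1f} W")

# The FSDB view: per-transition analysis time, extrapolated linearly
# from the 12 hours observed above for one million emulated clocks.
HOURS_PER_MILLION_CLOCKS = 12
linux_boot_clocks = 1_000_000_000
hours = HOURS_PER_MILLION_CLOCKS * linux_boot_clocks / 1_000_000
print(f"Analysis time for a 1B-clock Linux boot: {hours:,.0f} hours "
      f"(~{hours / 24 / 365:.1f} years)")

At 12,000 hours, or roughly 500 days, a speedup on the order of 500X is what it would take to bring a Linux boot analysis down to about a day, which is where the next paragraph picks up.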

Clearly a different approach is required. Yes, the tools must become more efficient at processing large datasets, but expecting a 500X improvement when power analysis tools have already been optimized for performance is unrealistic. Removing the rather verbose FSDB as the format for transferring temporal switching data from emulation to power analysis is an obvious target. FSDB is another technology developed for simulation-size vector sets that is being pushed past its practical limits by MHz emulation speeds.
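One way to see why a verbose per-transition format strains at emulation scale is simply to count bytes. The record sizes and activity factor below are assumptions chosen so the totals roughly reproduce the 25 GB figure above (real FSDB is a compressed, proprietary format), so treat this as order-of-magnitude reasoning only:

# Illustrative byte counts: per-transition dumping vs. whole-run
# toggle counting. Record sizes and activity are assumptions chosen
# to roughly match the 25 GB figure cited above.

NETS            = 50_000_000
CLOCKS          = 1_000_000     # one second of emulation at 1 MHz
TOGGLES_PER_CLK = 1.0e-4        # assumed effective post-compression activity
BYTES_PER_EVENT = 5             # assumed: packed net id + timestamp delta
BYTES_PER_NET   = 8             # assumed: net id + 32-bit toggle count

per_transition_gb = NETS * CLOCKS * TOGGLES_PER_CLK * BYTES_PER_EVENT / 1e9
whole_run_gb      = NETS * BYTES_PER_NET / 1e9    # independent of runtime

print(f"Per-transition stream, 1 s run: {per_transition_gb:,.0f} GB")
print(f"Per-transition stream, Linux boot: {per_transition_gb * 1000:,.0f} GB")
print(f"Whole-run toggle counts (SAIF-like): {whole_run_gb:.1f} GB, any runtime")

The per-transition stream grows linearly with emulated clocks, reaching tens of terabytes for a Linux boot, while whole-run toggle counts stay fixed at a fraction of a gigabyte. Any leaner transfer format has to land somewhere between those two extremes.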

But beyond these hopes for improved tool efficiency, the user must be prepared to accept a temporal granularity that is coarser than a single design clock. The temporal interval must be small enough to identify peaks in power consumption, yet large enough to permit power analysis of emulation results to complete in a day or two, and preferably overnight. EDA vendors must provide designers with the option to trade off power estimation granularity against processing time to best suit their needs during the different phases of SoC development.
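A minimal sketch of that trade-off, assuming a hypothetical stream of (clock, net) toggle events from the emulator: instead of timestamping every transition, accumulate toggle counts per window of N clocks and emit one power sample per window. The window size is the granularity knob, and the electrical values are the same illustrative assumptions as above.

from collections import Counter

# Windowed power estimation: one power sample per WINDOW clocks
# instead of one record per transition. A sketch of the granularity
# trade-off, not any vendor's implementation; electrical values are
# illustrative assumptions.

WINDOW      = 10_000       # clocks per power sample: the granularity knob
F_TARGET_HZ = 1.0e9        # assumed target silicon clock
VDD         = 0.8          # assumed supply voltage (V)
AVG_CAP_F   = 1.0e-15      # assumed average switched capacitance (F)

def windowed_power(toggle_events):
    """Yield (window_index, power_watts) from (clock, net_id) events."""
    counts = Counter()
    for clock, _net in toggle_events:
        counts[clock // WINDOW] += 1
    for w in sorted(counts):
        toggles_per_clock = counts[w] / WINDOW
        # P = 0.5 * C * Vdd^2 * (toggles per clock) * f_target
        yield w, 0.5 * AVG_CAP_F * VDD**2 * toggles_per_clock * F_TARGET_HZ

# Usage: scan for the peak window so power spikes are not averaged away.
events = [(c, c % 7) for c in range(100_000) if c % 3 == 0]  # synthetic events
peak = max(windowed_power(events), key=lambda wp: wp[1])
print(f"Peak in window {peak[0]}: {peak[1]:.3e} W")

Doubling WINDOW halves both the data volume and the temporal resolution; the right setting depends on whether the goal of a given run is peak detection or a quick overnight power profile.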


