Most chip designs now employ low-power design techniques, making accurate estimates of power consumption necessary.
Over the last decade or so, power consumption has become a major issue in the design of many types of electronic products. Of course, power has always mattered for battery-operated devices, but the complexity of portable electronics and the size of the chips they contain have grown significantly. For plugged-in devices, from desktop computers to server racks in a data center, power plays a major role in lifetime cost of ownership. Heat dissipation, from fans to liquid cooling systems, adds to the initial outlay as well. The result is that most chip designs employ low-power design techniques and their developers need accurate estimates of power consumption.
Designers need to know the average amount of power their chip will consume. Both product cost and the competitive landscape demand that the chip stay within its power budget to be a viable product. When architecting the chip, writing register transfer level (RTL) code, and running logic synthesis, power is now a top-level concern along with performance and area. Designers also must determine the chip’s peak power, which must not exceed the physical limitations of supplying power and dissipating the resultant heat. It is unacceptable to find after fabrication that average power is too high or that peak power draw destroys the chip. Accurate pre-silicon power analysis, preferably at multiple stages in the project, is required.
The EDA industry’s traditional approach to power analysis has relied on simulation, running the same testbench and set (or subset) of functional tests used to verify the design. The simulator provides a switching activity file to a power signoff tool, which uses the power characteristics in the library for the target chip technology to provide estimates for both average and peak power consumption. These estimates rely on the particular tests run in simulation, which are not representative of chip operation with production software running. Tests designed for functional verification tend to stimulate only specific areas of the design, with limited parallel activity.
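The relationship the power signoff tool applies can be illustrated with the standard dynamic power equation, P = α·C·V²·f, summed over nets. Below is a minimal, hypothetical sketch: the net names, capacitances, toggle counts, supply voltage, and clock frequency are all invented for illustration, whereas a real tool reads them from the technology library and a switching activity file.

```python
# Illustrative sketch of average dynamic power from switching activity.
# All values below are hypothetical; a real signoff tool takes per-net
# capacitance from the library and toggle counts from a SAIF/VCD file.

VDD = 0.8          # supply voltage in volts (assumed)
CLK_FREQ = 1.0e9   # clock frequency in Hz (assumed)
WINDOW_CYCLES = 1_000_000

# Per-net load capacitance (farads) and toggle count over the window.
nets = {
    "alu_out": {"cap": 5e-15, "toggles": 400_000},
    "fifo_wr": {"cap": 2e-15, "toggles": 150_000},
}

def dynamic_power(nets, cycles, vdd, freq):
    """Sum alpha * C * Vdd^2 * f over nets, where alpha is the toggle
    rate per cycle; the 0.5 factor reflects that one full
    charge/discharge of the net takes two toggles."""
    total = 0.0
    for net in nets.values():
        alpha = net["toggles"] / cycles
        total += 0.5 * alpha * net["cap"] * vdd * vdd * freq
    return total

print(f"{dynamic_power(nets, WINDOW_CYCLES, VDD, CLK_FREQ) * 1e6:.2f} uW")
```

The accuracy of this calculation is only as good as the toggle counts fed into it, which is exactly why tests that exercise a narrow slice of the design give misleading estimates.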
Accurate power analysis requires the switching activity from real software workloads, including multiple user applications running on top of an operating system (OS). Booting the OS, starting system services, and running applications may take a few billion cycles from reset. Running workloads of this size is completely impractical in simulation, but emulators handle jobs of this scale all the time, with exactly the type of real software workloads needed to obtain high-accuracy power estimates. Since power signoff tools are designed to handle thousands of cycles, not billions, the next-generation power analysis flow must identify a few areas of high activity in the emulation run and focus on these windows.
The first step in the new flow is for the emulator to produce a profile showing which parts of the design are active over time. This activity profile can be viewed as a graph within a hardware debug tool. Since power signoff cannot be performed on billions of cycles, the next step is to use the activity profile to identify one or more power-critical windows, during which activity is highest and power consumption is likely to be highest as well. If each of these windows spans millions of cycles, it can be used for the next stage of power analysis. A power analyzer tool with multi-threaded power analysis engines is required to create a weighted activity model and load it into the emulator along with the design.
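The window-selection step can be sketched as a sliding-window search over the activity profile. The profile below is a hypothetical list of toggle counts sampled per fixed-size interval of the emulation run; an actual flow reads this data from the emulator's activity database rather than a Python list.

```python
# Hedged sketch: finding the power-critical window in an activity
# profile, modeled here as toggle counts per fixed-size interval.
# The profile values are invented for illustration.

def busiest_window(profile, window_intervals):
    """Return (start_index, total_activity) of the window with the
    highest summed activity, using a running sliding-window sum."""
    current = sum(profile[:window_intervals])
    best, best_start = current, 0
    for i in range(window_intervals, len(profile)):
        current += profile[i] - profile[i - window_intervals]
        if current > best:
            best, best_start = current, i - window_intervals + 1
    return best_start, best

profile = [120, 90, 400, 950, 870, 300, 110, 640, 700, 80]
start, activity = busiest_window(profile, 3)
print(start, activity)  # -> 2 2220: the busiest 3-interval window starts at index 2
```

In practice a designer would pick several such windows, since the window with the most raw toggling is not guaranteed to be the one with the highest power.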
The next step in the new power analysis flow is replaying each power-critical window in the emulator to generate much more detailed information on power consumption and switching activity. The emulator must support record and replay so that each power-critical window can be replayed on its own, quickly and deterministically. The results from the power-critical window replay are fed into the power analyzer, which produces a switching activity interchange format (SAIF) file with 100% accurate results for every signal in the design. The SAIF file is fed into the power signoff tool, which calculates the average power consumed during the window.
The power analyzer also generates cycle-by-cycle power consumption values for the entire power-critical window. Users can view this information in the debug tool and use it to select one or more power signoff windows where highly accurate power estimation is needed. These windows are fed to the power signoff tool to refine the power analysis even further, typically to 99% accuracy compared with the final chip. The power signoff tool can calculate peak power for the window, ensuring that power consumption stays within the physical limits for the chip. If IR-drop analysis is required, one or more event windows can be selected to run in the appropriate tool. The entire flow, from emulation to power signoff, can be run on the RTL, the post-synthesis netlist, and the placed-and-routed netlist.
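The average-versus-peak distinction above can be made concrete with a small sketch. The per-cycle power trace and the power budget below are hypothetical stand-ins for the cycle-by-cycle values a power analyzer would report for a replayed window.

```python
# Hedged sketch: average and peak power from a cycle-by-cycle trace.
# The trace values and budget are hypothetical; a real flow reads
# per-cycle power from the power analyzer's output for the window.

def average_and_peak(trace_watts):
    """Average power determines thermal/cost budgets; peak power must
    stay within the supply and heat-dissipation limits of the chip."""
    avg = sum(trace_watts) / len(trace_watts)
    peak = max(trace_watts)
    return avg, peak

PEAK_BUDGET_W = 1.0  # assumed physical limit for illustration

trace = [0.31, 0.29, 0.55, 0.92, 0.48, 0.33]  # watts per cycle
avg, peak = average_and_peak(trace)
within_budget = peak <= PEAK_BUDGET_W
print(f"average {avg:.3f} W, peak {peak:.2f} W, within budget: {within_budget}")
```

Note that a design can pass its average-power budget while a single cycle of peak draw violates the physical limit, which is why both numbers must be checked before tape-out.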
The software-driven power analysis solution from Synopsys meets all the requirements of the next-generation flow described above. It greatly reduces the risk of missing critical power issues by using real software workloads in emulation to deliver average and peak power results that are far more accurate than simulation-based approaches, and up to 1,000 times faster to obtain. The power analysis flow integrates with the familiar Verdi debug environment, enabling users to efficiently and accurately pinpoint and fix power-related issues. To find out more about Synopsys emulation products, please visit https://www.synopsys.com/verification/emulation.html.