There are differing views on how to model software, but there is almost universal agreement that it is an increasingly vital part of the SoC design process.
It is commonly accepted that the higher you go in the design chain, the bigger the impact that design and implementation decisions can have. While power optimization may have started deep in the silicon, the success of a product such as a smartphone often rests on the time between charges. Batteries provide a finite energy resource, and while low-level optimization may focus on power reduction, it is often energy conservation that takes center stage at the system level.
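The distinction matters because energy is power integrated over time, which means the lowest-power choice is not always the lowest-energy one:

```latex
E = \int_0^T P(t)\,dt
```

For example (the numbers here are illustrative, not from any measured device), a core drawing 1.2 W that finishes a task in one second consumes 1.2 J, while the same task run at 0.5 W over three seconds consumes 1.5 J. The higher-power run is the lower-energy one because the core can shut down sooner, which is why energy, rather than instantaneous power, dominates the system-level view.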
But we need to explore even higher, to the software, and here the focus can be very different again. “We think of hardware as something that consumes power,” explains William Ruby, senior director of technical sales at Ansys/Apache. “Software does not directly consume power, but it does control how much power the hardware consumes.”
But how do you move up to the system level and beyond and still retain enough accuracy to be able to make informed decisions? “The reason to move to higher levels of abstraction is to enable architectural tradeoffs to be made early in the design,” says Gene Matter, vice president of application engineering at Docea Power. “At this point in time, the details may not be available. So, in these terms, accuracy is replaced by the concept of completeness. It still needs to include all of the power states so that they can be driven by real workloads and optimizations can be found.”
Virtual Prototypes
A model created at this level is generally called a virtual prototype, but here is where the confusion starts. There are two different types of virtual prototypes. Tom De Schutter, senior staff product marketing manager at Synopsys, explains it this way: “There are two different flavors of virtual prototypes. One is intended for system architects where you are looking at traffic, which drives a fairly accurate model of the memory and interconnect subsystem. They are looking for tradeoffs in these parts of the system. So, for example, on a mobile phone, you will be looking to see how the system reacts when playing a video when you get a phone call and there are multiple actions coming through the phone.”
The second type of virtual prototype is intended for software development. “On the software side you almost have an orthogonal view, where the interconnect and the time it takes to access something is not relevant,” De Schutter says. “You are looking for functional correctness. Fast execution speed becomes more important because you are looking at the entire software stack from end to end.”
Power Modeling Approaches
Up until now, these virtual prototypes have been used primarily to verify functionality and performance, but now there is a desire to use them to examine energy consumption, as well. “The software guys are looking for ways to get a better feel for optimizing power and performance of the software itself, and specifically the OS,” says Larry Melling, product marketing manager for the System Verification Group at Cadence. “We are working to help define better ways in which power can be modeled under software use cases and how feedback can be given to the software developers about how their changes affect the energy profile of the system platform.”
Docea’s Matter believes it’s important to have a power model that is separate from the functional model. “This enables traces to be created from a functional simulation and for these to be fed into the power calculation process, which will calculate both the active and leakage currents,” he says. “We thus need power models at the same levels of abstraction as the functional models.”
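As a concrete illustration of that separation, here is a minimal Python sketch, with all block names, states, and power numbers hypothetical and no particular tool's trace format assumed: the functional simulation emits a trace of power-state changes, and a standalone power model integrates active plus leakage power over that trace.

```python
# Hypothetical sketch of the separation Matter describes: a functional
# simulation emits a trace of power-state changes, and a separate power
# model turns that trace into per-block energy figures.

# Per-block power model: active power (W) for each power state, plus a
# leakage term that applies whenever the block is powered. Numbers are
# illustrative only.
POWER_MODEL = {
    "cpu":   {"states": {"off": 0.0, "idle": 0.05, "run": 1.2}, "leakage": 0.02},
    "modem": {"states": {"off": 0.0, "sleep": 0.01, "tx": 0.8}, "leakage": 0.01},
}

def energy_from_trace(trace, t_end):
    """trace: list of (time_s, block, new_state), sorted by time.
    Returns per-block energy in joules over [0, t_end]."""
    state = {b: "off" for b in POWER_MODEL}        # assume everything starts off
    energy = {b: 0.0 for b in POWER_MODEL}
    last_t = 0.0
    for t, block, new_state in trace + [(t_end, None, None)]:
        dt = t - last_t
        for b, s in state.items():                 # integrate power over the interval
            m = POWER_MODEL[b]
            p = m["states"][s] + (m["leakage"] if s != "off" else 0.0)
            energy[b] += p * dt
        if block is not None:
            state[block] = new_state
        last_t = t
    return energy

# Example workload: CPU runs, the modem transmits briefly, then the CPU idles.
trace = [(0.0, "cpu", "run"), (0.5, "modem", "tx"),
         (0.7, "modem", "sleep"), (2.0, "cpu", "idle")]
print(energy_from_trace(trace, t_end=3.0))
```

Because the power model never touches the functional code, the same trace can be replayed against different power models, which is how architectural what-if comparisons stay cheap.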
But there are challenges. “There have been some efforts within the industry in the areas of instruction set modeling and extending this to power,” says Koorosh Nazifi, engineering group director for low power and mixed-signal initiatives at Cadence. “This is going on within Si2, and IBM has been bringing some technology into this. But then you have to consider how you would map that to the logic level and technology libraries to extract that information automatically—or do the IP providers have to do that? We do not have the ability today to take that and automatically generate an abstract [power] model.”
There are clearly different abstractions possible for the power model. Synopsys’ De Schutter sees the problem slightly differently. “The power models on top of [the SystemC virtual prototype] are orthogonal and in a scripting environment. So you can drive the power state from events, such as a software trigger or an interrupt that would make a processor change its power state. By doing this you create a power state machine with triggers coming from a virtual prototype, and you run these in parallel to annotate the power or energy consumption over time.”
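A minimal sketch of that arrangement, assuming a Python scripting layer of the sort De Schutter describes (the trigger names, states, and power values are hypothetical): the power state machine runs in parallel with the functional model and advances whenever the prototype fires a trigger, accumulating energy as it goes.

```python
# Hedged sketch of a power state machine driven by events from a virtual
# prototype. Trigger names, states, and wattages are illustrative.

class PowerStateMachine:
    # (current state, trigger) -> next state
    TRANSITIONS = {
        ("run",  "wfi"):       "idle",   # software executes wait-for-interrupt
        ("idle", "interrupt"): "run",
        ("run",  "power_off"): "off",
        ("off",  "wakeup"):    "run",
    }
    POWER = {"run": 1.2, "idle": 0.07, "off": 0.0}   # illustrative watts

    def __init__(self):
        self.state, self.last_t, self.energy_j = "run", 0.0, 0.0

    def on_trigger(self, trigger, t):
        """Called by the virtual prototype when an event fires at time t (s)."""
        # Charge the energy spent in the old state, then transition.
        self.energy_j += self.POWER[self.state] * (t - self.last_t)
        self.state = self.TRANSITIONS.get((self.state, trigger), self.state)
        self.last_t = t

psm = PowerStateMachine()
for trigger, t in [("wfi", 1.0), ("interrupt", 2.5), ("power_off", 3.0)]:
    psm.on_trigger(trigger, t)
print(f"{psm.energy_j:.2f} J consumed, now in state '{psm.state}'")
```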
It appears as if the two approaches would have very different levels of detail associated with them, and indeed they see the information provided somewhat differently. On the one hand we have Synopsys and Docea, which see things in a relative sense. As Docea’s Matter explains, “They will not provide numbers that can be matched directly to hardware, but you will get a tradeoff, given the same set of assumptions, that this choice will consume more energy, or that one will produce a higher peak power, or this one is more thermal-friendly and has headroom for more concurrent tasks.”
Synopsys’ De Schutter agrees. “So long as the relative trend is in the right direction it can give you a lot of value.”
The folks at Cadence aren’t so sure. Cadence’s Nazifi warns, “If you were to try and rely on an instruction set model, there is no enablement there yet to get accurate power from different types of logic or memories.”
Dynamic Analysis
There is also a divergence in the way the two types of virtual prototypes are driven. When making architectural choices about the hardware, generic workloads and statistical analysis appear to be the method most people adopt, while for software, actual use cases and scenarios are the preferred means of driving the prototypes. Power consumption varies significantly with the task being performed, so hardware architecture optimization may have to take all possible scenarios into account. The software, in contrast, has the luxury of only having to optimize what is written.
Frank Schirrmeister, group director for product marketing for Cadence’s System Development, warns, “You can make a big mistake if the scenario you choose for [hardware] optimization is not the right one.”
Figure source: “An Analysis of Power Consumption in a Smartphone,” Aaron Carroll and Gernot Heiser.
Two Problems, Two Solutions
There are two tasks related to software and power consumption. The first is that software needs to be optimized to consume less power, and this requires a detailed power model of the system. The second is that software controls the power states of the hardware, and this aspect of the software has to be verified and optimized.
Ansys’ Ruby sums it up quite nicely: “Part of the software is controlling the hardware and management of leakage through power gating. It is up to software when blocks can be completely shut off. Also, while the software is running, can it be optimized to reduce power?”
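The first part of that job, deciding when a block can be shut off, often comes down to a break-even calculation: gating saves leakage, but it costs energy to save state, shut down, and later wake back up. A hedged sketch of that decision, with purely illustrative numbers:

```python
# Sketch of the kind of decision Ruby describes: software choosing when a
# block can be power-gated. Gating only pays off if the idle period is long
# enough for leakage savings to cover the gate/ungate overhead.

LEAKAGE_W = 0.020          # leakage while the block sits idle but powered (illustrative)
GATE_OVERHEAD_J = 0.004    # energy to flush state, gate, and later wake up (illustrative)

# Break-even idle time: overhead energy divided by leakage power.
BREAK_EVEN_S = GATE_OVERHEAD_J / LEAKAGE_W   # 0.2 s with these numbers

def should_power_gate(expected_idle_s: float) -> bool:
    """Gate the block only when the predicted idle period saves net energy."""
    return expected_idle_s > BREAK_EVEN_S

for idle in (0.05, 0.5, 2.0):
    print(f"idle {idle} s -> gate: {should_power_gate(idle)}")
```

Getting the idle-time prediction wrong in either direction costs energy, which is exactly why this software behavior needs to be verified against a power model rather than assumed correct.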
Both of these are necessary functions. “Power issues are complex,” notes Guillaume Boillet, technical marketing manager at Atrenta. “It is not just a matter of reducing power consumption. Power is an intricately linked world of power, energy and thermal, with tradeoffs involving area and timing and impacting hardware and software design flows from architectural decisions to transistor sizing.”