A system-level power model would enable power architecture planning. EDA, IP and systems companies are coming together to determine where to go next.
Power analysis, architectural exploration and optimization of an SoC are hot topics of discussion today.
It is well accepted that this must be addressed at a higher level of abstraction, because it is not just the hardware, with its power intent and power management structures, that must be taken into account. It has to be viewed from a system point of view as well, where the hardware resides alongside the operating system running on the processors, the firmware running in the context of the operating system and controlling power, and even the applications running on top of that and requesting resources from the operating system, which then mediates when power is turned on for different devices, and so on.
“When you are in system planning, you really need to be planning power architecture just as much as functional architecture,” said Bernard Murphy, CTO of Atrenta. “Today there is no standard to support this. You have to ad-hoc model the power architecture at the system level and then wait to map that to power intent implementation when you get to RTL, because UPF (and CPF) only connects to RTL. So it makes sense to be able to define power intent at the system level as something that could carry over to implementation for refinement, rather than a re-write. This would then potentially encourage support in virtual modeling tools and high-level synthesis tools.”
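To make that concrete, the power intent Murphy refers to is what UPF captures today against RTL. Below is a minimal sketch of UPF (IEEE 1801) commands; the domain name PD_ETH, the instance u_eth_mac and the supply net names are hypothetical, and the point is that intent like this could in principle be authored at the system level and refined toward implementation rather than rewritten:

```tcl
# Minimal UPF power intent sketch (all names are hypothetical illustrations).
# Today this is written against RTL; the proposal is to author equivalent
# intent at the system level and refine it downward.
create_power_domain PD_ETH -elements {u_eth_mac}
create_supply_net vdd_eth -domain PD_ETH
create_supply_net gnd     -domain PD_ETH
set_domain_supply_net PD_ETH \
    -primary_power_net  vdd_eth \
    -primary_ground_net gnd
```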
Further, he asserted that power models are needed to estimate the impact of those architecture choices on energy. “If a standard were to define a format for IP-level power models, that could stimulate both characterization tools and high-level estimation tools. There was some talk of this being covered in the 1801 roadmap, but now I hear rumors that it may not be and will be deferred to Si2 or a new IEEE standard.”
At the same time, Erich Marschner, verification architect at Mentor Graphics, noted that you can do all you want at the hardware level, but until you take advantage of the software level it really doesn’t matter. “You can have all kinds of power management capabilities in place, and if the software screws it up you’re going to waste power all over the place.”
To this end, a new subcommittee under the IEEE P1801 working group is now looking at system-level power modeling. It includes EDA and IP companies, along with new participants such as Intel, Broadcom, Microsoft and a couple of other companies that are looking at the problem from the system level. Rather than starting at RTL, they are starting with architecture-level modeling and power analysis based on the interaction of software and hardware, he said. “That’s really where the big win is going to be in terms of power optimization and reduction. The kind of reduction we can do at the RTL level or gate level is very local and opportunistic. The kind of reductions you can accomplish at the system level are much more pervasive and have a much greater scope and potential advantage. This is the kind of thing that is being done today by system developers around the world who are building processor-based systems of all sorts – not just portable electronics, but even wall-powered electronics that also consume a lot of power.”
Alan Gibbons, solution architect for low power system design at Synopsys, agreed: “We see a need to drive for energy efficiency of the system, where the system is the hardware platform and a number of software levels that sit above that. We think as an industry we’re pretty good now at implementing a piece of IP with a whole variety of low-power techniques. We can optimize the IP, but what we haven’t really addressed is what decisions we make in architecture – hardware and software architecture – for energy efficiency. We’re putting IP together, but we’re not necessarily looking at the energy efficiency of the hardware architecture itself.”
To do that, power abstractions of the system model need to be injected into system simulations, into hardware exploration tools, and that sort of thing, he said. “We somehow need a notion of what the power profile or the energy profile of the system looks like under representative software load and canned scenarios or work graphs or task graphs.”
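One simple way to picture such an energy profile, assuming the model can report per-state power and how long a scenario spends in each state, is residency-weighted accounting. The Tcl sketch below is illustrative only; the state names, power numbers and scenario are invented:

```tcl
# Hedged sketch: scenario energy = sum over power states of
# (state power in watts) x (time resident in that state, in seconds).
set state_power {ACTIVE 0.450 DROWSY 0.060 OFF 0.001}

proc scenario_energy {residency} {
    global state_power
    set energy 0.0
    foreach {state seconds} $residency {
        set energy [expr {$energy + [dict get $state_power $state] * $seconds}]
    }
    return $energy  ;# joules
}

# A canned "video playback" scenario: mostly active, some drowsy idle.
puts [scenario_energy {ACTIVE 8.0 DROWSY 1.5 OFF 0.5}]  ;# => 3.6905
```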
Where we are today
In terms of system-level design techniques today, a number of them do not consider power at all, Gibbons observed. They are aimed at determining whether the architecture will hold together – at things like bandwidth analysis and performance analysis – and at concerns such as finding out whether the software is functionally correct.
“There’s a lot of support in the tools today to find and identify software bugs – functional bugs – but software can have a whole load of energy bugs, as well. The software may be functionally correct, but does it shut down my Ethernet block when I’m not doing any Ethernet traffic? Does it dynamically size and enable hardware resources when it needs to? Although it may be functionally correct, it may be wasting huge amounts of energy. We need to catch that in the system, and we need to enable software developers to do that work. Can I find these bugs? Am I trying to write to a port that is disabled without knowing that in hardware? That’s a very difficult bug to find. Why did I get some kind of abort in my system? When you’ve got nothing but a board it’s difficult to find, but when you’ve got a virtual prototype with some kind of system simulation you can catch it quite easily. That comes down to system power management: you haven’t enabled the resources when you need them, or you haven’t shut them down when you don’t need them,” he continued.
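The Ethernet example suggests what such an energy-bug check might look like in a virtual prototype. The sketch below is purely illustrative; the inputs (a domain’s power state and its count of idle cycles) stand in for whatever hooks a given simulation environment actually exposes:

```tcl
# Hypothetical energy-bug monitor: flag a block that is functionally fine
# but energetically wasteful (powered on while carrying no traffic).
proc check_energy_bug {block domain_state idle_cycles threshold} {
    if {$domain_state eq "ON" && $idle_cycles > $threshold} {
        puts "ENERGY BUG: $block powered on but idle for $idle_cycles cycles"
    }
}

# e.g. the Ethernet block left up with no packets for 100,000 cycles
check_energy_bug eth0 ON 100000 50000
```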
An important goal is to find out what the power model needs to look like, he said. “How granular does it need to be? For a CPU, is it good enough to model active, idle and shutdown, or do we need to know that SIMD engines and other things are enabled, and how much granularity in power state do we need? That comes back to the use case. If it’s the software guys, they really don’t care so much about that final level of detail – they need to know, ‘Am I in idle, am I in active, am I in some drowsy mode or turbo mode?’ The hardware guys probably care much more about whether the peripherals of the CPU are enabled, so they can do a more accurate analysis. Our goal in the power modeling/power abstraction world is, ‘Can we come up with a modeling approach that supports all of these abstractions, or do we have to do what we do in functional, where we have a loosely-timed functional model with a power model that goes with it, and an approximately-timed model with a more granular power model, until we get down to a compiled RTL model with an extremely granular power model?’ The problem then is simulation speed drops so much it’s unusable.”
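In UPF terms, the coarse, software-visible states Gibbons lists map naturally onto add_power_state. The sketch below is a loose illustration only – the supply set handle PD_CPU.primary, the state names and the voltage levels are all hypothetical, and a hardware-level model would refine these with per-peripheral detail:

```tcl
# Hedged sketch: coarse power states for a CPU domain via UPF add_power_state.
# All names and voltage values are invented for illustration.
add_power_state PD_CPU.primary \
    -state {TURBO  -supply_expr {power == `{FULL_ON, 1.05}}} \
    -state {ACTIVE -supply_expr {power == `{FULL_ON, 0.90}}} \
    -state {DROWSY -supply_expr {power == `{FULL_ON, 0.60}}} \
    -state {IDLE   -supply_expr {power == `{OFF}}}
```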
Koorosh Nazifi, engineering group director for low power and mixed signal initiatives at Cadence, suggested this is not necessarily a new topic. “There has been a growing interest in extending exploration capabilities in regard to power estimation at the system level and above, but there has been too much dependency on the power modeling. How do you represent power? Who is going to generate those models? Are they parameterized models? Are they static models, similar to .lib, that you read, re-characterize and then store? In my mind some of those system capabilities haven’t been fully resolved yet. Obviously there has been activity within Si2 for a number of years, with contributions from IBM a few years back in this particular space.”
While Cadence has not taken an active role in driving those activities from a technology contribution perspective, the company has been participating, observing and monitoring them. But its focus so far has been on RTL to GDSII. “It is probably an appropriate time for us to start thinking about how to enable this for hardware/software co-design and prototyping, factoring in power analysis and architectural exploration with respect to power.”
Other concerns include whether there are ways to achieve the same things without a standard.
Murphy said that because SoCs keep on being released, “Yes, this is being more or less hand-carried. Will a standard make life significantly easier for everyone? Hard to say. There is a bit of a danger we may be considering a standard before viable solutions have emerged, and then you would have to wonder if a standard might slow rather than stimulate progress. That said, both power intent definition and power modeling definition are absolutely needed. Only the timing is unclear.”
Further, Nazifi pointed out, what is being discussed within the industry right now is what can be done in terms of enabling more intelligent architectural exploration at the earlier stages of the design, taking into account not just timing but also power. “That, to me, is a bigger problem – it has dependencies that have not been completely defined and agreed upon. There are ecosystem enablement aspects of it that go beyond the standardization – that hasn’t even been discussed.”
This would include, he continued, how the power models will be enabled. “If you are able to create the power model in such a way that it becomes a parameterized model, which can take various inputs such as frequency, voltage or different aspects of the system-level power of the use model, and then calculate the power consumption on the fly, then there is no dependency on the ecosystem. On the other hand, if it is more of a .lib type, then it means that somebody has to create those models. If you are talking about the system level and these are predefined blocks, then fine – you can apply it. If they are not predefined and they are things you have to create, it becomes much more difficult.”
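The parameterized option Nazifi describes can be as small as a function of operating point. A toy sketch, assuming the classical dynamic-power relation P = C·V²·f plus a static leakage term, with made-up coefficients:

```tcl
# Toy parameterized power model: compute power on the fly from voltage and
# frequency instead of looking it up in a pre-characterized .lib-style table.
# The effective capacitance and leakage values are invented for illustration.
proc block_power {c_eff volts freq_hz leak_w} {
    # dynamic switching power (C_eff * V^2 * f) plus static leakage
    expr {$c_eff * $volts * $volts * $freq_hz + $leak_w}
}

puts [block_power 1.0e-9 0.9 800e6 0.005]  ;# ~0.653 W at 0.9 V / 800 MHz
```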
This is just what the IEEE activities will cover, Marschner explained; that system-level power modeling work is currently under way within the P1801 subcommittee.
Next steps
Gibbons added there is no commitment that a system-level power modeling standard will necessarily be in conjunction with UPF. It’s just that the working group/subcommittee is trying to say, “We have a standard today. Can we morph that a little bit into enabling system-level power? If the gap to do that is small, then it is worth revving the standard to add these six things – half a dozen new commands that implementation will ignore and verification will ignore, but that system-level tools can start taking advantage of to create models. If we find that to do it we need a completely new set of commands – there is no overlap – then it may become something else. At the moment the focus is, ‘Do we all agree on what the power model is? Do we all agree on the methodologies and use cases that we have to support?’ That is tough, because you bring the software guys into the room to talk about power and they say, ‘Oh, I’m software – you need to be talking with the hardware people.’ They are learning, but they don’t quite realize they are king when it comes to power efficiency. We can have the fanciest IP and we can implement it very well from a low power perspective in the CPU core, but if the software never shuts it down…”
The subcommittee meets again in September, and work is ongoing to complete some prototypes before committing to the standard. Those prototypes will be built with the help of system architects from Intel, Broadcom, Microsoft and others, who are working on defining the subsystem that is going to be the test vehicle. Then EDA and IP vendors will prototype some models and potential extensions to the language, and get them simulating on a design with the software stack.
They hope to have some basics in place by September, including CPU models and fundamental pieces of IP to enable basic level scenarios.
The first milestone is to answer the question of whether this is a go or a no-go. “Is there a fundamental limitation of TCL or the UPF approach that prevents this from going through? The common belief at the moment is we don’t see any, so now it is a case of proving that. Then, as we go up further into the application level and beyond, there may be – we really don’t know – a limit to how high we can model in UPF without starting to stretch UPF too much,” Gibbons added.
At the end of the day, Marschner concluded, “what matters most is not adherence to the standard. What matters is getting chips out the door, and ultimately that’s what our customers are concerned about and we need to be concerned about. At the same time, we have to show that adopting a standard flow has advantages and that it will move the entire industry toward a more sustainable position where there is less custom software development required to support the huge number of variations and we can all focus on more productive, focused development and capabilities that will benefit the whole industry.”