UPF 3.0 Moves Toward Ratification

Additions include system-level power analysis and modeling, component-level modeling, improvements for power states and transitions, and better support for hierarchy.


UPF (Unified Power Format) 3.0 — the fourth incarnation in 10 years — is moving closer to the IEEE ballot process.

Erich Marschner, verification architect at Mentor Graphics and vice chair of the IEEE 1801 working group, said the working group is essentially on schedule to have the document completed and ready to go on time. And while some items are still being worked on, the five areas already in the draft document are as follows.

1. Addressing known issues with UPF 2.x, which are captured in the Mantis bug database.
 
2. Adding an Information Model (IM) to support query functions and package UPF. This defines what information is captured by UPF specifications, including objects, their attributes, and the relationships among objects. The IM defines general API functions for accessing this information and maps them to two bindings: an HDL binding via package UPF (for writing testbench code for simulation) and a Tcl binding via query functions (for debugging power intent as it is applied to the design).
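As a rough illustration of how the Tcl query binding could be used during debug, consider the sketch below. The function names (upf_query_objects, upf_query_object_properties) are hypothetical stand-ins, not confirmed 1801 API names; the point is only the pattern of interrogating the information model from a Tcl shell.

    # Hypothetical debug session: inspect the power intent applied to a design.
    # The upf_query_* names are illustrative placeholders for the IM's Tcl binding.
    foreach domain [upf_query_objects -type power_domain -scope /top] {
        puts "Power domain: $domain"
        # Retrieve IM-defined attributes of this object, e.g. its extent
        # and primary supply set, as name/value pairs.
        foreach {attr value} [upf_query_object_properties $domain] {
            puts "  $attr = $value"
        }
    }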

3. Successive refinement methodology, which concerns how UPF is written and used. The concept was originally promoted for UPF 2.0 by John Biggs at ARM, coming from an IP-centric perspective. Marschner explained that the basic idea is that when you use an IP block in a power-managed/power-aware design, you have to use it in a very particular way; otherwise it won’t function correctly. This has to do with how you carve it up into power domains, which registers and state elements get their values saved, and whether they have to be saved together or separately. In other words, there are constraints on how you can use an IP block in a power-managed design. ARM came up with the idea that you can specify, in UPF, constraints that apply to an IP block. You should then be able to check that those constraints are met when you use that IP block and others in an IP-based system, and to verify the power management architecture at an abstract level, dealing only with technology-independent information such as power domain partitioning and control logic. Only after that is done do you provide the implementation detail: how you are going to build this, what libraries you are going to use, how you are going to build up the supply distribution, and where you are going to put the isolation cells. Then you verify in the context of that technology-specific implementation detail.

“The basic idea is to recognize that power intent is incrementally developed as smaller parts of the design get composed into larger parts, and also that you want to verify things incrementally so that you don’t put off verifying everything until the end—where you might find that you completely architected your design the wrong way because you violated some fundamental constraint,” he explained.

While this was conceived in 2009, it wasn’t fully understood at that time, Marschner noted. “In the meantime, we’ve had a lot of experience — especially ARM — using this methodology with UPF 2.0 capabilities. And as a result we’ve had some feedback about very specific things like how you should use the commands in UPF at the different stages for specifying constraints on IP, specifying system configuration in an implementation- or technology-independent way, and how you partition the UPF so that you can accomplish these successive refinement and successive verification steps in the way it was envisioned originally.”
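A minimal sketch of that partitioning, with hypothetical instance, domain, and supply names (u_cpu, PD_CPU, SS_AON), might look like the following. The commands shown are existing UPF 2.x commands; how they are split across constraint, configuration, and implementation files is the methodology point, and UPF 3.0’s refinements to that split may differ in detail.

    ## constraint.upf -- from the IP provider: technology-independent rules
    # These registers must retain state whenever the block is powered down.
    set_retention_elements CPU_CRITICAL_REGS -elements {u_cpu/pc u_cpu/psr}

    ## config.upf -- from the integrator: domains, supplies, power states
    create_power_domain PD_CPU -elements {u_cpu}
    create_supply_set SS_CPU
    associate_supply_set SS_CPU -handle PD_CPU.primary
    add_power_state PD_CPU \
      -state {RUN   -logic_expr {cpu_pwr_req}} \
      -state {SLEEP -logic_expr {!cpu_pwr_req}}

    ## impl.upf -- technology-specific detail, added and verified last
    set_isolation iso_cpu -domain PD_CPU \
      -applies_to outputs -clamp_value 0 \
      -isolation_signal iso_en -isolation_sense high
    set_retention ret_cpu -domain PD_CPU -retention_supply_set SS_AON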

One aspect of successive refinement is that, early on, the engineer may know an object’s power states only at a fundamental level. As the system is developed, the power state definitions must evolve, both for the individual objects and for their composition, as the design grows from small, simple objects into more integrated and complex ones.

As a result, work was done on how power states are defined in UPF in order to support that evolution cleanly. A large part of that has to do with the fact that in some places power states need to be defined abstractly, and in others in a very specific, detailed way. This leads to the fourth activity area.

4. Power state/transition refinement activities clarify the relationship between, and the use of, mutex and non-mutex power states, so that a unique power state can be identified at any time while still allowing abstraction and refinement in the definition of power states. This work also defines a new kind of refinement that allows for branching refinement, and it supports the power state definitions required for power modeling.
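As a rough sketch of abstraction followed by refinement using the existing -update mechanism of add_power_state (the supply set and state names here are hypothetical, and 3.0 builds further on this mechanism):

    # Early, abstract definition: states are named and tied to simulation
    # semantics, but supply-level detail is deliberately left open.
    add_power_state SS_GPU -supply \
      -state {GPU_ON  -simstate NORMAL} \
      -state {GPU_OFF -simstate CORRUPT}

    # Later refinement: -update adds a concrete supply expression to GPU_ON
    # without contradicting the earlier, more abstract definition.
    add_power_state SS_GPU -supply -update \
      -state {GPU_ON -supply_expr {power == `{FULL_ON, 0.90}}}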

5. IP/component-based power modeling extensions for energy-aware system-level design. Marschner said that several years ago it was decided to extend beyond what UPF was doing at the time, which was RTL and below, and start thinking about power intent at the system level. “What we chose to do — and there are several system-level power activities (http://semiengineering.com/an-update-on-the-ieee-1801-2013-unified-power-format-standard/) now within the IEEE — the part we chose to focus on within 1801 was modeling the power consumption of a component, such that you could take the component power consumption model for each component in the system and use that as the basis for evaluating the power consumption of the whole system. We are explicitly not trying to model the system as a whole. We are focusing on component-level models, but we have to model the power consumption of the component so that it can be used in system-level power analysis.”

What that essentially comes down to is taking the definition of power states, which has existed in UPF all along and has been refined over time, and adding the capability to express power consumption as a function of system parameters, such as temperature, the voltage of the rails involved, clock frequency, or even event frequency/activity rate.

“This is done so that for any given component we can say, ‘Suppose I have a component that can be in one of five different power states at any given time,’” he said. “We can say that the power consumption for this power state is XYZ based on these parameters, and we can define the function that will call out to some implementation of that function. It might look up data in a database, or it might compute based on some proprietary formula. In any case, it would return the power consumption for this object in this state, and that power number then could be rolled up the hierarchy of the design, or up the hierarchy of supply distribution back to the primary supplies, to allow the power consumption for a particular object to be estimated and accumulated across the design in whatever way the user wants. This basically allows EDA tool developers to build systems where we can model a system using UPF power models for its components and then, under some scenario, whether it’s a simulated scenario or a statically defined sequence of operations, compute the power consumption of the different elements in the system over time. We can display that power consumption in some graphic form so the user can get a sense of where consumption spikes, what the average power consumption is, or where the bottlenecks are in how the system works.”
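A minimal sketch of what such a component power model might look like is below, assuming the draft’s begin_power_model/end_power_model/apply_power_model constructs. The model name, instance names, and especially the way a power expression would be attached are illustrative assumptions, since the final 3.0 syntax was still being settled at this point.

    # Component-level power model, delivered alongside a hypothetical UART IP.
    begin_power_model UART_PM
      create_power_domain PD_UART
      add_power_state PD_UART \
        -state {ACTIVE -logic_expr {enable}} \
        -state {IDLE   -logic_expr {!enable}}
      # Illustrative placeholder: each state would also carry a power
      # expression evaluated from parameters such as rail voltage, clock
      # frequency, and activity rate; the exact 3.0 syntax for attaching
      # that expression may differ from any sketch here.
    end_power_model

    # The system integrator binds the model to a component instance; a tool
    # can then roll per-state power numbers up the design or supply hierarchy.
    apply_power_model UART_PM -elements {u_soc/u_uart0}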

This involved extending UPF to work with SystemC, because system-level design deals with abstract, transaction-level models, and then adding in all of the power functions.

“This is also one of the touch points with some of the other standards activities: IEEE P2416 is about characterizing power consumption for components, and that activity may well provide information that’s used in these power expressions for the power states of an object. Then, IEEE P2415 is about the interaction between software and hardware, so it may end up using the power states that we define for power consumption models of a component as part of its predictive capabilities for software,” Marschner added.

John Decker, solutions architect at Cadence, pointed out there are still ideas being shared between CPF and UPF, but stressed that there are some new features in UPF that will help designers. One is a firmer concept of hierarchical design. “Today, UPF doesn’t have much of a strong hierarchy concept — it’s a black box model — but when we’re using this in actual design flows, there are a lot of bottom-up implementation flows while most of the verification is done top-down.”

He noted the 2.0 spec leaves a lot of opportunity for top-level power intent to change what the lower-level block would see if it were implemented in isolation. There is a disconnect between the two, and that can cause a lot of functional problems. “One of the big concepts is a much stronger hierarchy concept, so you can define a block to say, ‘I’m going to implement this block outside in its own context; even in a top-level flow, I’m going to treat it in the same exact way.’ That’s going to be a real boon for a lot of design application flows.”
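For context, UPF already lets an integrator load a block’s power intent unchanged into the block’s instance scope. The one-line pattern below (file and instance names hypothetical) is the kind of bottom-up reuse that the stronger hierarchy concept is meant to make robust against top-level overrides:

    # Top-level UPF: reuse the block's own, separately verified power intent
    # in its instance scope, so the block sees the same intent in both the
    # block-level and top-level flows.
    load_upf cpu_block.upf -scope u_top/u_cpu0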

Of course, another big area is the move to enabling system-level power analysis. From Decker’s perspective, this will start enabling a somewhat more standardized way of modeling system-level power. “It’s the first step. It’s not the whole picture, but at least it’s moving in the right direction.”

Yatin Trivedi, director of standards and interoperability programs at Synopsys, pointed out the continued evolution of UPF shows some important things. “The most important thing is, as one would imagine, why would you enhance a standard? You would enhance a standard because there is a community that is using it. The community has vested interest in keeping this standard growing, keeping it evolving. Many standards reach a stage and then something else has overtaken it and therefore people lose interest. The fact that it is going through another revision means people are looking at it and saying, ‘Here is a base on which I can do more things, so let me add those.’ The most important thing is that the community has interest.”

Echoing Decker’s statements, Trivedi said the second significant thing that’s happened with UPF 3.0 is that the features covered in UPF 1.0, 2.0 and 2.1 were at the lower level, in the sense that there was essentially nothing for describing an IP block’s power intent and, therefore, how a simulator, synthesis tool or DFT tool would make use of it. “In that context, we have now come up another level, and that’s the significant feature of P1801 that’s about to go to ballot — the system-level power (SLP). If you think about system-level power from a hardware engineer’s perspective, they are looking at it as how to model the IP blocks, which themselves are fairly complex, and pass it down to the system-level designer so that all the power requirements and power considerations can be consistent across that SoC integration, which includes IP blocks and some homegrown logic.”

He also noted some of the user companies participating in the IEEE working group, including ARM, Intel, Qualcomm, Broadcom and Microsoft, which are considered core members and are quite vocal and active.

Interestingly, Bernard Murphy, CTO of Atrenta, noted that the company recently completed a customer survey of SoC needs, in which the topic of SLP came up frequently as an emerging must-have priority. “You can estimate implemented power today at the RTL or gate level, but only for a tiny load sample, which is no help in estimating energy drain under realistic loads. That’s what you need in order to know how long the battery will last. And you definitely can’t figure out peak power, which affects temperature and therefore reliability. All you can do is build silicon, measure real power while running software, and hope you are close to spec. If not, you have to go back and redesign the power intent.”

He explained this obviously is not an ideal approach to power optimization. “A better approach would be to model power reasonably accurately while the design is still fluid enough to allow for power intent changes. Gate-level and RTL power modeling need massive activity files detailing changes on each node. System-level models could work with much smaller amounts of data, making them much more effective with more abstracted functional models (spreadsheets or TLM for example) or with hardware modeling (emulation or perhaps even prototyping).”

That said, he expects SLP models to start appearing from the IP vendors first. “They already need a more standardized way to communicate power behavior to their customers, especially for architectural modeling. It will take more time for modeling to move to in-house legacy IP. The problem there will be lack of accessible/current expertise to understand the underlying functionality in sufficient detail to build those models. I expect this will drive new automation to ease model development.”

Mark Milligan, vice president of marketing at Calypto, agreed that a significant addition in UPF 3.0 is system-level power modeling. “Power states of a system used to be inferred from various sources; they can be explicitly defined now using UPF 3.0. First of all, this makes interpretation of states and transitions uniform across the tools. We think UPF 3.0 is an evolutionary change. We think verification tools will be immediately affected by this change, but implementation methodologies will take time as designers become comfortable with UPF 3.0. Overall, it is a needed change for the industry.”


