Powerful New Standard

Part 2: The latest version of IEEE 1801 enables complete power-aware flows to be constructed using a meet-in-the-middle concept. But when will these flows become available?


In December 2015, the IEEE released the latest version of the 1801 specification, titled IEEE Standard for Design and Verification of Low-Power Integrated Circuits, but most people know it as UPF, the Unified Power Format. The standard provides a way to specify the power intent associated with a design. With it, a designer can define the various power states of the design and the contexts associated with those states. That information can be used for modeling, verification and implementation. Part one examined the history of the standard, its new capabilities and the growing importance of power-aware design practices.

One of the more important aspects of the new standard is the ability to perform successive refinement of the power intent. While it may appear this is for the benefit of a top-down Electronic System-Level (ESL) flow, it is also an enabler for the IP industry. “IP providers would like to be able to specify critical aspects of power intent for their IP without predicating low power implementation detail,” says Alan Gibbons, principal engineer at Synopsys. “IP vendors are starting to consider rolling out IP with associated UPF-based power intent that supports a successive refinement methodology, and EDA vendors are also working on enhancing their tools in support of this methodology. It is still early days for successive refinement and little, if any, commercial IP has been deployed this way yet, but it is an approach that could gain wider adoption in the coming years.”
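To make the idea concrete, the sketch below shows roughly what successive refinement could look like in practice. It is a minimal, hypothetical example: the domain, instance and supply names (PD_IP, u_core, VDD_IP) are invented for illustration, and the exact options an IP vendor would use depend on its methodology. The vendor ships a constraint-level file that states what the block requires; the integrator later adds implementation detail without rewriting that intent.

    # Constraint UPF shipped with the IP (vendor view) -- a sketch.
    # It states what the block needs, with no implementation detail.
    create_power_domain PD_IP -atomic -elements {u_core}
    add_power_state PD_IP.primary \
        -state RUN   {-simstate NORMAL} \
        -state SLEEP {-simstate CORRUPT}
    # Inputs must be clamped low whenever the block is powered down.
    set_port_attributes -elements {u_core} -applies_to inputs -clamp_value 0

    # Implementation UPF added later by the integrator: concrete
    # supplies are bound to the abstract handles defined above.
    create_supply_set SS_IP -function {power VDD_IP} -function {ground VSS}
    associate_supply_set SS_IP -handle PD_IP.primary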

Others are hopeful about the rate of adoption for this capability. “This will be a real enabler for people to be able to deliver IP more efficiently,” says John Decker, solutions architect at Cadence. “It will probably be one of the features of the new spec that will see quick adoption. With the concept of the terminal boundary, it basically says, ‘This is what the block was built for and when you instantiate it, make sure the top level understands it and does the appropriate isolation at the top level.'”
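The integrator-side obligation Decker describes might look something like the following at the SoC level. Again a hypothetical sketch: the isolation strategy name, control signal and always-on supply set (iso_ip0, iso_en, SS_AON) are assumptions, and the UPF 3.0 terminal-boundary attribute itself is not shown.

    # Top-level UPF honoring the block's boundary: the integrator
    # isolates the IP's outputs in the parent scope, as the block's
    # power intent requires.
    set_isolation iso_ip0 -domain PD_IP -applies_to outputs \
        -isolation_supply_set SS_AON -clamp_value 0 \
        -isolation_signal iso_en -isolation_sense high \
        -location parent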

Preeti Gupta, director for RTL product management at Ansys, is in agreement. “We expect to see even greater reuse of IP as a result of this standard. Anything that enables power constraints to be passed at IP handoff, and a system to be built up with all of these IP blocks integrated, is something we want to enable.”

It would also make it easier for people to select IP based on its power suitability for a particular application. “Power capabilities have been on the requirements spec for IP for several years, and people are looking at this as a selection criterion,” says Decker. “There is still a lot of work to do in the development of power models so they can easily plug into a system to analyze how well an IP would work in that system, but we already supply multiple power profiles for blocks.”

Tool rollout
With so much new capability enabled by the updated standard, questions quickly arise about rollout strategies and how soon enough capability will be available, from each vendor and in each tool, to put together complete flows. The first obstacle to adoption is voiced by Drew Wingard, chief technology officer at Sonics, who asks, “When are they going to publish it? I keep looking, but it is not there. I have seen a draft, but that is not enough for me to do anything with.”

Dennis Brophy, director of strategic business development for Mentor Graphics, responds: “The spec was approved but has not been published yet, and we are trying to get that done as soon as possible. My hope is that by DVCon copies will be available.”

Wingard also cautions about adoption rates. “Just building a format to capture the information doesn’t make the models appear, so we have to wait until enough customers have pushed for enough models to exist before it all works. This will continue to be a substantial issue, but it is a place to get started and breaks down one of the barriers to adoption.”

Viable flows require both models and several tools to be upgraded. “You may want to think about tools becoming power aware – simulation, the debugger – any tool that should be aware of a power infrastructure that changes behavior,” says Ellie Burns, product manager for Mentor. “Those are the ones in the verification world. It is important that the entire tool chain becomes power-aware.”

It appears the developers of the standard have learned a thing or two from the past and built the new version in a way that will ease tool development. “The transition from UPF 1.0 to 2.0 took time because some concepts changed,” points out Arti Dwivedi, senior product specialist at Ansys. “For example, we saw the migration from supply nets to supply sets. That required a change in methodology. From 2.0 to 3.0, the transition should be faster because they are more aligned in terms of concepts and constructs.”
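For readers who have not made that migration, the contrast looks roughly like this. These are two alternative sketches of the same intent, not one file, and the names (PD_top, VDD, VSS, SS_main) are purely illustrative.

    # UPF 1.0 style: individual supply nets wired to the domain.
    create_power_domain PD_top
    create_supply_net  VDD -domain PD_top
    create_supply_net  VSS -domain PD_top
    create_supply_port VDD -domain PD_top
    connect_supply_net VDD -ports VDD
    set_domain_supply_net PD_top -primary_power_net VDD -primary_ground_net VSS

    # UPF 2.0 style: the same intent expressed as an abstract supply
    # set bound to the domain's primary handle.
    create_supply_set SS_main -function {power VDD} -function {ground VSS}
    create_power_domain PD_top -supply {primary SS_main}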

Others agree. “The ability to reuse key constructs and principles from a mature, stable standard greatly eased the introduction of the necessary UPF extensions for system-level power modeling,” adds Gibbons. “UPF 3.0 now offers us the ability to use the same language in a top-down approach during hardware and software architecture exploration and system-level design in general, as well as in a bottom-up fashion for the low-power implementation and verification of IP.”
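UPF 3.0’s component power model construct is one of those bottom-up extensions. The skeleton below is a sketch under assumptions: the model and instance names (ip_pm, u_soc/u_ip0) are hypothetical, and the standard defines further options for mapping supplies and parameters that are omitted here.

    # A reusable component power model wrapping the IP's power intent.
    begin_power_model ip_pm
        create_power_domain PD_IP -elements {.}
        add_power_state PD_IP.primary \
            -state RUN   {-simstate NORMAL} \
            -state SLEEP {-simstate CORRUPT}
    end_power_model

    # Bottom-up reuse: bind the model to an instance in the SoC's UPF.
    apply_power_model ip_pm -elements {u_soc/u_ip0}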

In many cases, several tools must be upgraded to make a flow, and that will certainly be the case for new top-down ESL flows. “System-level power modeling will require time to bring to market and for people to get used to,” says Decker. “Several EDA vendors already have pieces of the bottom-up flow enabled in their tools, and the standard is really making sure there is consistency in the industry, with a common look and feel between them. Customers will be using this within the year. System-level modeling is a lot harder to say. There are pockets of intense interest, but the demand is not as broad. That makes it harder to predict when it will become mainstream.”

While there are some customers pushing hard for these new features, the industry may not be fully on board yet. “There are a set of folks who are ahead of this and they will be happy to use it as soon as they can,” says Wingard. “The next set includes people who have not been doing much with UPF and that is the larger community. Adoption will be based on how effectively we have put up best practices, methodologies and flows for the rest of the market to be able to apply the technology.”

Others see slow uptake for the technology. “If you look at usage today, a good percentage, and perhaps a majority, of our customers are writing at a UPF 1.0 level plus some features from 2.0,” says Decker. “The 2.0 spec was released in 2009 and customers have still not fully adopted it. The 2015 spec will take longer.”

It is not just users who are slow to adopt; slow tool implementation holds adoption back, as well. “There are plenty of EDA vendors who have not yet caught up with UPF 2.1,” says Wingard. “My prediction is that everything that has to be done for the implementation level will be there, and that the further away you get from implementation, the spottier the support will be.”

Wingard also believes that the rate of standards development may be a little too fast. “I hope that we can live off UPF 3.0 (IEEE 1801-2015) for a long time. It might be a good idea to slow down the rate of change of the spec for a while.”

And yet some in the industry believe that the standard is not yet finished. “The larger problem remains unaddressed,” claims Vinod Viswanath, director of research and development at Real Intent. “Today, when SoCs are optimized for specific applications, the optimizations are done in isolation without utilizing the knowledge of the workloads. Due to the lack of hardware/software cooperation in power management, the platform as a whole cannot anticipate power requirements of the application ahead of time and, instead, has to perform power management reactively.”

Mentor’s Burns points out some of the issues associated with the creation of the ESL flow. “UPF is power modeling, and its purpose in life is to do power gating, so it is the infrastructure needed to shut down and control leakage current. In the ESL world, we have to control both leakage and dynamic power, so there will need to be models that flow down to implementation for all parts of this.”
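One piece of that bridge already exists in the standard: power states can carry voltage information, which downstream dynamic-power analysis can key off. A minimal, hypothetical sketch, with invented names (SS_CPU, VDD_CPU) and voltage values chosen purely for illustration:

    # Power states capturing DVFS operating points on a CPU supply,
    # so each state implies both a leakage and a dynamic power context.
    create_supply_set SS_CPU -function {power VDD_CPU} -function {ground VSS}
    add_power_state SS_CPU \
        -state HI_PERF  {-supply_expr {power == `{FULL_ON, 1.05}}} \
        -state LO_POWER {-supply_expr {power == `{FULL_ON, 0.80}}} \
        -state OFF      {-supply_expr {power == `{OFF}}}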

Some work is going on within Accellera to define scenarios that could be used to drive power optimization, but that group is not yet looking at power as a primary driver for its requirements. “At the workload level, the application needs to specify what kind of power requirements it has,” says Viswanath. “This translates to compiler-level constraints, and then OS constraints, and so on, all the way down to the RTL and gate-level designs.”

The generation of the stimulus is just one part of the problem. “To estimate power for live applications you need to be able to take in large vector sets and to be able to do analysis for these,” says Ansys’ Gupta. “This may require a second or more of simulation data. With UPF enabling system-level power and the ability to bring in live vectors, we feel that more people will want to estimate the power for these live applications early on.”

Mentor and Ansys have developed an interface between emulation and power analysis tools. “While this is not completely UPF-centric, the way it ties in for system-level power is interesting, because the emulator does not have to do a low-power-aware simulation,” says Gupta. “It does not have to understand all of the concepts of UPF. It can produce the raw activity, and a power tool can then apply the necessary constraints. We can do what-if analysis at the system level, and this will lead to power-related design decisions as well.”

Viswanath sees the need for information to migrate in both directions of the flow. “Lower levels need to provide information all the way up to the OS and application levels, so that they can respond to levels of power or heat in the hardware, and within the tool flow for refinement and implementation. There needs to be a constraint solver/power manager that can combine the feedback with the spec to generate the next set of optimizations.”

Other dimensions of the problem involve heat and packaging. “There is still a lot to be done in terms of power modeling in general, be it in terms of common modeling or in terms of a model that can be used with the package,” says Gupta. “We have taken the approach of model exchange between the various levels and all of these are currently proprietary Ansys models. There is a lot of scope there, but it is not clear if this will come under the scope of UPF or others. There is a lot left to be done in terms of power modeling and standardization.”

Viswanath is in full agreement. “The objective of thermal management is to meet maximum operating temperature constraints while maintaining performance requirements. Typically, thermal control policies manage power consumption using dynamic voltage and frequency scaling (DVFS), and can be targeted at power density reduction because that has the effect of reducing overall temperature. Thermal control policies using DVFS avoid violations of temperature bounds by transitioning processors into low-power modes to cool down, but obviously this incurs a performance penalty.”

Power-aware design may be the biggest overhaul the design process has seen since its inception, and many pieces of the puzzle still have to be solved. There are already several standards bodies involved, and multiple groups within most of them are each working on a small piece of it. At the end of the day, the industry has to hope that common sense rather than politics is the glue that holds all of these efforts together so that all of the pieces play well with one another.



1 comment

Kev says:

Given that SystemVerilog still can’t do current and voltage together on a wire, I’m somewhat sceptical when people claim fixes in power intent and verification; there are just too many elephants wandering about the room.
