Unraveling Power Methodologies

What methodologies are being followed for the creation of low-power designs? One thing that is clear is that they are in a state of flux.


When working on articles, the editors at Semiconductor Engineering sometimes hear things that make them stand back and question what seems to be an industry truth. One such statement came up last month during research for a different article. The statement was:

Most designs are not top-down, but in fact bottom-up when it comes to power management. The most used methodology today is that the RTL team is given UPF by the back-end team to verify. The back-end guys are defining the power infrastructure and then passing it up to the architectural guys rather than the other way around.

Could this be true? And if so, what would it take to get designers using what the industry is touting as a better methodology?

There appears to be a divide in the community based on history and the availability of tools and flows. “CPF and then UPF originated from physical implementation because that is where power-aware design was realized,” says Anand Iyer, director of product marketing for Calypto. “Hence both formats have a strong leaning toward the back-end. It has been the responsibility of the physical designers and implementation engineers to put together UPF files and pass them to RTL designers for verification.”
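
To make that handoff concrete, below is a minimal sketch of the kind of UPF an implementation team might write and pass up for RTL verification. The instance, net, and control names (u_core, VDD_SW, pwr_en) are hypothetical:

    # Switchable power domain wrapped around a core instance
    create_power_domain PD_CORE -elements {u_core}

    # Supply infrastructure as defined by the back-end team
    create_supply_port VDD
    create_supply_port VSS
    create_supply_net  VDD
    create_supply_net  VSS
    create_supply_net  VDD_SW
    connect_supply_net VDD -ports VDD
    connect_supply_net VSS -ports VSS

    # Header switch that gates the core supply under pwr_en
    create_power_switch SW_CORE -domain PD_CORE \
        -input_supply_port  {in VDD} \
        -output_supply_port {out VDD_SW} \
        -control_port       {ctrl pwr_en} \
        -on_state           {ON in {ctrl}}

    # The domain runs from the switched supply
    set_domain_supply_net PD_CORE \
        -primary_power_net VDD_SW -primary_ground_net VSS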

“There are some fairly large companies who are still driving power from the back-end,” says Erich Marschner, verification architect at Mentor Graphics and vice chair of the IEEE 1801 working group. “The implementation team inserts the power circuitry and documents what it does for verification purposes when their hard macro is used within a larger system. This is not as common as it was two or three years ago.” Marschner explains that this is part of the transition for any type of new technology. “Companies that believe their core competency is in the area of power optimization are more reluctant to let go of direct control over the design of the power aspects, and more reluctant to delegate that to tools that would be driven by the UPF spec in an abstract way.”

But the industry says that the days of doing this may be numbered. “This type of flow works when the design has a few well-understood power domains,” says Iyer. “But if the number of power domains is rather large, a physical designer cannot put together UPF files anymore. Top-down planning is needed, and usually a power architect takes over the development of UPF.”

“If you are a block designer, how do you know how to handle the power architecture of your block?” asks Luke Lang, customer engagement director for Cadence. “Can you define when it is going to be shut off or where the power switches are? You can’t make this kind of decision bottom-up.”

He’s not alone in seeing these problems. “If you treat power as a back-end process, you run the risk that there could be some serious functional issues later, introduced by isolation or retention,” warns John Decker, solutions architect at Cadence.
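
For illustration, those protections show up in UPF as isolation and retention strategies. Below is a sketch, continuing the hypothetical PD_CORE domain above, with iso_en, save_en, and restore_en as assumed control signals:

    # Clamp the powered-down domain's outputs to 0 so downstream
    # logic never sees floating values; a missing or wrong clamp is
    # exactly the kind of functional issue Decker warns about
    set_isolation ISO_CORE -domain PD_CORE \
        -isolation_power_net VDD -isolation_ground_net VSS \
        -clamp_value 0 -applies_to outputs
    set_isolation_control ISO_CORE -domain PD_CORE \
        -isolation_signal iso_en -isolation_sense high \
        -location parent

    # Retain key state across the power-down so the block can
    # resume without a full reset
    set_retention RET_CORE -domain PD_CORE \
        -retention_power_net VDD -retention_ground_net VSS
    set_retention_control RET_CORE -domain PD_CORE \
        -save_signal    {save_en high} \
        -restore_signal {restore_en low}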

But many aspects of a design are bottom-up, especially when IP and reuse come into the picture. “The bottom-up approach is ideal for IP reuse and the introduction of supply sets in UPF 2.0 really helps with this process,” explains Mary Ann White, product marketing director for Galaxy low power implementation products at Synopsys.
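
As a rough sketch of what White describes, a block can be written against an abstract supply set handle and bound to real supplies by the integrator later. All names here are hypothetical:

    # Block-level UPF: the IP's domain refers only to its implicit
    # 'primary' supply set handle, not to any actual chip supplies
    create_power_domain PD_IP -elements {u_ip}

    # Integrator-level UPF: create a concrete supply set and bind
    # it to the block's primary handle
    create_supply_set SS_CORE \
        -function {power VDD_SW} \
        -function {ground VSS}
    associate_supply_set SS_CORE -handle PD_IP.primary

This separation is what makes the bottom-up piece reusable: the same block-level file works unchanged whether the integrator binds it to an always-on or a switched supply.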

Lang agrees and points out there are two specific cases. “If it is hard IP, such as a PCI Express block, it might need to be somewhat flexible in that it could be used in a situation where they do need to shut it down and save power. In these situations, people will probably over design the block and add in power features that may not be used.” The other case is for soft IP. Lang explains that they will often provide a power intent file to go along with the block that can be integrated within the SoC. “For soft IP, the IP provider will probably provide a power intent file, but the user can reconfigure this. This may be a processor core. If I instantiate multiple of these cores, maybe one of them is going to be on all the time and others may be shut down, then I have the same core with different power intent and power switches will only be inserted where necessary.”
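
One hypothetical way to express the multi-core scenario Lang describes: two instances of the same soft core, where only one gets a switch and the isolation that goes with it. All names are illustrative:

    # Same core, different power intent per instance
    create_power_domain PD_CPU0 -elements {u_cpu0}
    create_power_domain PD_CPU1 -elements {u_cpu1}

    # u_cpu0 is always on, fed directly from VDD
    set_domain_supply_net PD_CPU0 \
        -primary_power_net VDD -primary_ground_net VSS

    # u_cpu1 sits behind a switch, so it also needs isolation
    create_supply_net VDD_CPU1
    create_power_switch SW_CPU1 -domain PD_CPU1 \
        -input_supply_port  {in VDD} \
        -output_supply_port {out VDD_CPU1} \
        -control_port       {ctrl cpu1_pwr_en} \
        -on_state           {ON in {ctrl}}
    set_domain_supply_net PD_CPU1 \
        -primary_power_net VDD_CPU1 -primary_ground_net VSS
    set_isolation ISO_CPU1 -domain PD_CPU1 \
        -isolation_power_net VDD -isolation_ground_net VSS \
        -clamp_value 0 -applies_to outputs
    set_isolation_control ISO_CPU1 -domain PD_CPU1 \
        -isolation_signal cpu1_iso_en -isolation_sense high \
        -location parent

As Lang notes, the power switch is only inserted where it is actually needed.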

White agrees that the decision is not black and white. “Whether or not you want to include power management cells inside your design’s interface is your choice. There are pros and cons to doing it yourself versus having the integrator do it. If you want to be absolutely certain your design is protected, you should consider the worst case and include the necessary power management cells. However, this may result in an area penalty if the power management cells are not necessary. If you are comfortable with letting the integrator take care of the protection for your design, then have the integrator write the UPF to implement them as needed.”

But the industry is moving beyond simple power gating and designs are getting more aggressive. “While power gating and other techniques are effective today, these techniques may not be enough for many low-power SoCs,” points out Hem Hingarh, vice president of engineering for Synapse Design. “We have to think in terms of design-for-power and in terms of energy consumption from the beginning, including system partitioning, architectural power management, RTL/logic design, power-aware test, and physical implementation in addition to technology and process advancements.”

This adds a new role within the design team. “Many companies are advertising for power architects,” says Iyer. “These are the guys who can put together UPF from a top-down perspective. As an EDA company, we have to look at how to lower the barrier to adoption. This means making it easier for RTL designers to adopt the solutions. As design for power becomes more critical, they will be looking for tools that give RTL designers options related to low power.”

The technology is far from standing still, and major new capabilities are on their way. “In UPF 2.0, a notion of successive refinement enables power intent to be specified incrementally over time,” explains Marschner. “It starts with a system-level model and the UPF is extended over time as you integrate IP blocks. It is further extended as you define an implementation strategy for particular technologies. This concept has been around for five to six years, but only recently has enough of UPF 2.0 been available in tools for people to try out the flow and move to a more advanced methodology.” ARM is one company that is beginning to adopt and promote this flow, and it has started to deliver UPF for its IP based on the concept of successive refinement.
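
In practice, successive refinement layers UPF files rather than rewriting them. Here is a sketch of what an integrator's configuration layer might look like, with hypothetical file and instance names:

    # soc_config.upf -- the integrator's configuration layer

    # Load the constraint UPF shipped with the IP, scoped to each
    # instance it governs
    load_upf cpu_constraints.upf -scope u_cpu0
    load_upf cpu_constraints.upf -scope u_cpu1

    # Layer SoC-level decisions on top, e.g. the power states of
    # the switchable instance
    add_power_state PD_CPU1.primary \
        -state {RUN   -simstate NORMAL} \
        -state {SLEEP -simstate CORRUPT}

    # A later implementation-layer UPF refines the same intent
    # again, adding technology detail such as switch and
    # isolation cell choices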

Figure courtesy of Mentor Graphics

Some holes remain and other tools are struggling. “We see customers doing UPF verification using simulation, but here they are looking at things such as memory initialization, checking for isolation, and making sure things are structurally correct,” says Jim Kenney, marketing director for the emulation division of Mentor Graphics. “Higher-level power scenarios involve software, and these cannot be run in simulation because it is too slow; this is what emulation is used for. Emulation allows them to model the entire chip and enables them to run the software state machine that controls the power circuitry and powers off various islands in the design.”

Emulation can also help with another problem related to low-power flows. “Power estimation is the other side of the coin,” says Lauro Rizzatti, a verification consultant. “Power consumption is software dependent, and so again you need an emulator to do this.”

“The current tools are first generation tools,” points out Iyer. “Designers either need to go to the gate-level for accurate numbers, which defeats the whole purpose, or they have to work with lower accuracy, which is almost no better than a spreadsheet. How can we get the necessary accuracy?”

Power modeling and the format for capturing power data are probably two of the hottest topics within IEEE and Si2 according to Lang. “This will continue for the next year or two. The problem we are attempting to address is how to build a model that supports multiple levels of abstraction and that can provide reasonably accurate power estimation at each level.”

Marschner talks about the activities within the IEEE. “Two years ago a discussion started about system-level power modeling, which led to the Low Power Study Group within the IEEE. Two new groups were formed: IEEE P2415, which addresses the interaction of software with hardware, and IEEE P2416, which covers power characterization of IP. IEEE 1801 provides a sound basis for modeling the power states of a system, both as the foundation for this type of characterization and as a mechanism to define the power states that a system presents to software so that software can control them. The definition of power states binds all three of these activities together, and we have to ensure that these remain consistent.”
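
The binding role of power states can be made concrete. Below is a hedged sketch, with hypothetical domain and signal names, of IEEE 1801 system-level states of the kind a P2415-style software interface would observe and control:

    # System power states on a hypothetical top-level domain; the
    # logic expressions name the control signals that software
    # ultimately drives
    add_power_state PD_SOC.primary \
        -state {ACTIVE  -logic_expr {pwr_en && !sleep_req} -simstate NORMAL} \
        -state {STANDBY -logic_expr {pwr_en && sleep_req} -simstate CORRUPT_ON_ACTIVITY} \
        -state {OFF     -logic_expr {!pwr_en} -simstate CORRUPT}

P2416-style characterization would then attach power data to states like these, keeping all three activities anchored to the same definitions.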

Hingarh lives with these problems every day and provides a wish list for EDA developers. The list includes:

  • Provide more help to the designer during analysis and optimization such as identifying wasted power;
  • Make recommendations for opportunities to reduce power consumption;
  • Identify missed clock gating at a fine-grained level;
  • Identify opportunities for data bus gating;
  • Identify redundant RTL code and logic gates;
  • Measure block activity (can it be powered down, or will clock gating be enough?);
  • Add automated optimization that can deliver additional power savings at the end of the design process, along with a way to update the UPF document;
  • Provide a way to compare RTL-estimated power with post-layout power; and
  • Provide better accuracy of data.

In addition to logic optimization, many designs contain increasing amounts of analog, and this requires attention as well. “Analog is mainly custom design,” Lang points out. “But we need to be able to integrate these blocks into the flow, so that if you have a block that includes a regulator or isolation, the models need to capture that information and present it in a format that the upper-level tools can handle.”

Analog adds a layer of complexity as well. “Power switching is a complex issue because analog blocks require technology-dependent stabilization and can have lock times on the order of tens of milliseconds,” says Hingarh.

What is clear is that the demands for decreasing power will increase as the visions for the Internet of Things, wearables and other battery-operated equipment take off and sensitivity grows about energy consumption in the server farms that make up the Cloud. EDA is likely to be driven by power concerns across the entire product range for the foreseeable future.


