Three Common SoC Power Management Myths

Approaching power management from the architectural design level.


SoCs are power-sensitive. Sometimes designers worry about the impact their chips have on end-product battery life. Sometimes they worry about the packaging and thermal consequences of dissipating too much power. And sometimes they are simply worried about the massive cooling budgets of data centers, where thousands of their chips generate heat running in servers, storage, and networking gear.

Increasing functional integration inevitably drives power consumption higher, so it's imperative that we find mechanisms to reduce the power used by SoCs and the end products they enable. Beyond those reasons there is one more, very obvious one: energy is a finite resource. Sustainability and conservation are everyone's responsibility, including SoC designers. That's power-sensitive design!

In my 20+ years working in the semiconductor IP industry as an SoC architect, I've found that designers hold three common myths that cause them to resist architectural power management.

Myth 1: It’s OK to wait to consider power savings.
Designers typically wait until the end of the integration process to begin considering architectural power savings opportunities and applying power savings techniques. They think, “the logic synthesis tool will help us” or “we’ll be able to apply techniques like sequential clock gating.”

What we know from the last 20 years of academic and industrial development is that the biggest leverage in trying to save energy is to apply power savings techniques at the architectural design level. That’s because at the architectural level we uniquely know the characteristics of the application and its algorithms, and we can choose power management techniques that take advantage of the idle moments inside the circuit.
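To make the point concrete, here is a minimal sketch of the kind of idle-window knowledge only the architect has. The workload and all numbers are illustrative assumptions (a hypothetical 60 fps video block that finishes each frame in 4 ms), not data from any real design: the architect knows the block is guaranteed idle until the next frame tick, a window far too long to be discovered by cycle-by-cycle tools downstream.

```python
# Illustrative sketch, assumed numbers: why architecture-level knowledge
# of the application matters. A hypothetical video block running at 60 fps
# finishes each frame early, so the architect knows it sits idle until the
# next frame starts; an RTL-level tool sees only a few idle cycles at a time.

FRAME_PERIOD_S = 1 / 60       # 60 fps frame tick (~16.7 ms)
PROCESSING_TIME_S = 4e-3      # assumption: each frame takes 4 ms to process

# Guaranteed idle window per frame, known at the architectural level.
known_idle_s = FRAME_PERIOD_S - PROCESSING_TIME_S
print(f"guaranteed idle window per frame: {known_idle_s * 1e3:.1f} ms")  # 12.7 ms
```

With a guaranteed multi-millisecond window, the architect can safely schedule a deep power-down of the whole block; without that application knowledge, later design stages can only harvest much smaller, opportunistic savings such as local clock gating.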

Power optimization potential at different levels of design abstraction. Source: Accellera ISQED 2009.

Myth 2: Thrashing defeats power savings.
Power thrashing results when a design initiates a transition to a lower power state, but before it gets to reap the benefits of being in that lower power state, it receives notification to turn back on. The worry here is that the design has expended more energy in the transitions to and from the low power state than it actually saves while in that low power state. Thrashing is a real problem with conventional techniques. However, on closer inspection of the problem, you quickly find that for a large number of the building blocks that designers use in their SoCs, it’s very easy to avoid power thrashing if you have the right power management technology. Significant power can be saved and thrashing avoided by enabling power transitions to happen faster – several orders of magnitude faster!
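The thrashing argument reduces to a classic break-even test, sketched below. This is not any vendor's implementation, and the power and energy numbers are made-up examples: entering a low-power state saves net energy only when the idle window exceeds the break-even time, and making transitions orders of magnitude faster (and cheaper) shrinks that break-even time correspondingly.

```python
# Illustrative sketch, assumed numbers: the break-even test that decides
# whether entering a low-power state saves net energy or causes thrashing.

def break_even_time_s(transition_energy_j: float,
                      active_power_w: float,
                      sleep_power_w: float) -> float:
    """Minimum idle duration for which sleeping saves energy.

    Sleeping for t seconds saves (active - sleep) * t joules but costs
    transition_energy_j to enter and exit the state; break-even is where
    the two are equal.
    """
    return transition_energy_j / (active_power_w - sleep_power_w)

def should_sleep(predicted_idle_s: float, t_be_s: float) -> bool:
    # Enter the low-power state only when the predicted idle window
    # exceeds the break-even time; otherwise stay awake to avoid thrashing.
    return predicted_idle_s > t_be_s

# Example: a block burning 100 mW active and 1 mW asleep, with 50 uJ of
# total entry + exit energy, breaks even after roughly half a millisecond.
t_be = break_even_time_s(50e-6, 0.100, 0.001)
print(f"break-even idle time: {t_be * 1e3:.2f} ms")        # ~0.51 ms
print(should_sleep(predicted_idle_s=2e-3, t_be_s=t_be))    # True
print(should_sleep(predicted_idle_s=0.1e-3, t_be_s=t_be))  # False
```

Note the lever in the formula: cut the transition energy by 100x and the break-even time drops by 100x, which turns idle windows that used to be thrashing hazards into profitable sleep opportunities.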

Myth 3: Applying architectural power management techniques is difficult and risky.
What we know from the experience of the companies developing the most advanced digital SoCs on the market (application processors and cellular modems, for example) is that these architectural techniques are indeed practiced safely today, but by large, expert power design and verification teams. Saving power has been an expensive proposition involving considerable engineering resources. What's needed is a solution that makes these techniques available to users who don't have huge design and verification teams to apply to the problem.

If you want to learn how to dispel common power management myths held by your design organization, please request an invitation to one of our upcoming seminars. These seminars will introduce a new category of power management technology called the Energy Processing Unit (EPU) and share real-world use cases that demonstrate the potential for substantial energy savings.