Experts At The Table: Are We Cool?

Second of three parts: Power as a differentiator; who’s responsible for which power models; exchanging information across the supply chain; worst-case scenarios.


By Ed Sperling
Low-Power Engineering sat down to discuss progress in the realm of power management with Ambrose Low, director of IC Design Engineering for Broadcom’s mobile platforms group; Ruggero Castagnetti, distinguished engineer at LSI; and Andy Brotman, vice president of design infrastructure at GlobalFoundries. What follows are excerpts of that conversation.

LPE: How do we get the message out that there is frequently more designed into an SoC than is being used by OEMs—and which could greatly benefit the user?
Low: We have a long way to go. We have hundreds of knobs in hardware that can be turned by the software to take advantage of low-power circuitry within the design. You need to optimize the software to take advantage of what is available to minimize power.
Castagnetti: No one wants to take risks in this industry. As we start seeing power limits becoming unachievable, customers are willing to go that extra step. So we are definitely seeing much more aggressive adoption of power management techniques, even in the wired space where you could argue that power is unlimited and available. But there is a challenge in the wired side that needs to be addressed.

LPE: Is power now a differentiator in chips?
Brotman: From a foundry point of view, giving our customers capabilities to model all of these effects is critical. That’s a differentiator for us. As far as the marketplace goes, customers that have better power and performance will win.

LPE: Who’s responsible for the power models?
Brotman: The base power models are modeling device performance. That’s the foundries’ responsibility to provide. We also have to provide reference flows that people can look at to take advantage of these models. How do you put this into a design that generates proper performance?
Castagnetti: That’s a starting point—having good models at the transistor level that correlate to silicon. The other piece involves high-level models. So if you have a piece of IP, what are the power numbers associated with that? That may involve the IP providers, or maybe the overall system piece. And finally, just understanding from a system-use case which is the worst-case condition is critical. In the past, we have measured in-system power and found some interesting surprises.

LPE: You’re talking about a free flow of information across a disaggregated supply chain. How do you break down the barriers to make that happen?
Castagnetti: You have to show the value of what it means to be able to correlate predictions of worst case or average vs. what you can measure so you can make a more intelligent choice.
Low: As IC designers, we provide chip-level power models to our customers so they can use these models for their platform/system power analysis.

LPE: Are you getting that kind of information back from the foundries?
Low: No.
Brotman: We’re providing models at the transistor level, parasitic extraction models, libraries and power models. The models you’re talking about are higher-level.
Low: Yes, very high-level.
Brotman: That isn’t something the foundries can provide.
Low: We’re taking all these models from our foundries and building the chip-level power model that was discussed earlier. A step up from that is building a power model based on use case at the platform level, for example.

LPE: Are the tools there to do this kind of power measurement?
Low: Yes and no. We can leverage some of the tools we have today to analyze the power, but we still have a way to go with improving the power methodology/analysis. We have UPF and CPF, but they are not compatible.
Brotman: There are some tools that are adequate for doing predictions at the low level. At the abstract level, there are interim tools today for predicting how different software architectures will impact performance and power. There is definitely room for improvement.

LPE: Where is the low-hanging fruit in power savings? What can we do that we haven’t done so far?
Brotman: Turn off your phones.
Castagnetti: One thing we’re doing is designing for worst case with the maximum number of corners. Maybe we need to be able to live with a design that isn’t bulletproof. Whenever we do this bulletproof design we increase the power requirements.

LPE: That’s market-specific, right? A cell phone doesn’t matter as much as a pacemaker, for example.
Castagnetti: That’s correct.
Low: The ability to turn the logic off when it’s not needed enables us to minimize the power requirement. Other techniques, like dynamic voltage and frequency scaling, allow you to lower the frequency when the performance is not needed. Higher-VT device classes are used for leakage reduction. Tri-Gate transistors enable engineers to achieve performance and power targets while further reducing the operating voltage.
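The savings Low describes follow from the standard dynamic switching-power relation, P ≈ α·C·V²·f. The sketch below is a back-of-the-envelope illustration of why DVFS pays off; all numeric values are hypothetical assumptions for a notional SoC block, not figures from the panel.

```python
def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Dynamic switching power in watts: P = alpha * C * V^2 * f."""
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

# Hypothetical nominal operating point: 10% activity, 1 nF switched
# capacitance, 1.0 V supply, 1 GHz clock.
nominal = dynamic_power(0.1, 1e-9, 1.0, 1e9)

# DVFS: when full performance isn't needed, scale frequency and voltage
# together (here, 500 MHz at 0.8 V).
scaled = dynamic_power(0.1, 1e-9, 0.8, 0.5e9)

print(f"nominal: {nominal * 1e3:.0f} mW, scaled: {scaled * 1e3:.0f} mW")
# → nominal: 100 mW, scaled: 32 mW
```

Halving the frequency alone halves dynamic power, but because power scales with V², dropping the supply from 1.0 V to 0.8 V at the same time cuts the remainder by a further 36%—which is why DVFS lowers voltage along with frequency rather than frequency alone.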

LPE: How low can we actually drop the voltage? If we drop the voltage of memory too far we lose data.
Castagnetti: You can design chips with a much lower operating voltage. The question is how much performance or throughput you want to maintain. There is still a performance-voltage tradeoff. But fundamentally, you could switch the question around. If you want to be at 300 millivolts, how do you have to design your system?
Brotman: Yes, it’s a matter of what kind of performance you want in your system.
Low: You have to trade off that ability. Digital logic can use a lower operating voltage than a memory bit cell. The operating voltage level really depends on how much performance you want to achieve at that level. It’s a use-case model, and it dictates how low the logic voltage can be brought down.

LPE: Can software be written more efficiently so we can seriously drop power and boost performance?
Castagnetti: We should at least look at that. Software designers should consider how much power their software consumes.
