Power Modeling And Analysis

Experts at the Table, part 1: Are power models created early enough to be useful, and is that the best approach?


Semiconductor Engineering sat down to discuss power modeling and analysis with Drew Wingard, chief technology officer at Sonics; Tobias Bjerregaard, CEO of Teklatech; Vic Kulkarni, vice president and chief strategy officer at ANSYS; Andy Ladd, CEO of Baum; Jean-Marie Brunet, senior director of marketing for emulation at Mentor, a Siemens Business; and Rob Knoth, product manager at Cadence. What follows are excerpts of that conversation.

SE: It has been said that power modeling and analysis remains the most underdeveloped solution in the design of SoCs. Is that true?

Knoth: It is fertile ground for development, but I would not say it is completely undeveloped. There is good technology, there is good work out there, and customers are making use of it. Is there room for improvement? Sure. But it is not a wasteland.

Ladd: A lot of solutions are out there. We have been looking at, and offering, analysis and improvement at the back end of the flow. But there is not a lot of technology that has been able to move that far left and allow designers to do optimization earlier in the design flow, where there is more opportunity.

Brunet: The biggest problem is that there is a lot of optimization at the back end, on the gate-level netlist. But nobody questions if the testbench is meaningful. We see many customers not using a testbench anymore. Instead they go to an application platform, run on an emulator, and compute switching activity with something that is real.

Wingard: The problem is that too much of the design is frozen at the time you can get there. This is far too late for the biggest opportunities to save power. You need to do this at the architecture level. This is fundamentally frozen by the time you can get to an emulator. So I believe that the most fertile ground is to start even earlier because the place where you have the biggest chance is where the design is still the most fluid. You really want the insights from the application at the beginning.

Bjerregaard: People talk about precision, and they complain that you don't get enough precision at the front end. But maybe exact precision doesn't matter as much as knowing you are driving the design in the right direction, in relative terms. Secondly, when people look at the back end, they have great precision. But if they don't have the right use case, the right scenario, then even a precise power number doesn't matter. There has to be a balance between what is expected and what is required.

Knoth: I am seeing a lot of people taking advantage of a combination of techniques, where you are looking at the algorithmic level, but then hooking that up to emulation to get better RTL activity. That way they are making quantitative decisions about the architecture.

Kulkarni: I differentiate between analysis, or estimation at RTL, versus reduction. When you look at the architectural level, the opportunities are huge. But once the RTL has been created, the turnaround time for synthesis and classic gate-level power analysis is too long. With RTL the iteration loop is less than 30 minutes, with relative accuracy, and that allows you to make power-reduction decisions indirectly. This is analysis-driven power reduction, and clock gating is the big hammer. Then a seemingly innocuous engineering change order (ECO) gets introduced, and it causes all kinds of power consumption problems. Best practice is to look for hotspots and do some what-if analytics around them. We found that ECOs can introduce power spikes, and that is where power analysis can help. People look at clock-gating efficiency and then clock-gating enable efficiency. On the reduction side, there is analysis-based reduction, but the big hammer is architectural decisions, where there is still a new area of improvement to be had.
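The two metrics Kulkarni names can be made concrete. The sketch below is purely illustrative: the function name, the cycle-level boolean inputs, and the metric definitions are assumptions for explanation, not any vendor tool's API. Clock-gating efficiency measures how often a register's clock is actually shut off; enable efficiency measures how many of the remaining clock edges do useful work (i.e., how well the gating enables track real idleness).

```python
def clock_gating_stats(clock_active, data_changed):
    """Illustrative clock-gating metrics for one register over a trace.

    clock_active[i]  -- True if the register received a clock edge in cycle i
    data_changed[i]  -- True if the register's output actually toggled in cycle i
    """
    total = len(clock_active)
    gated = sum(1 for c in clock_active if not c)          # cycles with the clock shut off
    clocked = total - gated
    # A "wasted" edge: the register was clocked, but its data did not change.
    wasted = sum(1 for c, d in zip(clock_active, data_changed) if c and not d)
    return {
        "gating_efficiency": gated / total if total else 0.0,
        "enable_efficiency": 1.0 - wasted / clocked if clocked else 1.0,
    }
```

For example, a register clocked in two of four cycles, with only one of those edges capturing new data, scores 0.5 on both metrics: half its clock edges were gated away, and half of the remaining edges were wasted.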

Brunet: It is really common to achieve about 10% to 15% savings using RTL power analysis. But we can do better than that.

Wingard: I agree that it is good enough to understand if you are in trouble. My experience says that the best approach is to look at results from the past chip. There you have some real RTL. With that data you can make some reasonable relative estimates about what is going on. The interesting part of the flow is when you have identified the place where power is higher than you want: what do you do about it? The data from the past chip iteration may not provide the insights to help you figure out what to do next.

Knoth: Or where the problem is and how to fix it.

Wingard: Fundamentally, it boils down to looking for idle time. Most of the technologies we have been building have focused on how we execute active time. What we need to do is find the idle time. That is where the opportunity lies, and then we can get the insights about what to do. If we can find and classify idle moments, we can determine their periodicity and their expected length, and then determine the amount of power that could be saved by going to a different power state or applying clock gating. Sometimes you may want to do something even more aggressive. How do you make the tradeoffs and decide if it is worth introducing the physical overhead of power gating? This is an architectural design challenge, and I don't think we have enough modeling support for that.
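The tradeoff Wingard describes, whether a deeper power state is worth its entry and exit overhead for a given idle interval, can be sketched as a back-of-the-envelope model. Everything here is an assumption made up for illustration: the function name, the state tuples, and the abstract cycle-times-power energy units.

```python
def best_power_state(idle_cycles, active_power, states):
    """Pick the power state that saves the most energy over one idle interval.

    idle_cycles   -- length of the idle interval, in cycles
    active_power  -- power burned per cycle if we do nothing (abstract units)
    states        -- list of (name, state_power, transition_energy), where
                     transition_energy is the one-time cost of entering and
                     leaving the state (e.g., power gating's physical overhead)

    Returns (name, energy_saved), or None if staying active is cheapest.
    """
    baseline = active_power * idle_cycles          # energy burned by doing nothing
    best = None
    for name, state_power, overhead in states:
        saved = baseline - (state_power * idle_cycles + overhead)
        if saved > 0 and (best is None or saved > best[1]):
            best = (name, saved)
    return best
```

The interesting behavior is the crossover: for long idle intervals a deep state like power gating wins despite its large transition overhead, while for short intervals only cheap clock gating pays off, and for very short intervals nothing does. Classifying idle moments by expected length, as Wingard suggests, is exactly what makes this choice possible.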

Knoth: The key there is that it is not just a matter of architecture, it is not just analysis – it is the whole system. It needs the software, it needs stimulus, it has to be a real-world working example.

Wingard: Sometimes. If I am trying to power manage a processor, I agree with you. If I am looking at an accelerator that is helping to perform a function, software's role may only be setting it up and telling it to go. There I don't need as broad a scope of information. There is a high degree of synergy between the use cases people use to determine whether the chip will perform, the performance use cases, and the power use cases. They tend to have a high overlap. One thing we think is attractive is to make sure that what we capture for functional performance estimation and verification can be shared with people trying to do power optimization. Emulation is a great technology that can be helpful there. When we talk about tools that architects use, ROI is always a concern. Is the cost of the model paid for by the benefit of using it? That has been the Achilles' heel of most tools around architectural work.

SE: Do your customers ask you for power models?

Wingard: I don’t use enough power to need a model. But my job is to control their power once they have come up with schemes, so they do ask us for power models and we have excellent ways to provide them. But that is because we build our IP in a different way. We use generators, and so a power model is just a different view for the generator. But that is an expensive way to build IP.

Ladd: In a past life I worked in a company that did performance models for ARM IP. We delivered performance models and they would say, 'Can you deliver a power model, as well?' I would have to respond that it didn't exist at the time. Sixty percent to seventy percent of what they were building was third-party or legacy IP, and they needed models for that. But there was a big gap.

Brunet: There is still a big gap. You mentioned doing it early on at the architecture level, but very few people do this because they can’t rely on the model. If you make architectural decisions with the wrong model, it is really bad.

Wingard: First you have to have the model. Someone has to create it and validate it to ensure it has some meaning.

Ladd: I don’t think consumers want a model unless it has been validated by the IP provider.



Kev says:

Can’t argue with “most underdeveloped”, it has been for decades now.

Usual suspects, don’t remember seeing any of them at Verilog-AMS or SystemVerilog Discrete Modeling committees actually helping develop the power modeling capabilities in standard verification flows.

It’s easy enough to create power models from PDKs and SPICE cell libraries, but there’s no support for fast AMS modeling in the digital simulators (also fairly easy to do).

DVFS and body-biasing support have been ignored since before SystemVerilog got going.
