Power Estimation: Early Warning System Or False Alarm?

Experts at the table, part 2: Panelists discuss where in the process power estimation is being used today and where it is needed.


Semiconductor Engineering sat down with a large panel of experts to discuss the state of power estimation and to find out whether the current levels of accuracy are sufficient to make informed decisions. Panelists included: Leah Schuth, director of technical marketing in the physical design group at ARM; Vic Kulkarni, senior vice president and general manager for the RTL power business at Ansys; John Redmond, associate technical director and low power team lead at Broadcom; Krishna Balachandran, product management director at Cadence; Anand Iyer, director of product marketing at Calypto; Jean-Marie Brunet, product marketing director for emulation at Mentor Graphics; Johannes Stahl, director of product marketing for prototyping at Synopsys; and Shane Stelmach, associate technical director and power and reliability solutions expert at TI. In part one the panelists discussed the current state of power estimation. What follows are excerpts of that conversation.


SE: Has power become a primary design concern and do EDA tools treat it as such?

Schuth: It is definitely a primary design consideration for some types of design. For GPUs it cannot be ignored; it is the limiting factor. It has to be dealt with, and we have to go as far as necessary to get the results we need. Some of the responsibility has to be on the designers to figure out which vectors need to be tested: are they doing a boot, what types of execution do they run, which applications represent their worst case? In the EDA space, the job is to work out how to get that through the tools faster.

Kulkarni: Ten or 15 years ago, you would have looked at timing performance as the number one criterion. As you go below 20nm, things get interconnected and it becomes a multi-physics problem. Power creates noise, which impacts timing, which impacts power. It is circular. At 16nm and 14nm we are seeing even more things impacting each other. For example, an automotive customer was doing a design that was creating a noise spike at around 65MHz, which is within the FM band of the infotainment system. When doing RTL power estimation on the architecture, they found a hotspot, and this led them to some architectural problems. They converted some floating-point calculations into fixed point, and this reduced the noise spectrum by 20dB. This is how the dots can get connected. We also see that at 10nm, on-chip ESD is going to be a bigger issue, and it is another problem associated with power. This is why power cannot be an afterthought. You have to think about it not just from a budgeting point of view, but in terms of its architectural effects.

Brunet: It is a challenge internally for many organizations. When you think about power, it is usually about silicon. When you think about booting the OS and running live applications, you have to talk to the emulation group, the OS group and the software group. This makes different parts of the company talk to each other. It is happening because it has to, but it is not trivial.

Balachandran: Thermal effects are also important. Thermal is not an isolated chip problem; it is a system problem. You have the package, the board and the chip, and one can impact the others. In a DAC keynote we heard that Google is putting decoupling caps on the chip for their contact lenses. They had to put them on the chip, and that is not 10nm or any other advanced process node. That is a really small type of application and has to come at very low cost. We are looking at a cost per chip of 2 or 3 cents. The problems there are not multi-physics, but that does not mean power is not important. Power is everything in those devices because you are close to having to use ambient energy. If you cannot be accurate in measuring it, how can you optimize it? That accuracy has to continue from the beginning to the end of the flow. Doing what-if exploration is not good enough; you have to have correlated results from start to finish. This is not limited to the advanced nodes. It is important there, and possibly more complex, but equally applicable to older geometries.

Stahl: It is important to consider the type of device. You may have a device that is tuned to one or two tasks, where there is little variation in the software it might run, or you may have a huge amount of application software that could be applied to the device. Even using emulation, you can only do so much. This is why architects have to start at a high level with very abstract representations and explore all of the variations, with the error bounds that are found there, to make a decision that narrows the design scope. Some of these can be rerun later in the process to measure the actuals.

Kulkarni: Another big challenge we all face is that designers create functional testbenches, and as EDA providers we use those as a starting point. This is fundamentally flawed. Two guys were walking down a dark alley and one was looking for something. The other guy asked him what he was looking for. He said that he had lost his keys. He then asked if he had lost them here, to which the other one said, ‘No, I lost them over there, but I am looking here because there is light.’ Functional vectors are the light. We are trying to do power analysis and optimization with functional vectors. The key is that we have not yet invented automatic power pattern generation (APPG).

Iyer: That is why you need a flashlight.

Kulkarni: A design engineer can start with the functional vectors to find issues, but they are not the only thing to focus on. Not a single paper has been published on APPG. It is a very difficult problem to solve. How can you put a chip into a mode that exercises all aspects of peak power, idle power, functional power, etc.? This is a huge opportunity. Then, of course, you can use functional vectors for verification and other types of analysis.

Stelmach: We are constantly looking for places to harvest the data. Emulation is one place. A Linux boot is a good place to look. RTL simulation is another. What you are really after is an understanding of where activity is being generated. You are looking at how it can be measured, how to gather statistics that point you toward the simulation vectors you should be running, or distilling it down to something lighter weight, such as toggle counts, to drive power prediction. You are also looking for sustained power in some cases, and thermal effects are an aspect of this. Other kinds of chip failures come from rapid power change, and understanding which parts of the circuit have the highest potential to generate dI/dt can be important. Traditionally we would run at very high activity rates so that we could see where the peak power potential was and then figure out which areas we needed to address.
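
As a rough illustration of the toggle-count-driven prediction Stelmach describes, the sketch below sums per-net switching activity into a dynamic power estimate using the standard half-CV-squared energy per transition. It is a minimal sketch under stated assumptions: the harvesting step, net names and capacitance values are hypothetical, not any particular tool's flow.

```python
# Minimal sketch of toggle-count-based dynamic power estimation.
# Assumes per-net toggle counts have been harvested from simulation or
# emulation, and that an effective switched capacitance per net is known.
# All names and numbers are hypothetical.

from dataclasses import dataclass


@dataclass
class NetActivity:
    name: str
    toggles: int          # 0->1 and 1->0 transitions observed in the window
    capacitance_f: float  # effective switched capacitance in farads


def dynamic_power_w(nets, vdd_v, window_s):
    """Average dynamic power over an observation window.

    Each toggle dissipates roughly 0.5 * C * Vdd^2, so average power is the
    total switched energy divided by the window length.
    """
    energy_j = sum(0.5 * n.capacitance_f * vdd_v ** 2 * n.toggles for n in nets)
    return energy_j / window_s


# Example: two nets observed over a 1 ms window at 0.8 V.
nets = [
    NetActivity("alu_out[0]", toggles=500_000, capacitance_f=2e-15),
    NetActivity("clk_gated", toggles=1_000_000, capacitance_f=5e-15),
]
print(f"~{dynamic_power_w(nets, vdd_v=0.8, window_s=1e-3) * 1e6:.2f} uW")
```

In a real flow the same activity data would also feed leakage, clock-tree and glitch terms; the point here is only that lightweight toggle statistics are enough to rank where the power is going.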

Stahl: What I have been hearing in the mobile space is that companies are converging on scenarios. They are all converging on 30 or 40 scenarios that they are trying to characterize and take through the entire design flow. They may use them in architectural studies or emulation, but they make sure that they cover these scenarios, reuse results from previous silicon, and apply them to the next-generation architecture. You have to harvest data, including data from silicon.

SE: Are spreadsheets enough for power budgets?

Redmond: No. I am not happy with spreadsheets, but they are something we have to use. We need early power estimates for PMU sizing, and for thermal and packaging. We have to have early power numbers, and the way we get them today is through spreadsheets. There should be a better way forward. For a new chip, we may be 30% off. For a design iteration we can get to about 10% to 20%.

Stelmach: So this could be used for the chip budgeting process, and then you are locked into those numbers?

Redmond: We do use-case-based power. What is the device likely to be doing? If it is a phone, how long will you listen to music, how much time is spent talking, how much time surfing the Web, how long playing Angry Birds? We then figure out the energy for those use cases and attack the major consumers. For a set-top box, what are the use cases? Identify where most of the energy is being spent and attack those first. Being within 30% is good enough, but the amount of effort it takes to be that accurate is high.
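
A minimal sketch of that kind of use-case energy budgeting might look like the following. The use cases, durations and power numbers are hypothetical placeholders, not real product data; the point is only that ranking energy per use case tells you which consumers to attack first.

```python
# Minimal sketch of use-case energy budgeting. All use cases, hours and
# power numbers below are hypothetical illustrations.

use_cases = {
    # name: (hours per day, average power in mW)
    "music playback": (2.0, 40.0),
    "voice call":     (1.0, 300.0),
    "web browsing":   (1.5, 450.0),
    "gaming":         (0.5, 900.0),
    "standby":        (19.0, 5.0),
}

# Energy per use case in mWh, sorted so the biggest consumers surface first.
energy_mwh = {name: hours * power for name, (hours, power) in use_cases.items()}
total = sum(energy_mwh.values())
for name, e in sorted(energy_mwh.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:15s} {e:8.1f} mWh  ({100.0 * e / total:4.1f}% of the daily budget)")
```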

Brunet: What happens when you have new IP?

Redmond: That creates a higher risk.

Stelmach: You are always at higher risk with newer IP. As much as you would like to think that accuracy is continuous throughout the process, if you just look at logic synthesis, for example, it often has problems deciding which Vt class you should use to solve a timing problem. It won’t figure that out until you get into physical design and implementation. If you make the wrong assumption about which Vt class is going to be selected, the design can change from being dominated by dynamic power to being dominated by leakage, or the other way around. This is why it has to be a continuum, because you are constantly getting more accurate as you go through the flow.

Brunet: It seems as if the balance is shifting more toward dynamic power, especially with finFETs, which provide better control of leakage.

Balachandran: The art of reducing power involves a dI/dt challenge. The moment you have power switches in the design, you will shut a block off and then turn it on, and there is a rush current. You have to do a dI/dt analysis to make sure that the ramp is not so fast that it will destroy the chip or create a huge IR drop in another region. Even realizing a low-power design requires accurate analysis, and using that to decide how the power switches are going to be inserted, how they are going to be chained, how you are going to wake them up and in what order. These decisions are key to making it work. It is a speed-versus-power tradeoff. A quick ramp means more current, and if you draw more current you have to manage the risks and the reliability impacts. The act of inserting circuitry for the management of power requires detailed analysis.
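
As a rough sketch of that kind of dI/dt check, the following compares enabling all power-switch stages at once against staggering them, using a crude first-order inrush model. The stage count, peak current, time constant and the model itself are illustrative assumptions, not a sign-off methodology.

```python
# Rough sketch of a staggered power-switch wake-up comparison.
# Each enabled switch stage is modeled as contributing a decaying
# exponential inrush pulse; all numbers are illustrative assumptions.

import math


def inrush_current(t_s, stage_starts_s, i_peak_a, tau_s):
    """Total rush current at time t from all stages enabled so far."""
    return sum(
        i_peak_a * math.exp(-(t_s - t0) / tau_s)
        for t0 in stage_starts_s
        if t_s >= t0
    )


def max_didt(stage_starts_s, i_peak_a=0.05, tau_s=5e-9,
             dt_s=1e-10, t_end_s=200e-9):
    """Numerically estimate the worst-case |dI/dt| over the wake-up ramp.

    The estimate is dominated by the turn-on steps, which is exactly the
    effect that staggering is meant to spread out.
    """
    worst, prev, t = 0.0, 0.0, 0.0  # no current before the first stage enables
    while t < t_end_s:
        t += dt_s
        cur = inrush_current(t, stage_starts_s, i_peak_a, tau_s)
        worst = max(worst, abs(cur - prev) / dt_s)
        prev = cur
    return worst


# Eight stages: all enabled at once versus staggered by 20 ns.
print(f"all at once: ~{max_didt([0.0] * 8):.1e} A/s")
print(f"staggered:   ~{max_didt([i * 20e-9 for i in range(8)]):.1e} A/s")
```

Staggering trades wake-up latency for a gentler current ramp, which is the speed-versus-power and reliability tradeoff Balachandran points to.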

Stahl: I will come back to why I love spreadsheets, for a different reason. They are a starting point for changing a methodology. Those who have embraced a new technology simply take the spreadsheet values and attach them to a model. Then they execute the model as part of performance emulation. So the spreadsheet is important as a capture mechanism for knowledge, and in the future that knowledge will be captured in a different environment where all the values become dynamic and are executed with use cases. Without prior knowledge of the IP blocks there is no methodology, because it is always bottom-up.
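
A minimal sketch of what attaching spreadsheet values to an executable model could look like, assuming a simple state-based power model driven by a use-case trace; the block names, power states and numbers below are hypothetical.

```python
# Minimal sketch: per-block power numbers that used to live in a spreadsheet
# are attached to a state-based model and evaluated against a use-case trace.
# Block names, states and values are hypothetical.

import csv
import io

# Stand-in for the spreadsheet: average power (mW) per block per power state.
SPREADSHEET_CSV = """block,state,power_mw
cpu,active,250
cpu,idle,15
gpu,active,400
gpu,off,0
modem,active,120
modem,sleep,2
"""


def load_power_table(csv_text):
    table = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        table[(row["block"], row["state"])] = float(row["power_mw"])
    return table


def scenario_energy_mj(power_table, trace):
    """trace: list of (duration_s, {block: state}) steps for one use case."""
    total_mj = 0.0
    for duration_s, states in trace:
        step_mw = sum(power_table[(blk, st)] for blk, st in states.items())
        total_mj += step_mw * duration_s  # mW * s = mJ
    return total_mj


power = load_power_table(SPREADSHEET_CSV)
video_playback = [
    (10.0, {"cpu": "active", "gpu": "active", "modem": "sleep"}),
    (50.0, {"cpu": "idle", "gpu": "active", "modem": "sleep"}),
]
print(f"video playback: ~{scenario_energy_mj(power, video_playback):.0f} mJ")
```

Once the values live in a model like this, the per-block numbers can later be replaced by measured or emulation-derived ones without changing the methodology, which is roughly the knowledge-capture role Stahl describes.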


