Determining Where Power Analysis Matters Most

The need for accuracy varies greatly, depending on where in the design flow power analysis is applied and on the overall system architecture.


How much accuracy is required at each stage of power analysis is becoming a subject of debate, as engineering teams wrestle with a mix of new architectures, different use cases and increasing pressure to get designs out on time.

The question isn’t whether power is a critical factor in designs anymore. That is a given. It is now about the most efficient way to tackle those issues, as well as where in the design flow that level of detail needs to be addressed. And it is becoming increasingly important as more processing moves closer to the edge to pre-process a growing volume of data.

“This is the one that’s going to consume your battery, and what it turns into also will create heat and raise the temperature, along with other consequences,” said Jerry Zhao, product management director at Cadence. “This is why accuracy for power is the ultimate goal to consider.”

That has to be set in the context of a design flow that is quite long, and at certain levels people may care less about absolute power numbers and more about relative measurements. “That happens at the very front end of the design architecture, in RTL code, because that’s where power is still analyzed relatively accurately at its own level,” said Zhao. “But it’s not as accurate as when you come to power signoff.”

The biggest challenge of power analysis today stems from technology on the manufacturing side.

“With finFET, things get smaller and smaller in terms of nanometers, so one of the driving forces is saving power from one generation to another. When you have finFET, that’s revolutionary compared to planar CMOS, and the indicator there is that leakage power is reduced dramatically,” Zhao explained. “When you come from the early era of finFET to today’s 7 and 5nm, leakage is further reduced, but it’s still something that will always happen — and the entire industry is always looking for ways to reduce that. There are design and manufacturing ways to reduce that power, and that is where power analysis becomes more important. We have so many gadgets all powered by batteries, so we want to cut the power. A lot of times that leakage power stays the longest in the operation of, for example, cell phones. Cutting down the leakage here will have a direct impact on the life of the battery that determines how many days, how many hours you can use your cell phone. Dynamic power is the type of power that makes the cell phone hot, and this must be analyzed to capture where the hotspots are coming from and whether the localized power consumption is going to destroy your function for that piece of silicon. In both cases, accurate power analysis is desired.”
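Zhao’s point about leakage dominating battery life can be sketched with a back-of-the-envelope calculation. Every number below (battery capacity, leakage and active currents, duty cycle) is purely hypothetical, chosen only to show why cutting always-on leakage directly stretches battery life:

```python
# Illustrative battery-life estimate: always-on leakage plus duty-cycled
# dynamic draw. All figures are made up for illustration.

def battery_life_hours(capacity_mah, leakage_ma, dynamic_ma, active_duty):
    """Average current = always-on leakage + dynamic draw scaled by duty cycle."""
    avg_current_ma = leakage_ma + dynamic_ma * active_duty
    return capacity_mah / avg_current_ma

# A phone that is active 5% of the time: standby life is set by leakage.
base = battery_life_hours(capacity_mah=3000, leakage_ma=5.0,
                          dynamic_ma=400.0, active_duty=0.05)
# Halving leakage alone stretches the same battery noticeably.
improved = battery_life_hours(capacity_mah=3000, leakage_ma=2.5,
                              dynamic_ma=400.0, active_duty=0.05)
print(f"{base:.0f} h -> {improved:.0f} h")
```

Because the device idles most of the time, the always-on leakage term dominates the average current, which is why reducing it has such a direct effect on how many hours or days the battery lasts.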

The most accurate pre-manufacturing estimates of power consumption are available near the end of the design process, but at this stage there is very little that can be done to correct the design if the power target is missed, said Annapoorna Krishnaswamy, product marketing manager for the Semiconductor Business Unit at ANSYS. “At most, you might be able to tweak the design to reduce power by a few percent at this late stage. Big reductions in power require changes in system architecture or in micro-architecture as early in the design process as possible. The challenge here is that the earlier the power estimation is made, the less accurate it is, because implementation details are not yet clear. Less accuracy means less certainty in the effectiveness of any power fixes you might make. The generally agreed balance-point in this tradeoff is to focus optimization at RTL. Absolute accuracy compared with pre-manufacturing signoff is still quite good, typically within 15%. Relative accuracy, which compares power in 2 or more very similar cases — such as the same design before and after applying a power fix — can be within 5%, similar to power signoff accuracy. The reason is quite simple—in comparisons, many of the implementation details will largely cancel, at least around those areas where RTL fixes are typically considered.”
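Krishnaswamy’s distinction between absolute and relative accuracy comes down to error cancellation, which a minimal numeric sketch (with a hypothetical estimator bias and hypothetical power numbers) makes concrete:

```python
# Sketch of why relative RTL power accuracy beats absolute accuracy:
# a systematic estimation bias largely cancels when comparing two very
# similar runs of the same design. All numbers are hypothetical.

def rtl_estimate(true_mw, bias=1.15):
    """Model an RTL estimator that is off by a fixed systematic bias."""
    return true_mw * bias

before_true, after_true = 100.0, 90.0      # a fix saves 10% in reality
before_est = rtl_estimate(before_true)     # ~115 mW -> 15% absolute error
after_est = rtl_estimate(after_true)       # ~103.5 mW

abs_error = before_est / before_true - 1.0   # 0.15
saving_est = 1.0 - after_est / before_est    # the bias cancels in the ratio
print(f"absolute error {abs_error:.0%}, estimated saving {saving_est:.0%}")
```

The ratio of two estimates made with the same systematic bias recovers the true 10% saving, even though each estimate individually is 15% off — the cancellation Krishnaswamy describes.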

Getting more granular with power
Krishnaswamy noted that RTL power vs. gate power accuracy is a key concern in many designs. Front-end power tools are meant to bridge the gap between RTL power and signoff numbers by accounting for physical effects, such as clocks and wire capacitance, while delivering 20X faster turnaround time compared with traditional gate-level methodologies. This enables designers to identify power issues and make decisions early and reliably.

But power estimation accuracy may not be needed during the early stages of RTL design, and this is where things get less clear-cut. “The required inputs for accurate power estimation, such as SPEF and waveform data, may not be available,” said Neeraj Joshi, director of engineering at Mentor, a Siemens Business. “At this stage, RTL designers are more interested in trending and tracking power metrics with each RTL revision. Also, during early RTL development new functionality may be added to the design, and based on the power budget for each block, the RTL designer might want to reduce power at the same time.”

Therefore, consistency of power estimation is more important than accuracy of power estimation, he said. Power estimation at the early stage also helps to explore packaging decisions and refine the power constraints (UPF). Power estimation accuracy is needed for late-stage RTL (near RTL freeze) or the implementation stage in order to know the impact of heat, IR drop and signal integrity analysis.

When accuracy is needed, there are a number of tools available to determine, capture and represent the real hardware and software workloads in order to achieve the best accuracy. Joshi said emulation can be used to generate power vectors representing system-level real use-case scenarios with long emulation traces, such as a user playing a video game on a mobile device. “Typically, gate-level power analysis using these power vectors, with clear power intent and fully back-annotated parasitic information, would enable accurate power estimation.”

Accurate power analysis also is needed for systems with very small and limited power budgets (µW to nW), such as IoT nodes that utilize energy harvesting, said Björn Zeugmann, a member of the Integrated Sensor Electronics research group at Fraunhofer’s Engineering of Adaptive Systems Division. “Here, for example, very small leakage currents multiplied by the number of devices can cause significant power loss, even in idle mode. Only with an accurate power estimation can an ultra-low power system be designed securely. It’s necessary to care about the worst case, which means the highest current flowing, in order to construct resilient structures. Since this worst case will not be the most common use case over time in most applications, a realistic use case is needed for simulation to get a realistic power analysis. This real case can, for example, be obtained from mission profiles of previous work, and it is needed for lifetime estimates in systems with battery cells. But an accurate power analysis is not important in system designs in which power consumption, self-heating and other related effects play a secondary role.”
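Zeugmann’s multiplication of tiny leakage currents across many devices is simple arithmetic, but worth making explicit. A minimal sketch with assumed figures (50 nA leakage per device, 200 devices, a 10 µW harvester) shows how an apparently negligible per-device leak can exceed the harvested budget even in idle mode:

```python
# Hypothetical IoT budget check: per-device leakage times device count,
# converted to microwatts at the supply voltage. All figures are assumed.

def idle_power_uw(leakage_na_per_device, n_devices, vdd=1.8):
    """Total idle power in microwatts: I_leak * N * Vdd."""
    total_na = leakage_na_per_device * n_devices
    return total_na * 1e-9 * vdd * 1e6  # nA -> A, then W -> uW

harvested_uw = 10.0  # assumed energy-harvester output
idle = idle_power_uw(leakage_na_per_device=50, n_devices=200)
print(f"idle {idle:.1f} uW vs {harvested_uw} uW harvested")
```

At these assumed numbers the idle draw alone (18 µW) already outstrips the harvester, which is exactly why sub-µW designs cannot be signed off without accurate leakage analysis.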

In addition, accurate in-chip monitoring can be key to implementing enhanced die optimization. “The advanced node engineering community is well aware of the relationship between power consumption and supply voltage of CMOS logic, but being able to reduce the supply by even a few percent, based on that particular die’s process point and the environmental conditions that allow it, will result in considerable power savings. The same is true with performance if a given clock speed can be met with a lower supply. However, none of this is possible if the monitors are not accurate,” noted Ramsay Allen, vice president of marketing at Moortec.
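The supply-voltage relationship Allen refers to follows from the standard dynamic power model, P ≈ αCV²f: dynamic power scales with the square of the supply. A minimal sketch with illustrative voltages shows why trimming the supply by even a few percent matters:

```python
# Dynamic CMOS power scales roughly with V^2, so a small supply reduction,
# where the die's process point allows it, pays off. Voltages are
# illustrative only.

def dynamic_power_ratio(v_new, v_nom):
    """Dynamic power relative to nominal, assuming P proportional to V^2."""
    return (v_new / v_nom) ** 2

# Trimming a 0.75 V supply by 4% based on in-chip monitor readings:
saving = 1.0 - dynamic_power_ratio(0.72, 0.75)
print(f"~{saving:.1%} dynamic power saved")
```

A 4% supply reduction yields roughly an 8% dynamic power saving under this model — but only if the monitors are accurate enough to show that the margin exists.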

Embedded in-chip monitoring is an essential part of the system design, he stressed, in that the costs of advanced node technologies are continuing to increase. “We are already starting to see a fragmentation, with the really advanced nodes becoming more niche for those devices that really need the power and performance, such as AI, ML, 5G and data center high-performance computing. For these bleeding-edge nodes, performance optimization, power and reliability will need to be an integral part of the architecture to ensure the cost of such expensive technologies is minimized.”

For particular applications, such as blockchain, AI and IoT, Allen said core supplies are being driven extremely low. “This creates a vulnerability for the logic and its operation, in particular a vulnerability to dynamic IR drop and transient events on the supply. To squeeze power performance, knowing actual core supply levels (accuracy) at a particular moment in time (latency) is becoming more important for power management schemes.”

Assessing power in context
Power analysis is no longer a single-shot analysis. It spans the entire design cycle, but not equally.

“If you don’t really want to go to the system level, even at the chip level, you need to consider that from the early days of the architecture,” said Cadence’s Zhao. “That’s why there are low power design methodologies available.”

From the very early stages, emulation can be used to analyze power itself, he said. “It can analyze that piece of architecture, whether it consumes a lot of peak power, where power reaches its peak, and under which function. Those are the things that a design team needs to consider in terms of power. Then, when you get into the silicon-level front-end designs, that’s where high-level RTL power synthesis comes into play to make sure that from there the power is well planned out. Once you get into the implementation (place-and-route), this is where a lot of low-power methodology plays out, cutting down the leakage power, turning off the clock, sometimes turning off the power grids. All those technologies combined give a way to reduce the power at the implementation stage. After implementation, the design is fully placed and routed, and then a power signoff run is performed.”
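The implementation-stage levers Zhao lists — clock gating and power gating — can be sketched with a toy power model. Every coefficient below is invented for illustration; real flows derive these numbers from library data and measured switching activity:

```python
# Toy block power model: dynamic power P = a*C*V^2*f, scaled by the fraction
# of clocks left ungated; leakage drops to zero when the block is power-gated.
# All coefficients are made up for illustration.

def block_power_mw(cap_pf, vdd, freq_mhz, activity, leak_mw,
                   clock_gated_frac=0.0, power_gated=False):
    """Return total block power in mW under simple gating assumptions."""
    dyn = activity * cap_pf * 1e-12 * vdd**2 * freq_mhz * 1e6 * 1e3  # mW
    dyn *= (1.0 - clock_gated_frac)      # clock gating cuts switching
    leak = 0.0 if power_gated else leak_mw
    return dyn + leak

base = block_power_mw(cap_pf=2000, vdd=0.8, freq_mhz=1000, activity=0.2,
                      leak_mw=5.0)
gated = block_power_mw(cap_pf=2000, vdd=0.8, freq_mhz=1000, activity=0.2,
                       leak_mw=5.0, clock_gated_frac=0.6)
print(f"{base:.1f} mW -> {gated:.1f} mW with 60% of clocks gated")
```

Even this crude model shows the shape of the tradeoff: clock gating attacks the dynamic term, while power gating is the only lever that touches the leakage floor when a block is idle.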

Finally, at the system level, power is even more critical because there are more components that can affect the overall power and thermal budget, particularly when more components are being stressed by always-on functions or intensive processing, such as in AI or machine learning chips, or even in future 5G chips.

“We have a virtual prototyping type of solution in conjunction with simulation solutions,” said Peter Zhang, R&D manager in Synopsys’ Solutions Group. “When somebody is doing the architecture design of the SoC, he can put different modules together and run a power analysis early on, because power consumption, which is one of the final requirements, a lot of times is ignored by the architect who included all the blocks. Tools like this will remind them from day one that the way they put the blocks together, and the way they define the data rates between the blocks, has ramifications at the end. Somebody is going to pay for the power.”

This becomes apparent with 5G and AI systems, where complex algorithms are a necessary part of the design.

“When you design a more complex algorithm, no doubt it will consume more power,” said Zhang. “When you have all this activity, something has to power all those things. It will take more power. But when you design the architecture, you can make certain tradeoffs from a technical point of view. Mathematicians like to write formulas. When you write formulas, you get a perfect result, but when you realize this in the real world, with noise and everything in the wireless world with 5G, for example, there is a level of precision that really doesn’t buy you anything except that you are just paying dollars for the electricity. So that’s a tradeoff the system architects and algorithm designers can start to make.”

Cadence’s Zhao agrees. “If you think about a cell phone, in that small case, how many devices it holds and how many PCB boards it has — all of those are consuming power one way or the other. When they pack power in that case, there isn’t much thermal escaping. It’s basically trapped, and that power consumption is a dangerous thing to some of those components that may have degraded performance, malfunctions, and sometimes even long term reliability issues because it’s too hot for too long. All of those design items need to be considered at the system level as well. You need to have electrical and thermal co-simulation, you need to have electrical power, you have thermal power from the same source, but how are you going to analyze the temperature distribution inside the casing of the cell phone, and where it’s too hot?”

Those are the power issues that extend way beyond just one piece of silicon, and which require some intelligent system-level type of planning in the early stages, as well.

Related
Raising The Abstraction Level For Power
Finding the right abstraction for power analysis and optimization comes from tool integration.
Using Less Power At The Same Node
When going to a smaller node is no longer an option, how do you get better power performance? Several techniques are possible.
Why Chips Die
Semiconductor devices face many hazards before and after manufacturing that can cause them to fail prematurely.


