New Approaches To Low Power Design

There is work to be done in energy-efficient architectures, power modeling and near-threshold computing, but there are many more options available today.


While Moore’s Law continues to drive feature size reduction and complexity, a whole separate part of the industry is growing up around vertical markets in the IoT. While these two worlds may be different in many respects, they share one thing in common—low power design is critical to success.

How engineering teams minimize power in each of these markets, and even within the same market, can vary widely. At least some of the IoT will be powered by energy harvesting, with ultra-low-power sensor networks and MCUs serving a “collect” function, while some of the focus is on energy-efficient data center CPUs serving a “correlate” function.

Much of this is a testament to the creativity of the engineers behind these technologies, with some of the biggest bursts of creativity showing up in RTL and system-level innovations in ultra-low-power flows. That includes physical- and timing-aware RTL models and a best-practices methodology for better power analysis and predictable RTL-to-gate accuracy, so that RTL designers can make design decisions with confidence, said Vic Kulkarni, vice president and general manager at Ansys.

There are a number of techniques and tools that can be used to achieve these ends (a sketch of the first appears after this list), including:

• Computing clock-gating efficiency metrics;
• Block level and architectural power reduction guidance;
• More efficient root-cause analysis of hot spots at RTL;
• Guided power reduction, and
• Computing RTL power profiles for real-life application scenarios, such as OS and firmware boot-up and 4K video streams.
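
As a rough illustration of the first item, clock-gating efficiency boils down to asking how often a register's clock toggles without new data being loaded. The sketch below, with hypothetical register names and counts, computes one common formulation of the metric from per-register activity such as an RTL simulation might produce; a production tool would weight it by clock-tree and register power.

```python
# Minimal sketch of a clock-gating efficiency (CGE) metric. Assumes
# per-register activity counts extracted from an RTL simulation trace;
# the register names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class RegisterActivity:
    name: str
    total_cycles: int    # clock cycles simulated
    enabled_cycles: int  # cycles in which the register loaded new data

def clock_gating_efficiency(regs: list[RegisterActivity]) -> float:
    """Fraction of register-clock events that carry no new data and
    could therefore be gated off (or already are)."""
    total = sum(r.total_cycles for r in regs)
    enabled = sum(r.enabled_cycles for r in regs)
    return (1.0 - enabled / total) if total else 0.0

regs = [
    RegisterActivity("fifo_wr_ptr", total_cycles=10_000, enabled_cycles=1_200),
    RegisterActivity("ctrl_state",  total_cycles=10_000, enabled_cycles=9_500),
]
print(f"CGE = {clock_gating_efficiency(regs):.1%}")  # ~46.5% of clocking carries no new data
```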

Krishna Balachandran, product management director at Cadence, said there is an explosion of creativity showing in all aspects of low power design, starting at the system level down and stretching all the way to the gate level.

“Engineering teams are trying to save power with architectural techniques, employing different kinds of specialized IP, and they’re also employing EDA tools that can predict the power up front so that they can massage it, following that with EDA tools that can optimize the power,” Balachandran said. “They are creative in the way they create libraries of cells that might use lower power, so there are some custom design cells at a standard cell level. Finally, there’s also the process node techniques, not so much coming from the user side but from the foundries. The foundries are specializing in reducing leakage power with things like SOI, and are back-porting some of the established process nodes because many IoT devices are using those nodes (40 or 65nm nodes). There are foundry offerings that give a low leakage and low power process, which is yet another aspect of the creativity to reduce the power.”

He noted that in the past there was not a lot of focus on reducing the power of devices connected to a power supply. An appliance, for instance, could get away with consuming a fair amount of power and still be considered acceptable. With smart energy design coming into the picture [for more on the topic of phantom power, see “Chasing After Phantom Power”] and European countries embracing standards that mandate extremely low power consumption, that is changing.

Not a simple decision
In terms of how design teams approach low-power/power-aware design today, Alan Gibbons, power architect at Synopsys, noted that selecting and implementing power management strategies and low power design techniques always involves a series of tradeoffs, whether that be performance, area, design schedule, effort, cost or risk.

“Gaining energy efficiency always comes at a price, and the design team’s challenge is to determine what level of energy efficiency is affordable. For example, implementing a full dynamic voltage and frequency scaling (DVFS) solution for a platform can yield considerable energy savings, but it is a non-trivial system design problem and typically requires the integration of complex software components for workload monitoring and prediction, high-end voltage regulation, elaborate low power implementation including intelligent partitioning of logic to optimize DVFS returns (voltage headroom for logic versus memory, for example) as well as a rigorous many-corner sign-off methodology. For this level of effort, design teams need to be fairly certain of returns they will see in terms of energy efficiency for the silicon process they are targeting,” Gibbons explained.
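
Gibbons' tradeoff can be made concrete with the first-order relationship P_dyn ≈ C·V²·f. The sketch below, using invented operating points and capacitance, compares the energy to finish a fixed block of work under a deadline; it is a toy model, not a real DVFS governor.

```python
# First-order DVFS illustration using P_dyn ~ C * V^2 * f. The operating
# points, capacitance, and deadline are illustrative, not from any real part.
C_EFF = 1e-9  # effective switched capacitance in farads (assumed)

def energy_for_work(cycles: int, v: float, f_hz: float) -> float:
    """Energy (joules) to execute `cycles` at voltage v and frequency f."""
    p_dyn = C_EFF * v**2 * f_hz   # dynamic power
    runtime = cycles / f_hz       # seconds to finish the work
    return p_dyn * runtime        # == C_EFF * v^2 * cycles (f cancels)

WORK = 100_000_000   # cycles of work to complete (assumed)
DEADLINE = 0.2       # seconds (assumed)

for v, f in [(1.0, 1.0e9), (0.8, 0.7e9), (0.6, 0.5e9)]:  # (volts, Hz)
    runtime = WORK / f
    ok = runtime <= DEADLINE
    print(f"{v:.1f} V @ {f/1e9:.1f} GHz: "
          f"E = {energy_for_work(WORK, v, f)*1e3:.1f} mJ, "
          f"t = {runtime*1e3:.0f} ms {'(meets deadline)' if ok else '(misses)'}")
```

In this dynamic-only model the frequency cancels out of the energy, so the lowest voltage that still meets the deadline always wins; leakage, regulator efficiency, and workload prediction are what make real DVFS policies the non-trivial system design problem Gibbons describes.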

This brings up the topic of power characterization and the challenge associated with understanding exactly how much power (energy) is consumed by a piece of IP executing a set of tasks. “How a piece of silicon is used absolutely determines its energy consumption, and if we wish to invest the time, money (and take the associated risk) to employ elaborate system power management strategies and policies, then we must first have a good idea of how that piece of IP behaves (from a power perspective) when integrated into a platform running under a full or representative workload,” Gibbons continued.
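
One common abstraction for exactly this kind of characterization is a state-based power model: each IP block exposes a handful of power states with an average power per state, and a workload is reduced to state residencies. The sketch below uses invented block names and numbers; real models add voltage/frequency dependence and state-transition costs.

```python
# Minimal state-based power abstraction for workload-level energy estimates.
# All block names and power numbers are invented placeholders.
POWER_STATES_MW = {
    "cpu":   {"off": 0.0, "retention": 0.2, "active": 120.0},
    "gpu":   {"off": 0.0, "retention": 0.5, "active": 300.0},
    "modem": {"off": 0.0, "idle": 2.0,      "active": 40.0},
}

def scenario_energy_mj(residency: dict[str, dict[str, float]]) -> float:
    """Energy in millijoules given seconds spent per (block, state)."""
    return sum(POWER_STATES_MW[blk][st] * secs
               for blk, states in residency.items()
               for st, secs in states.items())

# A 10-second "video playback" scenario: GPU bursts, modem mostly idle.
playback = {
    "cpu":   {"active": 3.0, "retention": 7.0},
    "gpu":   {"active": 4.0, "off": 6.0},
    "modem": {"idle": 9.0, "active": 1.0},
}
print(f"playback energy ≈ {scenario_energy_mj(playback):.0f} mJ")
```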

Similar challenges are associated with shutdown (“run fast then stop,” or RFTS) and bias approaches to system power management, he said. There must be good, accurate data to support decisions that in many cases are as strategically important as they are technically important.

Further, the engineering team may decide that the cost and risk of employing certain styles of power management simply isn't worth it because of the uncertainty in the way the silicon will be used. “Some silicon processes suit certain power management techniques better than others, and so specific approaches and techniques can be dismissed early in the design flow,” Gibbons said. “Schedule pressures continue to dominate most leading-edge designs these days, so it is not uncommon to see design teams taking a ‘good enough for now’ approach to power management, where techniques are used to deliver a satisfactory power number, without it being as good as it could be, in order to save precious schedule time and mitigate risk.”

The next big things in low power design
While much is understood, there is still much left to work out where power is concerned.

Gibbons pointed to a number of areas of opportunity for automation and IP development, which already are being used in a number of advanced hand-crafted designs today:

Energy-efficient architecture development. There are very good solutions in place for low power IP physical implementation, and the focus is now shifting to exploring architectures to get the best balance between power and performance in a thermally throttled environment, Gibbons said. At the same time, selecting an energy-efficient architecture that is right for the job is a complex process. It requires power abstractions for IP components that balance simulation speed (abstraction/detail) against accuracy (mW). Power models are needed that are ‘as good as they need to be’ to support intelligent architectural tradeoffs. And the power-performance-thermal challenge is best solved at the system level, where there are enough degrees of freedom to make these decisions from power data fit for the task.

Software is king for energy consumption. The way in which a platform is used determines its energy consumption, so recognizing that software is king when it comes to energy efficiency is important. No matter how energy-efficient the hardware, if the software does not use it correctly then that efficiency is wasted. Software teams must be educated about their importance in developing energy-efficient platforms, and they must be given the tools and methodologies to deliver energy-efficient software.

Margin. There is still a considerable amount of energy wasted in trying to meet worst-case silicon corners because of overly pessimistic margins in library data. While the ability to tune energy efficiency by removing much of this margin could deliver considerable energy savings, true commercial success of the various margin alleviation techniques being touted over the last few years has yet to be seen.

Near-threshold computing. This design paradigm is attracting more and more attention, and commercial silicon is starting to employ it, particularly for applications where performance matters less than battery life or overall system power.

From Balachandran’s point of view, one of the next big power issues to address involves mixed signal designs. As the amount of digital content in those designs increases, they are becoming more optimized for power.

“Low power is becoming synonymous with mixed signal, and vice versa, because of the types of applications in that space, like wearables and IoT devices, many of which are mixed-signal devices,” he said. “We are seeing engineering teams that were not digitally astute in their design flows now talking about how to reduce power on the digital side. And they are employing more and more specialty EDA tools to figure out what the power will be and how to reduce it further.”

Balachandran pointed to cases of engineering groups experimenting with the voltage aspect of mixed-signal design. “Instead of operating the transistor in the normal mode of operation at Vdd, they are reducing the voltage a lot and trying to operate it in the knee of the transistor curve. In the knee of the curve, your dynamic power goes down a lot but your leakage power goes up a lot. Now you have to be very creative about how you design your circuits. You have to design them in a very robust fashion, and the rules of design turn upside down in this context.”

In the voltage realm, Kulkarni noted that the sensitivity of the delay figure of merit (FOM) increases dramatically below 700mV operation as designs move toward 14nm technology nodes, while noise margins reduced to tens of mV and higher dynamic noise (di/dt) pose significant design challenges above 2GHz operating frequencies. “Innovative techniques will be needed to operate reliably at extremely low power drain. The presence of very low duty cycles for a CPU away from data centers presents an opportunity to reduce power, such as switching gates off when they are not being used. However, innovative design techniques may be needed to provide a low-power mode that retains data, both in state machines and in memory.”
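
The low-duty-cycle opportunity Kulkarni mentions reduces to a simple average-power calculation, sketched below with assumed numbers: active power weighted by duty cycle, plus the retention power that keeps state alive, plus the amortized energy of each wake-up.

```python
# Rough duty-cycle model for a power-gated block. All numbers are assumed.
P_ACTIVE_MW    = 50.0   # block fully on
P_RETENTION_MW = 0.05   # state-retention mode (registers plus retained SRAM)
E_WAKE_UJ      = 2.0    # energy per wake-up event (rush current, re-init)

def average_power_mw(duty: float, period_s: float) -> float:
    """Average power with one wake-up per period."""
    gated = P_ACTIVE_MW * duty + P_RETENTION_MW * (1.0 - duty)
    wake_mw = (E_WAKE_UJ * 1e-3) / period_s   # uJ per period -> mW
    return gated + wake_mw

# Sensor-style workload: active 1 ms out of every second (0.1% duty).
print(f"always-on: {P_ACTIVE_MW:.2f} mW")
print(f"gated:     {average_power_mw(duty=0.001, period_s=1.0):.3f} mW")
```

At 0.1% duty it is the retention floor, not the active power, that dominates the average, which is why the data-retaining low-power modes Kulkarni describes matter so much.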

Like Gibbons, he highlighted near- and below-threshold computing as areas of opportunity. “MOSFETs operating near threshold are going to take longer to drive their loads and may cause timing failures because there are non-linear waveforms in near-threshold devices. Timing tools today assume that delay is RC-dominated, but at near-threshold voltages, gate delay dominates timing. In addition, sub-threshold circuits are very sensitive to changes in Vth, and a tiny drift caused by bias-temperature instability can cause a logic failure. Innovative techniques will be needed to design physical cell libraries and IP with three to four stages, especially those with transmission gates, which tend to have problems at near-threshold supply voltages. When Vdd goes below threshold, both dynamic and static power go down. However, there is a minimum operating point at about 200mV for energy harvesting applications, below which the circuits stop working.”
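
A first-order energy model shows why a minimum-energy operating point exists at all: dynamic energy per operation falls as C·V², but delay stretches (exponentially below threshold), so leakage current integrates over a longer and longer cycle. The sketch below sweeps an entirely illustrative device model; the parameter values, and the exact location of the minimum, are assumptions, not process data.

```python
import math

# First-order energy-per-operation model vs. supply voltage. All device
# parameters are illustrative, not from any real process.
C_EFF  = 1e-12   # switched capacitance per operation (F), assumed
I_LEAK = 2e-5    # block leakage current (A), assumed
VTH    = 0.30    # threshold voltage (V), assumed
T0     = 1e-9    # delay at 1.0 V (s), assumed
S      = 0.08    # subthreshold exponential slope (V), assumed

def delay_s(v: float) -> float:
    """Roughly 1/(V - Vth) above threshold, exponential slowdown below."""
    if v >= VTH + S:
        return T0 * (1.0 - VTH) / (v - VTH)   # normalized: delay(1.0 V) == T0
    edge = T0 * (1.0 - VTH) / S               # continuous at v == VTH + S
    return edge * math.exp((VTH + S - v) / S)

def energy_per_op_j(v: float) -> float:
    return C_EFF * v**2 + I_LEAK * v * delay_s(v)  # dynamic + leakage

points = [v / 100 for v in range(20, 101, 5)]      # sweep 0.20 V .. 1.00 V
v_min = min(points, key=energy_per_op_j)
print(f"minimum-energy point ≈ {v_min:.2f} V "
      f"({energy_per_op_j(v_min)*1e12:.2f} pJ/op vs "
      f"{energy_per_op_j(1.0)*1e12:.2f} pJ/op at 1.0 V)")
```

This is also why sub-threshold designs are so sensitive to the library and Vth-drift issues Kulkarni lists: near the minimum, small changes in delay or leakage move the energy balance significantly.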

Kulkarni pointed to recent innovation in ultra-low-power MPUs from Ambiq Micro. “Rather than using transistors that are turned all the way ‘on,’ subthreshold circuits use the leakage of ‘off’ transistors to compute in both the digital and analog domains. With most computations handled using only leakage current, total system power consumption can be dramatically reduced.”

What’s still on the table?
Power analysis and reduction techniques for ultra-low-power flows today rely on the designer’s verification test bench or on unrealistic, vector-less techniques. Being able to analyze power consumption at the RT level for real-life scenarios, such as OS and firmware boot-up and 4K video streams, using emulator-driven streaming waveforms has therefore been an important first step toward managing and reducing power.
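
Conceptually, an emulator-driven power profile is just activity-weighted energy integrated over time windows of a long, realistic run. The sketch below assumes the emulator streams out per-window toggle counts per block; the block names and per-toggle energy weights are invented placeholders that a real flow would calibrate from library and physical data.

```python
# Sketch of building an RTL power profile from streamed activity data.
# Block names and per-toggle energies are invented placeholders.
E_PER_TOGGLE_PJ = {"cpu": 1.8, "video_dec": 2.5, "ddr_ctrl": 4.0}

def power_profile_mw(windows, window_s=1e-3):
    """Yield (window index, average mW) from per-window toggle counts."""
    for i, toggles in enumerate(windows):
        pj = sum(E_PER_TOGGLE_PJ[blk] * n for blk, n in toggles.items())
        yield i, pj * 1e-12 / window_s * 1e3   # pJ per window -> mW

# Two 1-ms windows of a hypothetical boot-then-decode trace.
boot_trace = [
    {"cpu": 9_000_000, "video_dec": 0,          "ddr_ctrl": 2_000_000},
    {"cpu": 2_000_000, "video_dec": 12_000_000, "ddr_ctrl": 6_000_000},
]
for i, mw in power_profile_mw(boot_trace):
    print(f"window {i}: {mw:.1f} mW")   # spikes flag hot spots to root-cause at RTL
```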

At the same time, Kulkarni said there is a lot of innovation in coming up with power-specific test-benches to analyze and reduce power for complex SoCs, which are more realistic for device operation in the field and go beyond designers’ verification test-benches.

While low-power design gained its footing in the mobile market, due to limited battery life and thermal issues, it is reaching into many more places these days. The explosion in creativity and new techniques and tools is a testament to that, and most experts believe this trend will only become more pervasive in the future.



1 comment

Kamran H says:

Good article! The permutations of LP design for a mixed-signal SoC are quite large. One way I see overcoming those challenges is having a common (or converged) verification solution against a golden reference to keep the whole flow in check and prevent any bug escapes.
