Power Optimization Strategies Widen

Different markets are heading in different directions, raising questions about whether the chip industry can effectively respond to all of those demands.


An increasing amount of electronic content in new and existing markets is creating different and sometimes competing demands for power optimization.

For the past decade, EDA has been driven by the mobile phone industry, where the emphasis is on better power analysis and optimization tools to reduce power consumption and extend battery life. While energy efficiency continues to improve, other industries are adding different pressures and have different concerns.

The plethora of power optimization strategies, and the industries using them, was one of the topics Synopsys surveyed during the SNUG 2016 event (see Fig. 1).

Fig. 1: Low Power Complexity Growth. Source: Synopsys – SNUG 2016

Figure 1 shows that each industry looks at the problem slightly differently. “The microprocessor point of view is that power and frequency are two sides of the same coin,” says Rob Knoth, product management director in the Digital & Signoff Group at Cadence. “You can modify one to achieve something in the other. When you look at other application areas, it becomes more nuanced. However, nobody wants to waste power. It is a matter of how much it costs to waste it.”

Different application areas have different reasons to optimize for power. “There are two categories of power optimization based on the power supply architecture of the application — power-constrained or energy-constrained,” explains Srikanth Rengarajan, vice president of products and business development for Austemper Design Systems. “A mobile device is different from a server. Power-constrained environments tend to play off one application unit versus the other with the overall goal of optimizing performance. Their hard limit is maximum power dissipation driven either by supply constraints or, more likely, thermal limits.”

Energy-constrained systems rarely have supply or thermal problems. “IoT developers are not worried about thermal management,” says Jeff Miller, product manager for Tanner products within Mentor, a Siemens Business. “If you consume enough power to make the device hot, your battery is already dead.”

Some power optimization techniques are universal. “If a portion of the logic is busy but the output is not being used, no matter if it is IoT or high-performance computing, eliminating that activity will help to improve power consumption,” explains Preeti Gupta, director of product management at ANSYS. “As a result, thermal concerns go down, you may be able to use a less expensive package, you may have a more economical power grid. There are repercussions. What changes is how much a particular application cares about power and what it is willing to do to address it.”
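The arithmetic behind eliminating wasted activity is the standard dynamic-power relation P = α·C·V²·f, where α is the activity factor. The sketch below uses illustrative numbers (not measured silicon data) to show how gating away toggling whose results are never used lowers the effective activity factor, and with it the switching power:

```python
# Illustrative dynamic-power estimate: P_dyn = alpha * C_eff * Vdd^2 * f.
# All numbers here are assumptions for illustration, not measured data.

def dynamic_power(alpha, c_eff, vdd, freq):
    """Switching power in watts: activity factor * effective switched
    capacitance * supply voltage squared * clock frequency."""
    return alpha * c_eff * vdd**2 * freq

C_EFF = 2e-9   # effective switched capacitance, farads (assumed)
VDD = 0.9      # supply voltage, volts
FREQ = 500e6   # clock frequency, hertz

busy = dynamic_power(alpha=0.20, c_eff=C_EFF, vdd=VDD, freq=FREQ)
# Gating clocks to logic whose output is unused drops its contribution
# to the activity factor, leaving only useful work toggling.
gated = dynamic_power(alpha=0.12, c_eff=C_EFF, vdd=VDD, freq=FREQ)

print(f"ungated: {busy*1e3:.1f} mW, gated: {gated*1e3:.1f} mW")
# -> ungated: 162.0 mW, gated: 97.2 mW
```

The same relation explains the knock-on effects Gupta describes: less switching power means a cooler die, a cheaper package, and a less heavily engineered power grid.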

Knoth agrees. “Everyone does a certain amount of power optimization, things like Vt optimization or Vt recovery prior to signoff timing, and multi-Vdd approaches are becoming more standard. Beyond those common approaches, you have to start looking at individual markets.”

IoT pressures and opportunities
IoT is becoming the most aggressive industry for power and energy optimization, and it does have some characteristics that make power management easier. “For IoT, the aim is to minimize power usage while maximizing battery longevity and also communication range,” says Simon Forrest, director of connectivity and connected home at Imagination Technologies. “An IoT device may only need to send a few kilobytes of data infrequently, so the SoC designer seeks radio technology that can quickly switch on, communicate the required data over an IoT protocol, then drop back into a very low-power sleep mode to conserve battery life.”
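The payoff of the sleep-mostly strategy Forrest describes is easy to see with duty-cycle arithmetic. The sketch below uses assumed (not vendor-specified) currents and battery capacity to estimate average current and battery life for a device that wakes briefly to transmit and sleeps the rest of the time:

```python
# Duty-cycle energy budget for a sleep-mostly IoT node.
# All currents, times, and the battery capacity are illustrative assumptions.

SLEEP_UA = 2.0        # deep-sleep current, microamps (assumed)
ACTIVE_MA = 15.0      # radio-on current, milliamps (assumed)
ACTIVE_S = 0.5        # seconds awake per wake-up
PERIOD_S = 600.0      # one transmission every ten minutes
BATTERY_MAH = 220.0   # coin-cell capacity, mAh (assumed)

# Time-weighted average current in milliamps.
duty = ACTIVE_S / PERIOD_S
avg_ma = duty * ACTIVE_MA + (1 - duty) * (SLEEP_UA / 1000.0)

life_hours = BATTERY_MAH / avg_ma
print(f"average current: {avg_ma*1000:.1f} uA, life: {life_hours/24/365:.1f} years")
# -> average current: 14.5 uA, life: 1.7 years
```

Note how the sleep current, tiny as it is, still contributes a meaningful share of the average: this is why deep-sleep leakage gets so much attention in IoT silicon.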

These devices are usually smaller than a mobile phone, incorporating several sensors plus analog components with a smaller amount of digital circuitry. And they often are developed at larger process nodes. “IoT edge devices often select foundry technologies that are larger (around 40nm), or devices that are using SOI technologies, which provide the benefit of optimizing back bias, where the expected junction temperature ranges can be hovering around room temperature,” says Oliver King, CTO for Moortec. “This means that the applications may require the device to operate only between 0°C and 70°C, especially if the IoT application is for wearables, battery cell sourced sensors and other low power applications.”

One common strategy is to reduce voltage. “A number of microcontroller vendors are looking at operating in the sub-threshold area, which means operating at far lower voltages,” points out Drew Wingard, CTO for Sonics. “You can drastically slow down the clocks and reduce the voltage, so you get the same amount of work done at lower energy. Leakage also gets better at lower voltage. The challenge is that it requires a lot of characterization that you don’t get for free. It also increases other design concerns, such as noise margins. When running at 0.6V, how much noise margin do you have? How much attention do you have to pay to the stability of the supply?”
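The reason Wingard's trade works is that dynamic energy per operation scales with the square of the supply voltage, while slowing the clock only stretches the time, not the energy, of each operation. A small sketch with assumed capacitance values makes the point:

```python
# Dynamic switching energy per operation: E = C * V^2.
# Lowering Vdd cuts energy per op quadratically, which is why
# near/sub-threshold operation pays off even though the clock must slow
# down: the same work gets done at lower energy.
# The capacitance value is an illustrative assumption.

def energy_per_op(c_eff, vdd):
    """Dynamic switching energy per operation in joules."""
    return c_eff * vdd**2

C_EFF = 1e-12  # switched capacitance per operation, farads (assumed)

e_nominal = energy_per_op(C_EFF, 0.9)  # nominal supply
e_lowv = energy_per_op(C_EFF, 0.6)     # near-threshold supply

print(f"energy ratio at 0.6V vs 0.9V: {e_lowv / e_nominal:.2f}")
# -> 0.44: under half the dynamic energy for the same operation
```

The catch, as the quote notes, is everything this simple model leaves out: characterization effort, shrinking noise margins, and supply stability all get harder as the voltage drops.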

Magdy Abadir, vice president of marketing for Helic, agrees. “Power optimization can lead to operating at low voltage and signals are operating with very thin margins for error. Small deviations caused by electromagnetic crosstalk can lead to unexpected silicon failure or loss of yield. Reliability and aging effects can erode the margins further, and hence electromagnetic crosstalk effects can cause chip failures.”

Sleep mode is key for these devices. “IoT devices are even more power aware than mobile, with deep sleep modes,” points out Stephen Crosher, CEO of Moortec. “In addition, low voltage operation is also a typical requirement. As a result, there are new IPs being designed to support this growing market.”

IoT also adds a number of challenges. “It is always a challenge when you deploy a new technology and do not own the ecosystem,” says Mentor’s Miller. “There are advantages to being vertically integrated. Systems companies that are deploying IoT edge devices typically buy stuff off the shelf and put it together. So they buy a microcontroller and sensors and radio and integrate them on a PCB.”

This can create problems, however. “When you have a food chain of developers from the chip guy who builds reference software, a product or board-level designer and then system integrators who deploy and activate the system, you have issues that have to be dealt with,” says Kevin McDermott, vice president of marketing for Imperas. “For example, a service provider may embellish or add onto software that was given to them. If someone makes a change, there is a ripple effect that is multi-dimensional.”

McDermott provides an example where users are prohibited from making changes in a hardware/software module. “Consider FCC regulations related to WiFi routers. Open source is great, but you have to certify the box to be compliant with the radio specifications. If you are able to manipulate the software and violate the spectrum allocations or the power levels, the FCC gets worried. Firewalls are created between different systems so that you can modify some software while unintended consequences are avoided. You have to ensure that you don’t force the system into an abnormal behavior that puts it out of spec or consumes more power than is reasonable.”

The desire for lower power is driving many systems companies to develop their own IoT edge devices. “People have discovered that they can get much lower power numbers by doing a custom chip rather than piecing it together at the board level,” says Miller. “We have seen up to 70% reductions in power. That has a lot to do with eliminating features that you don’t use or need, since they add complexity and power consumption. This is another example of hardware including features that software never uses. They do no good, and you are probably paying for them in power and cost.”

Other concerns
For servers and networking, power concerns and tradeoffs are very different. “They tend to take down entire boards,” says Wingard. “So chip-level power management doesn’t matter very much. While they do power cycle to match their load, they do it at a much coarser grain. Servers are interested in techniques that help them to minimize dynamic power. They want to achieve a certain throughput, so frequency is the desired control. And they want to be as close as possible to the optimum energy for that frequency, which means the lowest voltage that allows them to achieve it. That voltage is temperature-dependent, and also depends on whether you finished up with fast transistors or slow transistors on the wafer.”
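The control loop Wingard describes can be sketched as a lookup against a characterization table: for a target frequency, pick the lowest voltage that still meets it. The table below is hypothetical; real tables come from silicon measurement, and a separate table (or a sensor-driven correction) would exist per temperature and process bin:

```python
# Sketch of server-style voltage selection: given a target frequency,
# return the lowest supply voltage that still achieves it.
# The characterization data is hypothetical, for one assumed
# temperature/process bin; real silicon would be measured.

# (voltage_volts, max_freq_hz) pairs, sorted by ascending voltage.
CHARACTERIZATION = [
    (0.70, 1.2e9),
    (0.80, 1.8e9),
    (0.90, 2.4e9),
    (1.00, 2.9e9),
]

def min_voltage_for(target_freq_hz, table=CHARACTERIZATION):
    """Return the lowest characterized voltage whose maximum frequency
    meets the target, or None if no operating point qualifies."""
    for voltage, max_freq in table:
        if max_freq >= target_freq_hz:
            return voltage
    return None

print(min_voltage_for(2.0e9))  # -> 0.9
```

Because the achievable frequency at a given voltage shifts with temperature and with the fast/slow transistor lottery, on-die sensors let the system select from the right table at runtime rather than guard-banding for the worst case.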

This is one place where having on-board sensors can allow you to get closer to the optimum.

The mobile and automotive industries have hybrid concerns. Some parts look more like IoT and others like servers. “While power is important to them, the optimization process is very different,” says Miller. “Mobile designs are worried about leakage, but they are also worried about dynamic power and providing a user experience that is interactive, while staying within a comfortable thermal envelope. When designing an application processor for a cell phone, they are balancing performance against leakage against dynamic consumption against thermal management.”

The same is true for automotive. “A tire pressure monitoring system sensor that is running inside the tire has no opportunity to get power from the battery,” says Miller. “The device has to last the lifetime of the tire on one charge. They look a lot like IoT devices. At the other extreme, the proximity sensors mounted in the bumper would have a very different approach to power management because they are on at particular times, such as when in reverse, and they can take advantage of the main battery.”

Many components of a car are modular in this fashion. “Idle power is not a problem for modular pieces because the device is either active or off,” says Benoit de Lescure, director of application engineering for ArterisIP. “An example would be a complex SoC that runs the car’s autopilot. When it is active, you can be sure that all of the silicon on the SoC is going to be active. Images will arrive at a fixed rate, so there are no idle periods where power can be saved. Developers will attempt to work at the algorithmic level to minimize data transfers to and from external memory, or add a cache to reduce the time spent going to external DRAM. This saves on power.”
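The cache argument above comes down to access-energy arithmetic: every hit that stays on chip avoids the much higher cost of an external DRAM access. The per-access energies below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope estimate of memory-access energy with and
# without a cache. Per-access energies are illustrative assumptions.

E_DRAM = 10e-9   # joules per external DRAM access (assumed)
E_CACHE = 1e-9   # joules per on-chip cache access (assumed)

def memory_energy(accesses, hit_rate):
    """Total access energy: hits are served on chip; misses pay for the
    failed cache lookup plus the external DRAM access."""
    hits = accesses * hit_rate
    misses = accesses - hits
    return hits * E_CACHE + misses * (E_CACHE + E_DRAM)

no_cache = 1_000_000 * E_DRAM
with_cache = memory_energy(1_000_000, hit_rate=0.8)
print(f"savings: {(1 - with_cache / no_cache):.0%}")
# -> savings: 70%
```

This is why the quote pairs algorithmic work with caching: raising the hit rate, or avoiding the transfer altogether, attacks the most expensive term in the budget.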

With the increasing amount of electronics in the car, power is a growing concern. “While the battery is charged from the alternator, there is a tsunami of additional electronics now being added into vehicles,” says Imagination’s Forrest. “Infotainment systems, digital dashboards and ADAS systems are becoming commonplace and there is a limit to the power you can draw overall. Therefore, SoC efficiency will remain a primary consideration for the semiconductor designers.”

This is not the only additional consideration. “Automotive can have much higher operating temperatures,” points out Moortec’s Crosher. “Robust, higher temperature-tolerant designs are required to support the varying expectations of junction temperature for different automotive grades (grade 0, grade 1, etc.). Today, foundries using very advanced nodes are now supporting automotive customers. That compounds the problem associated with higher operating temperature, increased tracking density and higher current densities, which can in turn lead to higher self-heating.”

Concerns for power have led to some interesting developments. “Multi-bit flip-flops are an interesting case,” says Knoth. “It started in mobile and is now being seen in other very low-power designs. By combining multiple flip-flops, you reduce the loading on the clock network which lowers area and the amount of buffering required. It ripples throughout the design flow, including synthesis, and place and route. If you do the design with individual point tools or pieces, you will never be able to leverage this. The whole ecosystem has to be integrated to get the full benefit.”

Trusting software
Debate is growing over how much reliance should be placed on software for power management. Forgetting to turn something off can waste all of the power saved through optimization.

“Software people are struggling to use all of the features that the hardware team puts in there,” says ArterisIP’s de Lescure. “It applies to industries where chips are changing very often, such as mobile. They develop new chips at a fast pace and there is little time to optimize the hardware/software interface above and beyond what has been defined by the standards. It is a bit different if you are looking at a market such as automotive, where the pace of chip development is slower and they have additional regulations and constraints for safety. In that market you can expect people to spend more time optimizing the software to make use of the hardware features to save on power.”
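One widely used software guard against "forgetting to turn something off" is to reference-count each power domain, so it stays powered only while at least one client holds it and is dropped automatically when the last client releases it. The sketch below is a minimal illustration of the pattern; the domain name and the hardware hook are hypothetical:

```python
# Minimal reference-counted power-domain manager. The pattern ensures a
# shared resource is powered while any driver needs it and is switched
# off when the last one releases it, so no single driver has to
# "remember" to turn it off. Names and hardware hooks are hypothetical.

class PowerDomain:
    def __init__(self, name):
        self.name = name
        self.refs = 0
        self.powered = False

    def get(self):
        """Power the domain on first use; later users just count."""
        if self.refs == 0:
            self.powered = True  # real code would write a hardware register
        self.refs += 1

    def put(self):
        """Release one reference; power off when nobody needs the domain."""
        assert self.refs > 0, "unbalanced put()"
        self.refs -= 1
        if self.refs == 0:
            self.powered = False

radio = PowerDomain("radio")
radio.get()           # driver A needs the radio
radio.get()           # driver B too
radio.put()           # A is done; B still holds it, so it stays on
print(radio.powered)  # -> True
radio.put()           # last user gone: domain powers off automatically
print(radio.powered)  # -> False
```

Centralizing the on/off decision this way is one reason slower-moving markets like automotive can afford to wire software more tightly to the hardware's power features: the policy lives in one place instead of being scattered across every driver.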

Software often is developed with the assumption that it can be patched later. “It is not like the classic embedded, where you design and debug and then ship millions and you never touch them again,” says McDermott. “With the connectivity that devices now have, you have the ability to patch, update and feedback real-world experiences. The military idiom applies: No plan survives first contact with the enemy. When you build a plan and you think you are going to deploy something, nobody has tested the device in that environment before. As you deploy, you are learning about the environment and what is going on.”

This capability has proven critical in some high-profile cases. “It allows you to identify issues, such as premature aging of the device, and to stop it from producing damaging behavior,” says Miller. “The iPhone is a good example of that. By down-clocking, they prevented a brown-out of the whole system that would have caused it to reboot. It is good that they had the ability to do that in software and could send a patch over the air to change the way in which the system performed from what they had learned in the field.”

Related Stories
Turning Down The Voltage
SoC complexity is making it more difficult to combine functional performance with demands for lower power.
Processing Moves To The Edge
Definitions vary by market and by vendor, but an explosion of data requires more processing to be done locally.
IP And Power
How can power be optimized across an entire chip when most of a chip’s content comes from third-party IP?
Closing The Loop On Power Optimization
Minimizing power consumption for a given amount of work is a complex problem that spans many aspects of the design flow. How close can we get to achieving the optimum?
