Designing For Ultra-Low-Power IoT Devices

Battery-powered designs require complex optimizations for power in the context of area, performance and functionality.

Optimizing designs for power is becoming the top design challenge in battery-driven IoT devices, which are boxed in by a combination of requirements such as low cost, minimum levels of performance and functionality, and the need for at least some of the circuits to be always on.

Power optimization is growing even more complicated as AI inferencing moves from the data center to the edge. Even simple sensors or sensor hubs are now required to pre-process large quantities of data quickly to extract the valuable data and send it to the cloud or other processors. The upshot is an increasing emphasis on power optimization and reduction techniques and complex design tradeoffs that even a year ago never would have been considered in an IoT device.
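
As a rough illustration of that shift, consider a sensor node that runs a lightweight feature extractor locally and powers its radio only when something worth reporting occurs. The C sketch below is hypothetical; `read_sample()`, `radio_send()` and the threshold are stand-ins for whatever the platform actually provides.

```c
#include <stdint.h>

/* Hypothetical platform hooks -- stand-ins for the real sensor and radio drivers. */
extern int16_t read_sample(void);
extern void    radio_send(const int16_t *buf, uint32_t n);

#define WINDOW     64    /* samples per analysis window      */
#define THRESHOLD  500   /* activity level worth reporting   */

/* Pre-process locally; wake the radio only for windows that matter. */
void process_window(void)
{
    int16_t buf[WINDOW];
    int32_t energy = 0;

    for (uint32_t i = 0; i < WINDOW; i++) {
        buf[i] = read_sample();
        energy += (int32_t)buf[i] * buf[i] / WINDOW;  /* crude energy estimate */
    }

    if (energy > THRESHOLD)       /* transmit only the "valuable" windows  */
        radio_send(buf, WINDOW);  /* radio stays powered down otherwise    */
}
```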

One of the big challenges is understanding the target market, use cases and working environment to avoid over-engineering a solution, which adds cost, design time and unnecessary complexity. But that also means chips need to be designed in the context of a specific application, and that requires a much closer working relationship among the people who develop ICs, software, end devices and potentially even the systems they will connect to.

“The first task is to understand the environment in which the system will operate,” said Rhonda Dirvin, senior director, marketing programs in the Embedded and Automotive Line of Business at Arm. “For example, is the system always going to run on batteries or is there the opportunity for additional injections of power through wall chargers, solar or kinetic charging? Next, a full understanding of the use cases will help identify opportunities to either save power or potentially harvest power at various times. For example, in the case of a drone, can some of the kinetic energy produced by the propellers be used to charge the battery? For wearables, can the screen be turned off when not being viewed? For a mobile device, can the WiFi be turned off when the system does not rely on connectivity to the network?”

For years, power experts have been talking about the need to incorporate low power into the earliest phases of the design. Large chipmakers are finding that is now a prerequisite to getting a design to work. In fact, Hooman Moshar, vice president of engineering for Broadcom’s broadband carrier business unit, called power “the nemesis” of designers.

Moshar pointed to three walls that design teams face in developing chips. One is the area wall, which is tied to process shrinks, where steep mask and wafer costs have been the main problems. The second is a speed wall, where parasitics begin entering the picture to limit improvements in performance. The third is the power wall, where integration of various components is outpacing power reduction.

“These are all interdependent problems,” Moshar said, pointing to such factors as power variation in new process nodes, subthreshold leakage, dynamic power density and a new emphasis on reducing voltage to improve battery life. “Successful design for power always starts with the architecture, which means the power budget, which needs to be predicted with accuracy and signed off at each stage of the design.”

It also requires careful choice of all of the components that need to be integrated into the system.

“A modern processor, be it central processing, graphics or AI, should support a variety of low-power modes,” said Arm’s Dirvin. “Where possible, the main processing blocks within the system should be tuned to ensure they provide the correct amount of compute for the specific task at hand. It’s not just a matter of picking an architecture with an array of low-power modes. A vibrant ecosystem that provides a range of tools, debuggers, compilers, drivers and operating systems will help ensure the hardware is being managed correctly to achieve these power savings. Finally, think about the entire system, not just the main processing elements. Total system power, not just chip power, needs to be considered. Choose low-power peripherals that will complement your energy-efficient compute selection.”
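
On an Arm Cortex-M-class part, for instance, that tuning often comes down to picking between a light sleep and a deep sleep based on how long the core expects to stay idle. Here is a minimal sketch using standard CMSIS intrinsics; the device header name and the wake-latency threshold are assumed, platform-specific details:

```c
#include "device.h"   /* placeholder: the part's device header, which pulls in CMSIS */

/* Assumed platform-specific tradeoff point: below this idle time, the
 * energy saved by deep sleep doesn't repay its longer wake-up latency. */
#define DEEP_SLEEP_MIN_US  1000u

void enter_idle(uint32_t expected_idle_us)
{
    if (expected_idle_us >= DEEP_SLEEP_MIN_US)
        SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;   /* request deep sleep        */
    else
        SCB->SCR &= ~SCB_SCR_SLEEPDEEP_Msk;  /* plain sleep: fast wake-up */

    __DSB();   /* ensure outstanding memory writes complete */
    __WFI();   /* wait for interrupt -- core clock gated    */
}
```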

That’s one piece of the puzzle. There are different techniques and challenges unique to these types of applications, and it’s not always exactly what people expect.

“When you ask people if they care about power, everyone says yes, but it means completely different things to different people, even if it’s really important to all of them,” said Dave Pursley, product management director for Cadence’s Digital & Signoff Group. “In the case of ‘other devices,’ people almost always want to minimize power, but they want to minimize power within the constraints of everything else—within the constraints of area, within the constraints of performance, within the constraints of the functionality.”

Also, when designing for truly ultra-low-power applications, the best optimization by far is starting with the right architecture. “For engineering teams designing always-on, ultra-low-power, battery-powered devices, a lot of time is spent trying to get that architecture correct because they know that up to 80% of the power optimization is locked down by the time the RTL is coded,” Pursley said. “All those techniques the ‘other devices’ engineers are using still need to be applied, but on their own they aren’t going to move the needle. They’re necessary, but not sufficient. You really need to spend some time doing architectural analysis and exploration.”

Still, architectural analysis can be difficult when it involves low power, because most designers don’t have the same intuitive feel for the power-related implications of their decisions as they do when optimizing for area or performance. “This is likely due to the fact that they haven’t been primarily focused on power as often,” he said. “Also, power is dynamic, so it depends on what the input stimulus is. You could make some decisions based on one input stimulus, and it could even be a perfect decision. But then, when you get the actual application running the actual stimulus (the actual vectors), it can be different. A prototypical example is a chip optimized for power where the power analysis was done by booting Linux. The problem there is that everything that happens when booting Linux is completely different from what happens when running Linux with the actual end application. My advice to these types of engineers is to make sure they spend the time up front analyzing and figuring out their power optimization.”
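
One pragmatic way to keep that analysis honest is to bracket the region of interest with a marker the power-capture setup can see, so the measurement covers the real workload rather than boot activity. A hedged sketch; the GPIO hooks and pin number are placeholders for the board’s actual driver:

```c
#include <stdint.h>

/* Hypothetical GPIO hooks -- replace with the board's actual driver. */
extern void gpio_set(uint32_t pin);
extern void gpio_clear(uint32_t pin);

#define PWR_MARKER_PIN  7u   /* assumed pin, wired to the power analyzer's trigger */

/* Run the real application kernel between trigger edges so the captured
 * current trace (or the simulation vectors) reflects mission-mode
 * activity, not Linux boot or other setup code. */
void profile_region(void (*workload)(void))
{
    gpio_set(PWR_MARKER_PIN);    /* start-of-region marker     */
    workload();                  /* the actual end application */
    gpio_clear(PWR_MARKER_PIN);  /* end-of-region marker       */
}
```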

The consensus is widespread on that point, both from tools vendors and end users. Annapoorna Krishnaswamy, product marketing manager at Ansys, stressed that an RTL-to-GDS design-for-power methodology is essential for providing early insights at the RTL stage for power estimation, analysis-driven power reduction and RTL-based power grid integrity planning. All of those are required to make cost-effective packaging choices that can meet power and thermal requirements.

“Also, the power delivery network (PDN) is a completely connected network across the chip, package and board, and it must work reliably across all real use-case scenarios,” Krishnaswamy said. “Comprehensive workflows that address power, signal and thermal integrity across the chip, package and board can help avoid overdesign that adds cost or underdesign that leads to product failure, and help accelerate product success.”

Reducing margin
One of the key ways engineers have dealt with power in the past is to add more margin into designs. That margin involves both area and circuitry, however, and as devices become more complex and heterogeneous, it begins to take a toll on power and performance as well as adding to the cost.

“The biggest problem with overdesign is knowing how much you’ve overdesigned,” said Oliver King, CTO at Moortec Semiconductor. “Especially on the most advanced nodes, nobody knows what a finFET aging model really does, but real-time monitoring of those things can provide insights. If we can say, in mission mode, this is how much something has degraded, then it gives the ability to react. It’s not as good as having guaranteed lifetime models, but no one has that today, so we’re having to work around it.”

This is particularly evident in designs involving AI, machine learning and deep learning, especially at the edge.

“With those applications there is a very strong desire to push power down, and by that, reduce supply voltages and operate things closer to the edge,” said King. “That’s what it really boils down to. There is margin there, and engineering teams want to get to the point where the margin is almost gone—or in some cases it might have gone away and they have to back up a little bit. The basic premise is that the design margins are there because we don’t know how much the process varies by, or more truthfully, what speed a particular chip is at. As a result, you design the chip to work over a bunch of margins and PVT corners, as they would have been called, although that’s probably a simplification now because ‘P’ is a multiple-corner set. If you can measure that, you can effectively optimize the supply voltages, which is really what you need to do. If your throughput is a known target, you can adjust the voltage to get that throughput. And if your chip is fast, you need less supply than if it is slow, so you can effectively push them all into the middle, which gives you a narrower spread of performance, which is really what most people want.”
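
In closed-loop form, that measurement feeds a simple controller: read an on-chip speed monitor (often a ring oscillator), compare it against a target, and nudge the regulator. The sketch below illustrates the idea; the monitor and regulator hooks, step size and limits are all invented for illustration:

```c
#include <stdint.h>

/* Hypothetical hooks for an on-die speed monitor and a DVS-capable
 * regulator -- real parts expose these through vendor-specific registers. */
extern uint32_t ring_osc_count(void);        /* higher count = faster silicon */
extern void     set_supply_mv(uint32_t mv);

#define TARGET_COUNT  12000u  /* count that meets timing at the chosen clock */
#define GUARD_BAND      200u  /* hysteresis so the loop doesn't hunt         */
#define VDD_MIN_MV      600u
#define VDD_MAX_MV      900u
#define STEP_MV          10u

/* Adaptive voltage scaling: fast die gets less supply, slow die gets more,
 * pushing every part toward the same effective performance point. */
void avs_step(uint32_t *vdd_mv)
{
    uint32_t count = ring_osc_count();

    if (count > TARGET_COUNT + GUARD_BAND && *vdd_mv > VDD_MIN_MV)
        *vdd_mv -= STEP_MV;   /* faster than needed: trim voltage */
    else if (count < TARGET_COUNT - GUARD_BAND && *vdd_mv < VDD_MAX_MV)
        *vdd_mv += STEP_MV;   /* too slow: add margin back        */

    set_supply_mv(*vdd_mv);
}
```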

There are other technical considerations when designing battery-powered, ultra-low-power IoT devices, according to Torsten Reich, group manager, Integrated Sensor Electronics at Fraunhofer EAS. Among them:

  • The choice of the most suitable semiconductor technology (driven by commercial considerations, the ratio of analog to digital components, and the use case-driven power budget).
  • Sub-threshold design.
  • Transferring established techniques from digital to analog components, such as voltage scaling, clock gating, duty cycling and wake-up techniques (a duty-cycling sketch follows this list).
  • Time-to-digital approaches for data conversion.
  • Switched circuitry approaches.
  • Ultra-low-power optimized local and global power management.
  • Sophisticated DC-DC converters as enablers.
  • ULP-optimized AMS design flows, from model to layout.
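
Duty cycling and wake-up techniques from that list often dominate average power, so they are worth a concrete look. A minimal hypothetical loop: power the sensor briefly, transmit only if the reading matters, then hold the whole system in its lowest state until a wake-up timer fires. `rtc_sleep_ms()` and the peripheral hooks are assumed names:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical platform hooks for a duty-cycled sensor node. */
extern void    sensors_power(bool on);
extern int16_t sensor_read(void);
extern void    radio_send_reading(int16_t value);
extern void    rtc_sleep_ms(uint32_t ms);   /* deep sleep, RTC wake-up */

#define SAMPLE_PERIOD_MS  60000u   /* wake once a minute             */
#define REPORT_DELTA          5    /* report only meaningful changes */

void node_main_loop(void)
{
    int16_t last_sent = 0;

    for (;;) {
        sensors_power(true);               /* brief active burst */
        int16_t v = sensor_read();
        sensors_power(false);

        if (v - last_sent > REPORT_DELTA || last_sent - v > REPORT_DELTA) {
            radio_send_reading(v);         /* radio only when needed */
            last_sent = v;
        }

        rtc_sleep_ms(SAMPLE_PERIOD_MS);    /* vast majority of time in deep sleep */
    }
}
```

With an active burst of a few milliseconds per minute, the average current is dominated by the sleep-state draw, which is why the list above ends at power management and DC-DC conversion.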

New verification approaches
In a semiconductor world increasingly populated by heterogeneous systems and standard platforms, it is likely that some IoT applications will use large chips but power up only what is needed. Further, some IoT platforms will contain a wide array of sensors and sensor logic, only a few of which might be used for a given application.

This makes it critical to verify that only acceptable power modes can be configured during chip operation, noted Tom Anderson, technical marketing consultant at OneSpin Solutions.

“Powering up more portions of a device than planned could exceed battery limits, compromise cooling methods, or permanently damage the silicon,” Anderson said. “Only the exhaustive analysis of formal verification can prove that power controllers and related logic will stay within acceptable operating modes. Thus power, in addition to safety and security concerns, is driving wider adoption of formal technologies for verification of IoT designs.”
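
The flavor of property involved is easy to state. The sketch below models a hypothetical three-domain power controller in C, with an assertion that illegal mode combinations are unreachable; a bounded model checker such as CBMC can explore every request sequence. This illustrates the idea rather than any particular vendor’s flow, and the domains and legality rule are invented:

```c
#include <assert.h>
#include <stdbool.h>

/* Invented power domains for illustration. */
typedef struct {
    bool cpu_on;
    bool radio_on;
    bool sensor_on;
} power_state_t;

/* Safety property: the radio must never be powered without the CPU,
 * and at most two domains may be on at once (battery peak-current limit). */
static bool state_legal(power_state_t s)
{
    int on = (int)s.cpu_on + (int)s.radio_on + (int)s.sensor_on;
    return (!s.radio_on || s.cpu_on) && on <= 2;
}

/* Transition function of the (hypothetical) power controller. */
static power_state_t next_state(unsigned request)
{
    switch (request & 3u) {
    case 0:  return (power_state_t){ false, false, true  };  /* sense   */
    case 1:  return (power_state_t){ true,  false, true  };  /* process */
    case 2:  return (power_state_t){ true,  true,  false };  /* report  */
    default: return (power_state_t){ false, false, false };  /* sleep   */
    }
}

unsigned nondet_request(void);  /* no body: CBMC treats it as a free input */

int main(void)
{
    power_state_t s = { false, false, false };
    for (unsigned i = 0; i < 8; i++) {
        s = next_state(nondet_request());
        assert(state_legal(s));  /* checked for every possible request stream */
    }
    return 0;
}
```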

To be sure, the industry is now seeing some pretty extreme power requirements when it comes to remote IoT devices. Case in point: Dave Kelf, chief marketing officer at Breker Verification Systems, pointed to recent work with a user involving a design where the battery had to last the life of the product, which was approximately 10 years on a device performing some hefty signal processing and wireless communication.
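
The arithmetic behind such a requirement is unforgiving. Assuming, say, a 225 mAh coin cell, ten years is roughly 87,600 hours, so the average current draw has to stay near 225 mAh / 87,600 h ≈ 2.6 µA, with every burst of signal processing and radio traffic amortized into that figure.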

“One of the biggest problems they had was measuring the potential power consumption prior to fabrication,” Kelf said. “The only sensible method was to raise the test abstraction to a point where they could statistically manipulate system-level parameters, and then perform detailed profiling of the design operation while running those tests. New verification techniques have to be applied for these purposes, and it’s one where emulation and portable stimulus testbenches play well.”
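
In spirit, raising the test abstraction means the testbench draws whole scenarios, weighted to resemble expected field behavior, rather than hand-written vectors. A toy C sketch of such a generator (the scenario names and weights are invented; a portable stimulus tool would express the same intent declaratively):

```c
#include <stdlib.h>

/* Invented system-level scenarios and their assumed field-usage weights. */
typedef enum { SC_IDLE, SC_SENSE, SC_DSP_BURST, SC_RADIO_TX, SC_COUNT } scenario_t;

static const unsigned weights[SC_COUNT] = { 70, 20, 7, 3 };  /* percent */

extern void run_scenario(scenario_t sc);  /* drives the emulated design */

/* Draw scenarios from the weighted distribution so the power profile
 * collected on the emulator statistically resembles years in the field. */
static scenario_t pick_scenario(void)
{
    unsigned r = (unsigned)(rand() % 100), acc = 0;
    for (int i = 0; i < SC_COUNT; i++) {
        acc += weights[i];
        if (r < acc)
            return (scenario_t)i;
    }
    return SC_IDLE;
}

void stimulate(unsigned n)
{
    for (unsigned i = 0; i < n; i++)
        run_scenario(pick_scenario());
}
```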

Conclusion
Creating low-power designs has never been easy, and as a result power typically was given the least attention. But as tolerances shrink along with process geometries, and as the number of heterogeneous elements in a design continues to expand, power has emerged as a key part of the design, and in many cases the most important one.

This is particularly true for designs at the edge, and for anything tied to a battery, because the performance demands on those devices are going up even as power budgets stay flat or shrink. That also makes these devices increasingly difficult to design, and it likely will become even harder as more compute requirements are pushed to the edge.


