Always-On, Ultra-Low-Power Design Gains Traction

Powering down most of a system is good for conserving batteries, but leaving some circuits on adds complex challenges for design teams.


A surge of electronic devices powered by batteries, combined with ever-increasing demand for more features, intelligence, and performance, is putting a premium on chip designs that require much lower power. This is especially true for always-on circuits, which are being added into AR/VR, automotive applications with over-the-air updates, security cameras, drones, and robotics.

Also known as power-managed architectures, these designs include circuitry that can be powered down or switched off entirely. In the case of IoT designs and microcontrollers, this could include up to 10 managed power domains. Cell phone SoCs may have hundreds of power-managed areas. And in server designs with independently power-managed cores, SoCs may contain as many as 128 compute engines.

According to Godwin Maben, a fellow at Synopsys, there are two classes of always-on (AON) architectures in low power/ultra low power devices: mobile and IoT edge devices, and ultra low power mission-critical devices. “In mobile and IoT edge devices, the goal is to minimize the AON logic since it impacts battery life, so the key challenge is to identify what needs to remain always-on (alive). To accomplish this, designers have come up with ‘fully alive,’ ‘partially alive,’ ‘semi alive,’ and ‘shut-off’ architectures. Key logic components here include modems (for signal polling), PLLs, and clock logic needed to remain always-on. In many cases retention registers are used to achieve this.”

Then there are ultra-low-power mission-critical devices such as medical devices/pacemakers, emergency alarms, and security components of chips. Maben noted these must be always on, but still have to be very power-sensitive since the batteries are expected to last a very long time. “To achieve this, architects operate at very low voltages and divide the AON architectures into ‘near-VT voltage,’ ‘ultra-low voltage,’ ‘low voltage,’ and ‘medium voltage,’ among other schemes. This is similar to a shut-down/power-gated architecture, except voltage is controlled in a way that the device remains alive and retains values even when not in active use.”

In other words, ‘always-on’ is a very rudimentary form of computation that is running constantly on the device, noted Amol Borkar, director of product management and marketing, Tensilica Vision and AI DSPs at Cadence. “For example, typically when your mobile phone is sitting on a table, the display is off. But when you pick it up and hold it at a certain angle, it assumes that someone is probably looking at the phone. That’s why the screen comes on. That is an example of an always-on application. There is some very low power computation that is constantly running on the phone, and when it triggers to indicate you need to power up the larger devices like the CPU or the AI engines, only then does it wake them up.”

In the past, this wake up was done with very basic processing, like a small gyroscope that looked at the phone’s orientation, because the user can’t be looking at the phone at all times. “If you turn it to a certain angle, the display comes on,” Borkar said. “Also, the phone display goes off when you hold it near your ear because there’s a very basic infrared camera that just sees some level of proximity, then figures out when to turn on or turn off the display. That was the start of always-on.”

More recently there has been growing interest in adding AI to make devices even smarter. “When you add AI, it typically means there’s more compute going into the device, which leads to more power consumption. So there is work being done to determine how to still operate in that low power or ultra-low power domain,” he said.

Omar Cruz, product marketing manager at Synopsys, agreed. “Neural networking implementations are becoming a trend in always-on systems. Voice commands or always-watching to support system wake-up by means of a face trigger are now employing machine learning techniques for recognizing these voice commands, faces, etc. While implementing such functions in a very low power consumption envelope is still a key requirement, now your always-on processor needs to be able to facilitate machine learning inference requirements as well.”


Fig. 1: What stays on all the time vs. what gets powered down. Source: Synopsys

Designing these devices is easier said than done. Ultra-low power devices have complex challenges that need to be dealt with early in the design flow.

“Structurally, power-managed designs need to make sure that all signals that are turned off are properly isolated,” said Rick Koster, senior design specialist for Siemens EDA. “In multi-voltage designs, signals going between voltages need to be properly level-shifted. Functionally, verification processes need to make sure the design can get in and out of all the specified power modes — and only the specified modes. Verification has to make sure that all power modes transition in the right order. This sounds simple, but a design with 200 managed power domains has a minimum verification state space of 2^200, or about 1.6 x 10^60.”
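The scale of that state space is easy to confirm with a quick calculation. The 200-domain figure comes from the quote above; treating every domain as an independent on/off switch is a simplifying assumption, since sequencing and intermediate states only make the real space larger.

```python
# Each independently managed power domain can be on or off, so a design
# with N domains has at least 2**N possible power states (ignoring
# mode-transition ordering, which adds further combinations).
def power_state_space(num_domains: int) -> int:
    return 2 ** num_domains

states = power_state_space(200)
print(f"{states:.2e}")  # roughly 1.6e+60 states -- far beyond exhaustive simulation
```

This is why power-mode verification relies on constrained coverage of the legal transitions rather than enumerating every state.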

Questions abound with ultra-low power, always-on architectures. Is it really always on, or is it partially on? Are there actually gradations of always-on? What does this mean for prioritization and partitioning schemes if it doesn’t necessarily have to be running 100% all the time? How much of this type of architecture is application-dependent?

Also, Cruz said, devices being developed to interact with humans in always-on applications via a human interface that can include voice, motion and/or vision, are computationally intensive and use large, power-hungry processors, which greatly reduces battery life. “To mitigate the power consumption of these larger processors, smaller, ‘wake-up’ cores are now used to detect multiple sensor interface inputs and identify if a human interaction is about to begin, at which time it wakes up the bigger computation core.”
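The two-tier scheme Cruz describes can be sketched as a simple loop. The function names and the confidence threshold below are hypothetical, and a real design would use hardware interrupts and a dedicated low-power core rather than software polling.

```python
# A tiny always-on detector decides when to hand off to the big core.
# `detect` and `wake` stand in for hardware/firmware interfaces.

WAKE_THRESHOLD = 0.8  # hypothetical detection confidence

def always_on_loop(samples, detect, wake):
    """Run cheap detection on every sample; wake the big core on a hit."""
    for s in samples:
        if detect(s) >= WAKE_THRESHOLD:
            wake(s)   # power up the large compute core only now
            return s  # the big core takes over from here
    return None       # nothing detected; big core stays asleep

# Toy usage: "detection" is just the sample value itself.
hit = always_on_loop([0.1, 0.3, 0.9], detect=lambda s: s, wake=print)
```

The power saving comes from keeping `detect` trivially cheap: the expensive path runs only on the rare samples that cross the threshold.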

New math
Roland Jancke, head of the design methodology department in Fraunhofer IIS’ Engineering of Adaptive Systems Division, noted that integrating usage profiles into the development process is a challenge, and he has heard customers say they don’t know how to design an always-on circuit. This has significant ramifications across the automotive ecosystem.

“In the automotive industry, it has been known for years that the car is, on average, 5% on and 95% off,” Jancke said. “But this is no longer true, since we have a lot of interfaces from the car to the outside world that are always on, like charging of the battery. If the car is off, the charging to a grid is on. So it’s always on — either charging or discharging the battery.”

This is particularly evident with over-the-air updates, and it raises other concerns. “In the past, because it was assumed the car is only 5% on, the engineering team had a number of headroom approaches for aging estimation,” said Jancke. “If you operate the chip at 100%, then you have a factor of 20 in lifetime acceleration, because in operation it will only be working 5% of the time. But if you now want to design a circuit that has to work 100% of the time, you have no way to accelerate testing, except for temperature, and even that is limited. If a chip has to work at high temperatures, you cannot go very far in the temperature range without affecting the silicon, so this is also a challenge in designing a chip. If you have no acceleration factor for these aging or degradation effects, you cannot test them in a reasonable time. That is why we develop degradation models that still allow acceleration. If you know what the acceleration factor is, you can speed it up in simulation as far as you want, which is not possible in live testing.”
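The headroom argument above reduces to simple arithmetic. The duty-cycle figures are from Jancke's quote; the assumption that degradation scales linearly with active time is a deliberate simplification for illustration.

```python
# If a chip is active only a fraction of real time, stressing it at
# 100% activity accelerates aging by the inverse of that duty cycle
# (assuming degradation scales with active time -- a simplification).
def lifetime_acceleration(duty_cycle: float) -> float:
    return 1.0 / duty_cycle

print(lifetime_acceleration(0.05))  # 5% on-time -> factor of 20
print(lifetime_acceleration(1.00))  # always-on -> factor of 1, no headroom left
```

An always-on circuit sits at duty cycle 1.0, which is exactly why the traditional acceleration factor disappears and degradation models must take over.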

All of these factors make power estimation and analysis both more critical and more difficult.

Planning for low power
One of the big drivers behind always-on power is the move of more intelligence to the edge, where systems need to be kept up-to-date and, depending upon the application, aware of any changes in the environment. But they also need to be extremely low power, and that needs to be designed in at the architecture level.

“As processors get more complex, you need more digital multi-phase controllers and smart power stages,” said Sailesh Chittipeddi, executive vice president at Renesas Electronics America. “What you need to ask is, ‘What kind of compute power do I need? What kind of computing capabilities do I need? Is it optimized for the workload?’ The ultimate factor still has to be the lowest power consumption. And then the question becomes, do you put the connectivity on-board, or leave it outside? Or what do you do with that in terms of optimizing for power consumption? That’s something that has to be sorted out at a system level.”

The primary method engineering teams have used to lower power has been to lower the voltage. Clock gating is another commonly used technique, which switches off the clock altogether. But today’s designs make those approaches more difficult.

“Today, we’re looking at like 500 millivolts supply on some blocks, and even though in college I seem to remember the silicon required 0.7 volts to get across a gap, apparently that’s no longer the case,” said Marc Swinnen, product marketing director for the semiconductor business unit at Ansys. “These voltages are very low. That makes power analysis and power integrity much more important. When you had 1.5 volts to 1.9 volts, you could lose 100 millivolts as part of the margins since there was plenty there. But there’s not plenty left anymore, so you have to be much more careful about the power distribution. It has to be very finely analyzed down to the details, because margining is not an option. You need to understand exactly how much voltage drop you’re going to have.”
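Swinnen's point about shrinking margins can be made concrete with the numbers he quotes. The 100 mV drop and the supply voltages are his figures; expressing the drop as a fraction of the supply is our framing.

```python
# The same absolute IR drop consumes a much larger share of the supply
# at today's low voltages -- which is why blanket margining no longer works.
def drop_fraction(supply_v: float, drop_v: float) -> float:
    return drop_v / supply_v

print(f"{drop_fraction(1.8, 0.1):.1%}")  # ~5.6% of a 1.8 V supply -- tolerable margin
print(f"{drop_fraction(0.5, 0.1):.1%}")  # 20.0% of a 0.5 V supply -- unacceptable
```

At 500 mV, giving away 100 mV means losing a fifth of the supply, so the grid has to be analyzed in detail rather than padded with margin.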

Mohammed Fahad, principal technical marketing engineer at Siemens EDA, agreed. “Across the spectrum of power estimation, there are various considerations to be made while designing power- and energy-efficient ICs, such as packaging, voltage drop, electromigration (di/dt), and temperature over-shoot.”

Power tools can perform peak power and di/dt analysis to identify the parts of the design and the simulation windows where these power peaks and sudden current spikes occur. That information then can be used to reduce electromigration, crosstalk, and other signal integrity issues when the chip is fabricated and packaged. Power estimation tools also can help to identify problems involving power consumption, but they can’t fix those problems.

“Engineering teams need to use power optimization techniques in order to bring the power shoot-ups back into the power budgets, and for that reason there are various techniques that one may adopt depending upon cost and time-to-market considerations,” Fahad said.

Additionally, among the many design considerations, designers must bear the timing and area impact in mind while designing for low power and high performance.

“When optimizing the design for power, low power synthesis tools are able to take timing aspects of the design into consideration by constraining the design for Vth distribution across logic synthesis,” Fahad said. “The selection of multi-Vth transistors helps in meeting the timing path requirements. For example, building the timing-critical logic with low-Vth cells makes the signals transfer quickly, but causes more leakage power dissipation. On the other hand, using high-Vth cells reduces the leakage power, but they are best used only on the slower logic paths. Power-aware clock gating at the RTL level (where the impact of power reduction is maximum) is among the most popular approaches for reducing power consumption, but it adds extra logic to the chip’s real estate. Guided power optimization, for example, suggests power-saving enable expressions with the lowest area overhead (which is often negligible compared to the overall gate count of the design). What designers often need to consider is the impact of implementing a suggested enable expression. The choice comes down to power saving versus the cost/effort associated with a particular power-saving suggestion.”
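A minimal sketch of the multi-Vth tradeoff Fahad describes. The delay and leakage numbers below are invented for illustration, and the slack thresholds are hypothetical; real synthesis tools draw these values from characterized cell libraries and full timing analysis.

```python
# Hypothetical cell library: lower Vth switches faster but leaks more.
CELLS = {
    "low_vth":  {"delay_ps": 20, "leakage_nw": 50.0},
    "std_vth":  {"delay_ps": 30, "leakage_nw": 10.0},
    "high_vth": {"delay_ps": 45, "leakage_nw": 2.0},
}

def pick_cell(path_slack_ps: float) -> str:
    """Spend leaky fast cells only where timing demands it."""
    if path_slack_ps < 5:        # timing-critical path: need speed
        return "low_vth"
    elif path_slack_ps < 20:     # moderate slack: balanced cell
        return "std_vth"
    else:                        # plenty of slack: minimize leakage
        return "high_vth"

print(pick_cell(2))    # critical path  -> low_vth
print(pick_cell(50))   # relaxed path   -> high_vth
```

The same slack-driven logic is what lets a synthesis tool meet timing while keeping total leakage down: only a small fraction of paths ever justify the low-Vth cells.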

Designers also look for opportunities at the RTL level where they can reduce the power and timing criticality of logic blocks by restructuring the coded RTL. Power optimization tools come with a capability called micro-architectural guidance, which can help the user re-implement a part of the logic that, implemented differently, may save a good amount of power, as well as relax area and timing requirements.

There are also secondary effects to consider. “Some of these resistances are temperature-dependent,” said Ansys’ Swinnen. “Because voltage drop is associated with resistance, it’s temperature-dependent. In today’s advanced nodes, the temperature varies across the die, and there will be variability in that area that needs to be taken into account.”

However, most important to contend with now is dynamic voltage drop. “Voltage drop used to be statically analyzed by looking at an average consumption for the gates to see if the grid was big enough to supply the average requirement,” he said. “Now what we see is static voltage drop, or self-drop. If a gate switches, it draws power and pulls down some voltage. That’s self-drop, and it only accounts for about 15% of the maximum voltage drop these days. Most of it is from dynamic voltage drop, which means your neighbors are switching, and the constellation of neighbors that happen to switch at the same time as you can all together pull down your local power much more.”

Another problem is that chips don’t get physically bigger, but they do get electrically bigger.

“It used to be that a driver could drive a signal all the way from one end of the chip to the other. That’s not the case anymore,” Swinnen said. “You need multiple repeaters just to get across the chip, so electrically it’s gotten a lot bigger. It’s the same with the power network, where the power wires are now highly resistive because they’re so thin. When you pull down the local power, current rushes in from the periphery to fill that. But because the periphery is so far away, and there’s so much resistance between you and the periphery, it takes a long time for that power to arrive. The local power supply can dip quite a bit no matter what you do farther away, so you can’t rely on streaming power from the edge all the way in quickly because there’s too much resistance.”

At the same time, it’s not realistic to assume that everything will be switching, which makes this a very difficult problem.

“The industry is grappling with how to manage dynamic voltage drop when nobody wants to build in margin, but there are so many billions of possible switching scenarios,” Swinnen noted. “You can say, ‘We’ll throw some vectors at it,’ but those vectors can’t cover every possible use scenario. Traditionally, people fix the symptom. They do an analysis, they run a number of vectors, and they see if the gates are all seeing excessive voltage drop. You can upsize those gates or you can move big aggressors away, but it’s very much treating the symptoms. Advanced power analysis techniques should be used to find the root cause of those issues, such that when there are multiple scenarios, it can be determined how much each aggressor contributes to the voltage drop. It’s a small fraction of the aggressors that are actually causing the bulk of the problems. So rather than treating the symptoms, identify the much smaller subset of significant aggressors, fix those, and then when you move one aggressor out of the way, you fix 20 or 30 voltage drop problems. That is a much more efficient way of going about fixing this — finding the root cause and treating the cause, not the symptom.”
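The root-cause approach Swinnen describes can be sketched as a Pareto ranking of aggressor contributions. The contribution values below are invented for illustration; real tools extract per-aggressor contributions from dynamic voltage-drop analysis.

```python
# Hypothetical per-aggressor contributions (in mV) to one victim's drop.
contributions = {"agg_a": 42.0, "agg_b": 3.1, "agg_c": 18.5,
                 "agg_d": 1.2, "agg_e": 2.4}

def top_aggressors(contribs: dict, coverage: float = 0.8) -> list:
    """Return the smallest set of aggressors covering `coverage` of the
    total drop -- fix these instead of patching every victim gate."""
    total = sum(contribs.values())
    picked, covered = [], 0.0
    for name, mv in sorted(contribs.items(), key=lambda kv: -kv[1]):
        picked.append(name)
        covered += mv
        if covered / total >= coverage:
            break
    return picked

print(top_aggressors(contributions))  # ['agg_a', 'agg_c']
```

Here two of the five aggressors cover more than 80% of the drop, mirroring the article's point that fixing a small subset of aggressors clears dozens of symptom-level violations.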

Synopsys’ Cruz advised designers to always keep in mind that low power consumption is a key requirement specifically for battery-operated devices. “The always-on sensor monitoring, which is typically part of a larger SoC, needs to have a combination of ultra-low-power control and DSP capabilities to be able to process sensor inputs and detect events within an ultra-low-power budget (<<1 mW), e.g., face detection. At the same time, it also wakes up other parts of the SoC upon specific events, e.g., waking up the vision processor for face recognition.”

Looking ahead
The always-on architecture approach has been evolving over the past few years on the audio side with keyword spotting. “This is when you say, ‘Hey Google,’ or ‘Hey Alexa,'” said Cadence’s Borkar. “Some portion of the device is listening for that small amount of processing. When it hears that, then it wakes up the subsequent processing blocks to hear the more refined command. This provides the best battery life conservation.”

That’s true for other devices, as well. “If you take that same analogy and move it over to vision processing, or camera-based processing, the new trend is visual wake words,” Borkar said. “The question can now be whether I want to do a very basic form of person detection or face detection, such as, ‘Is there a face near the camera that I can identify as a face?’ This is very basic processing to say, ‘Yes, there is a face, I should start doing some facial authentication.’ This is of interest to developers who have microphones in their devices and now want to add a camera. It could be useful in settings where there is a lot of noise that might not be able to be filtered out. The camera could be used to determine whether there is a person looking at me, or a person nearby. In a video doorbell application, for example, it could use person detection to know there is a person within proximity of the doorbell, so it can move on to more accurate identification once that first screening has passed.”

Fundamentally, design engineers need to keep in mind that power is an electrical problem, which requires knowledge of the manufacturing process, Siemens EDA’s Koster said. “For example, will pull-ups be tied to the switched power, or instead to the always-on power? Will signals crossing through power domains be turned off or fed through? What is the proper value to isolate a signal so that it will not interfere with active logic? These are some examples of what engineers need to keep in mind.”


