Embedded Power Management Challenges Grow

The number of variables is increasing, and so is the number of components involved in any decision about power.


By Ann Steffora Mutschler
Power management always has been, and will continue to be, a big issue for electronic devices. But when it comes to power management in embedded systems, whether that means controlling battery power in a smartphone, an industrial automation system or an automotive application, among a myriad of other options, the approaches come with different variables.

For example, deeply embedded systems, whether in phones where battery life is at a premium or in dog collars with GPS tracking, do not operate all the time. At the very least, they don't have to operate at full capacity all the time, explained Richard Rejmaniak, technical marketing engineer at Mentor Graphics. This means the devices must have the intelligence to operate as little as possible while still accomplishing the task at hand, thereby extending overall battery life.

“If your cell phone display were on all the time, it wouldn’t last an hour and a half. So what they do is they are very greedy about keeping the display off. The same goes for any type of radio link. In cell phones, the radio system is designed at the protocol level, from the very beginning, to operate at an extremely low level until it’s needed for conversation or data. Then it jacks up. That’s why, if you turn on Wi-Fi on your cell phone, it’s on all the time, and that’s why it seems to die really fast.”
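In practice, that "operate as little as possible" discipline usually comes down to duty cycling: wake up, do the minimum amount of work, power the peripherals back down, and sleep. The following C sketch illustrates the idea for a GPS tracking collar; every function name here (radio_power_on(), read_gps_fix(), enter_deep_sleep() and so on) is a placeholder for whatever the real hardware abstraction layer provides, not any particular vendor's API.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hardware hooks -- placeholder names, not a real vendor API. */
void radio_power_on(void);
void radio_power_off(void);
bool radio_send(const uint8_t *buf, uint16_t len);
uint16_t read_gps_fix(uint8_t *buf, uint16_t max_len);
void enter_deep_sleep(uint32_t seconds);

/* Report a GPS position every five minutes; everything stays off in between. */
#define REPORT_INTERVAL_S 300u

void tracker_main_loop(void)
{
    uint8_t fix[32];

    for (;;) {
        uint16_t len = read_gps_fix(fix, sizeof fix);   /* GPS sampled briefly */

        if (len > 0) {
            radio_power_on();            /* radio powered only for the send */
            (void)radio_send(fix, len);
            radio_power_off();
        }

        /* Sleep dominates the duty cycle, so it dominates battery life. */
        enter_deep_sleep(REPORT_INTERVAL_S);
    }
}

The point of the sketch is simply that the radio is powered for milliseconds at a time while deep sleep fills the rest of the schedule, which is where the battery savings come from.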

What becomes clear very quickly is that power management of embedded systems is a play on both the hardware and software sides, noted Tom De Schutter, senior product marketing manager at Synopsys. “Companies such as ARM keep coming up with new processors to do things better or to swap, based on the concept of having something that is powerful and consumes more power and being able to swap to something else that consumes less power on the same chip. If you look at the mobile phone, 90% of the time it’s not doing any big tasks, so should it run on this big processor? That’s where it’s really important to now come up with a framework that supports both of those extreme use cases and then also to figure out how to do the tasks in between.”
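A crude way to picture that "swap to something smaller" idea is a policy that moves work between a high-performance core and a low-power core based on sustained load. The sketch below is only an illustration of the concept; real big.LITTLE-style schedulers live in the OS kernel and use far richer heuristics, and the hooks here (read_cpu_load_percent(), migrate_to_core()) are invented for the example.

#include <stdint.h>

enum core { LITTLE_CORE, BIG_CORE };

/* Hypothetical platform hooks -- invented names, not a real scheduler API. */
uint8_t read_cpu_load_percent(void);          /* 0..100, averaged over a window */
void    migrate_to_core(enum core target);

/* Hysteresis thresholds keep the system from ping-ponging between cores. */
#define UP_THRESHOLD    75u
#define DOWN_THRESHOLD  30u

void rebalance(enum core *current)
{
    uint8_t load = read_cpu_load_percent();

    if (*current == LITTLE_CORE && load > UP_THRESHOLD) {
        migrate_to_core(BIG_CORE);        /* burst of work: pay for performance */
        *current = BIG_CORE;
    } else if (*current == BIG_CORE && load < DOWN_THRESHOLD) {
        migrate_to_core(LITTLE_CORE);     /* mostly idle: fall back to the small core */
        *current = LITTLE_CORE;
    }
}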

Bernard Murphy, chief technology officer at Atrenta, breaks it down further, beyond embedded versus non-embedded. “It’s consumer embedded versus non-consumer embedded versus non-embedded. When we think about consumer stuff—iPhone, Galaxy S3, tablets, things of that nature—they are very, very hardware-feature-rich. There are a lot of capabilities on them. They’ve got MP3 and video and GPS and altimeters, near-field—a ton of which is going to be accessed by the user in a normal flow of usage that is quite unpredictable.”

Therefore, he continued, “you’ll find all the guys who work on smartphones have this concept of use cases, where they are trying to say, ‘The user is going to be listening to music on their MP3 player and then a call is going to come in, which is going to be 3G or 4G, and that’s going to last two minutes, and then they’re going to stop doing that and go browse the Web, or maybe they’ll browse the Web while the call is on.’ They build very complex use cases that try to guess how people are going to use all of these features on the phone. But just step back and think about how predictable that process is. Not very. We can be doing anything, so it’s really difficult to do more than come up with standard ways to save power in that case.”

Even with switchable voltage domains, switchable power domains, clock gating and biasing, it’s difficult to bubble any of that up to the software applications. The software engineers have no idea what else is going to be going on while their tasks are running. One piece of software might say it’s okay to power down the MP3 player, but another may say you can’t power down Web browsing because the user is looking something up while on a call.
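One common way software frameworks handle exactly that conflict is reference counting on each power domain: a block is switched off only when its last user releases it, so the MP3 player letting go of a shared domain can never kill an ongoing browsing session. The C sketch below is generic and hypothetical; the hw_domain_on()/hw_domain_off() calls stand in for real power-switch registers, and a production version would also need locking.

#include <stdint.h>

/* Placeholders for the real power-switch hardware accesses. */
void hw_domain_on(int domain_id);
void hw_domain_off(int domain_id);

struct power_domain {
    int      id;
    uint32_t refcount;    /* number of clients currently using this domain */
};

/* A client (say, the MP3 decoder) claims the domain before using it. */
void domain_get(struct power_domain *d)
{
    if (d->refcount++ == 0)
        hw_domain_on(d->id);              /* first user: actually power up */
}

/* Releasing powers down only when no other client still needs the block,
 * so one task letting go never interrupts another task's ongoing work.
 * A real implementation would protect the count with a lock. */
void domain_put(struct power_domain *d)
{
    if (d->refcount > 0 && --d->refcount == 0)
        hw_domain_off(d->id);             /* last user gone: safe to power down */
}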

When it comes to embedded applications in the industrial or automotive space, things are a bit more predictable. “In embedded processors in an automotive network, if you have a processor that is dealing with the brakes, then it’s only got one task,” Murphy said. “It doesn’t have to worry about switching these things in and out. Of course, the infotainment piece of it is just as bad as a feature phone. That part you can’t do much about. Non-embedded is much simpler. You either turn the chip off or you turn it on.”

Frank Schirrmeister, group director of product marketing for system development in the system and software realization group at Cadence, observed that there are two aspects to power in embedded systems. One is how to reduce the power consumption of the embedded system itself as it interacts with its environment. The second is how to power the system in the first place.

From a design perspective, that’s really all about understanding the environment. And in order to interact with that environment, there must be flexibility, which comes back to software.

The way an operating environment for a system works today, he said, is that there are upper and lower boundaries on how the system operates, and functions are switched on and off within them. “ARM’s idea of dark silicon plays into it, but you are wasting space, because if you look into the environment, requirements change over time. If I have an embedded system like an M0- or M4-based design from ARM, even the old power budget (the upper and lower limit) doesn’t work anymore. It needs to be able to change over time and sense the environment’s requirements. It all comes back to very smart power management in the environment, and that again comes back to software. I need to be able to sense in my environment what’s going on and what the requirements are, and then switch components in the system on and off on demand, even at the interface level.”

From a design perspective, that means modeling the system virtually and dynamically adapting it to the environment, which is where things like power-annotated virtual platforms come in. The system can be modeled, the power data annotated, and the software then written very flexibly, Schirrmeister said. “You need to prepare for the unexpected, which always leads toward software approaches, and you need to be very advanced in sensing and adjusting.”
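A minimal picture of what "power-annotated" means: each power state of a component model carries a power figure, and the model integrates energy over virtual time as the software under simulation switches states. The sketch below uses invented numbers and a toy model, not any commercial virtual-platform API.

#include <stdio.h>

/* Toy power-annotated component model: each state carries an assumed power
 * figure, and energy is integrated over virtual time at each state change. */

enum pstate { STATE_OFF, STATE_SLEEP, STATE_ACTIVE, NUM_STATES };

static const double power_mw[NUM_STATES] = {
    [STATE_OFF]    = 0.0,
    [STATE_SLEEP]  = 0.5,      /* illustrative numbers, not real silicon data */
    [STATE_ACTIVE] = 120.0,
};

struct component_model {
    enum pstate state;
    double      last_change_ms;   /* virtual time of the last state change */
    double      energy_uj;        /* accumulated energy (mW * ms = microjoules) */
};

void model_set_state(struct component_model *m, enum pstate next, double now_ms)
{
    m->energy_uj += power_mw[m->state] * (now_ms - m->last_change_ms);
    m->state = next;
    m->last_change_ms = now_ms;
}

int main(void)
{
    struct component_model radio = { STATE_SLEEP, 0.0, 0.0 };

    model_set_state(&radio, STATE_ACTIVE, 100.0);   /* software wakes the radio */
    model_set_state(&radio, STATE_SLEEP, 140.0);    /* 40 ms of transmit */
    model_set_state(&radio, STATE_OFF, 1000.0);     /* idle until shutdown */

    printf("radio energy over the scenario: %.1f uJ\n", radio.energy_uj);
    return 0;
}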

Chipping away at the problem
There is no silver bullet and no single tool or methodology that will solve this problem. “The fact that it spans all the way from process all the way up to application makes it very difficult to find a unique solution,” said Atrenta’s Murphy. “The number of ways of saving power is significant. There are a lot of things you can do, and none of those really overlap strongly with any of the others. It’s going to continue to be the kind of problem where you just chip away at it from multiple directions and, except in some industrial applications, there isn’t going to be much impact from the software layer because it’s just too difficult to predict what the software impact is going to be in a lot of different circumstances.”

Ideally there would be some brilliant, automated tool flow to design and manage the power in embedded systems, but for the most part that is not where the industry is today.

However, Mentor’s Rejmaniak said that up until now it has been very much a skill set, because the field is fairly new and it takes a while for practice to catch up after new developments arrive. “It’s only in the last 10 years that these extremely powerful processors running at very low power have become available. Prior to that, the closest you got to an embedded system was the garage door opener, which is a different world entirely.”

While power management in big servers has long been integrated into the operating system, in the embedded world engineering teams until now have had to design power management from the ground up because it simply wasn’t available any other way. Mentor believes changes are afoot, judging from the vantage point of its Nucleus framework, in which each peripheral and its device driver interact within a middleware framework while the application choreographs the operations of the system.

“This automates the shutdown process and the bring-back process that occurs when the power needs to come back up, and it does it in sequence. A lot of these processors have the ability to shift the clock frequency while you’re running, lowering the clock frequency and the power. The problem is, when you change the clock frequency, all the timing of your peripherals has to be recalculated. We put in the framework an opportunity for the driver to accept that. The person writing the application code and the person designing the system know it happened, but they don’t have to care about how or when or what the details are. They can just make one function call instead of kicking off huge amounts of untested code that they would have to develop on their own,” Rejmaniak added.
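The Nucleus code itself isn't shown here, but the general pattern Rejmaniak describes, in which drivers register to be told when the clock frequency changes so they can recalculate their timing, can be sketched generically. The names below (pm_register_driver(), pm_set_frequency(), hw_set_core_clock()) are invented for illustration and are not the Nucleus API; the application-visible part is the single pm_set_frequency() call.

#include <stddef.h>
#include <stdint.h>

/* Generic sketch of a frequency-change notification scheme: each driver
 * registers a callback, and the framework invokes every callback whenever
 * the application requests a new operating frequency, so baud-rate dividers,
 * timer reloads and the like are recalculated in one place. */

typedef void (*freq_change_cb)(uint32_t new_hz, void *driver_ctx);

#define MAX_DRIVERS 16

static struct {
    freq_change_cb cb;
    void          *ctx;
} g_listeners[MAX_DRIVERS];
static size_t g_count;

void hw_set_core_clock(uint32_t hz);    /* placeholder for the real clock control */

int pm_register_driver(freq_change_cb cb, void *ctx)
{
    if (g_count >= MAX_DRIVERS)
        return -1;                      /* no room left in the notification table */
    g_listeners[g_count].cb  = cb;
    g_listeners[g_count].ctx = ctx;
    g_count++;
    return 0;
}

/* The one call the application makes; it never sees the per-driver details. */
void pm_set_frequency(uint32_t hz)
{
    hw_set_core_clock(hz);
    for (size_t i = 0; i < g_count; i++)
        g_listeners[i].cb(hz, g_listeners[i].ctx);
}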

Fortunately, this is an area with many minds on it.

Another approach, De Schutter suggested, is for SoC developers to create a VDK for each specific SoC and then do the annotation with the power information on top of that. “It basically creates a test suite that you need to go through, or a test suite that you need to pass, on the power side. Right now you probably have tests that look at performance. So you could think about more of a framework where you provide the information, or even enforce the information, and enforce a kind of sub-test suite.”
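As an assumption of how such a power-side test suite might look, the sketch below checks the energy measured for a scenario on a power-annotated platform against a budget, the same way a performance test would check latency. The measure_scenario_energy_uj() hook, the scenario names and the budgets are all hypothetical.

#include <stdio.h>

/* Hypothetical hook: runs the named scenario on a power-annotated platform
 * model and returns the energy it consumed, in microjoules. */
double measure_scenario_energy_uj(const char *scenario_name);

static int check_power_budget(const char *scenario, double budget_uj)
{
    double used = measure_scenario_energy_uj(scenario);

    if (used > budget_uj) {
        printf("FAIL %-20s %.1f uJ used, budget %.1f uJ\n", scenario, used, budget_uj);
        return 1;
    }
    printf("PASS %-20s %.1f uJ used, budget %.1f uJ\n", scenario, used, budget_uj);
    return 0;
}

int main(void)
{
    int failures = 0;

    /* Scenario names and budgets are invented for the example. */
    failures += check_power_budget("mp3_playback_60s", 50000.0);
    failures += check_power_budget("web_browse_30s",  120000.0);

    return failures;   /* nonzero exit fails the suite, like any regression test */
}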

Of course, a lot of the power management is now a subsystem by itself. “For instance, there is a Cortex-M3 or M4 from ARM, which is controlling the entire dynamic voltage and frequency scaling,” De Schutter said. “For a typical SoC, just the power bring-up and power integration software by itself is multiple thousands of lines of code. So now you have an entire subsystem whose primary function is to deal with power and control the power. That’s another piece that comes into play, both on the hardware developer side and on the software side.”
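At its simplest, what such a power-control subsystem does is step the SoC through a table of legal voltage/frequency operating points, raising the voltage before the frequency on the way up and lowering it afterward on the way down. The sketch below is a bare-bones illustration with invented helper functions and made-up numbers, not ARM's actual power-management firmware.

#include <stdint.h>

/* Legal operating points -- illustrative values only. The voltage must be
 * high enough for the target frequency before the frequency is raised. */
struct opp { uint32_t freq_mhz; uint32_t voltage_mv; };

static const struct opp opp_table[] = {
    {  200,  900 },
    {  600, 1000 },
    { 1200, 1100 },
};

/* Placeholders for PMIC and clock-generator access from the control core. */
void pmic_set_voltage_mv(uint32_t mv);
void pll_set_frequency_mhz(uint32_t mhz);

void dvfs_set_opp(const struct opp *current, const struct opp *target)
{
    if (target->freq_mhz > current->freq_mhz) {
        /* Going faster: raise the rail first, then the clock. */
        pmic_set_voltage_mv(target->voltage_mv);
        pll_set_frequency_mhz(target->freq_mhz);
    } else {
        /* Going slower: drop the clock first, then the rail. */
        pll_set_frequency_mhz(target->freq_mhz);
        pmic_set_voltage_mv(target->voltage_mv);
    }
}

/* Example: step up from the lowest to the highest point in the table. */
void boost_to_max(void)
{
    dvfs_set_opp(&opp_table[0], &opp_table[2]);
}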

This complexity lends itself to virtual prototyping, to be sure. “The software side has become so complex on top of the hardware side that the need to do it right, and to do it together with the hardware, especially that last part, has become really important,” De Schutter concluded.


