Too Big To Handle?

As power management tasks are being pushed up into the software realm, can embedded software engineers handle the problem on their own?


By Ann Steffora Mutschler
With today's insatiable demand for power efficiency, power management tasks have been pushed up into the realm of the software engineer, driven by the sheer complexity of hardware design and the pressure on hardware designers to get their part right.

Managing power properly in embedded software boils down to really understanding the application and how it interacts with the hardware. This translates to many layers of nearly overwhelming software complexity.

“When you look at embedded systems there are big differences between what my refrigerator is doing versus what my cellphone is doing,” said Chris Rowen, Cadence fellow. “The refrigerator is an always-on appliance and needs to be very energy-efficient, but it doesn’t change modes all that often. It’s really something which is likely to be off most of the time and it’s really about managing power when it’s not doing anything. In the cell phone you have a much more complicated situation. It needs to have very low power standby, but battery life today is largely determined by what does it do when you’re actually doing it—when you’re talking, when you’re browsing, when you’re playing a game. Those really dominate battery life, so those are the things that you have to manage very explicitly as to how efficient the computing is.”

In both cases, the software issues come down to scenarios—how is the product being used, what are the different sleep modes when the refrigerator is asleep, and how asleep is it? “The software plays a big role in power management because it’s really only at the application level that you know anything, such as whether you’re supposed to be in light sleep or deep sleep or actually doing something. So it is incumbent on the embedded software to know what the state is and to be able to set the different power modes and turn on and off different power domains within that chip and to use the appropriate low or high power, low- or high-bandwidth communications with the rest of the network in order to do the right thing. That’s half the battle—knowing how alert to be,” he continued.

Power management may not sound very complicated to some embedded software engineers. In fact, some of the less-complicated features, such as idle tick suppression, which many embedded software providers supply, are no-brainers and can be done in a bare-metal code loop without any major engineering effort, said Rich Rejmaniak, technical marketing engineer at Mentor Graphics.
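The core of tick suppression is simple enough to sketch. Rather than waking the CPU on every periodic timer tick, the idle loop scans the pending software timers for the earliest expiry and arms a single one-shot wakeup for that point. The timer-table layout and names below are hypothetical, a minimal illustration rather than any vendor's actual API:

```c
#include <stdint.h>

#define MAX_TIMERS 8
#define NO_WAKEUP  UINT32_MAX

/* Hypothetical software-timer table: each slot holds the tick at
 * which that timer next expires (NO_WAKEUP if the slot is unused). */
typedef struct {
    uint32_t expiry[MAX_TIMERS];
} timer_table_t;

/* Tick suppression: find the earliest pending expiry so the hardware
 * timer can be armed as one one-shot event and the CPU can sleep,
 * uninterrupted, until then. */
uint32_t next_wakeup_tick(const timer_table_t *t, uint32_t now)
{
    uint32_t earliest = NO_WAKEUP;
    for (int i = 0; i < MAX_TIMERS; i++) {
        uint32_t e = t->expiry[i];
        if (e != NO_WAKEUP && e >= now && e < earliest)
            earliest = e;
    }
    return earliest;  /* NO_WAKEUP means no deadline: sleep indefinitely */
}
```

On real silicon, the returned tick would program a one-shot comparator before executing a wait-for-interrupt instruction.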

However, the trick comes when the system starts shutting things down. One process realizes it's done, decides to shut down, and issues a low-power request. But the request can't be honored at that moment because, say, the Wi-Fi is in the middle of transmitting or receiving a packet. The request still needs to be issued, but if it can't be acted on now, the system has to come back to it later, he explained.

“That’s more complex software,” Rejmaniak said. “For software to make a decision—‘Should I shut down or not?’—I’d better go check the state of all these systems using this, and then I have to make a decision. If the decision is wrong, I have to come back later; I have to schedule a process to come back at a later time. This is a fairly complex process.”
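One common way to structure the deferred-shutdown logic Rejmaniak describes is a voting scheme: each busy subsystem holds a vote against entering low power, and a shutdown request that arrives while votes remain is recorded and retried when the last vote is released. The names and structure here are an assumed sketch, not Mentor's implementation:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical "power vote" scheme: each active subsystem (Wi-Fi
 * mid-packet, DMA in flight, ...) holds a vote against low power.
 * A shutdown request is honored only when no votes remain; otherwise
 * it is marked pending and retried later. */
typedef struct {
    uint32_t active_votes;     /* subsystems still busy */
    bool     shutdown_pending; /* deferred request waiting */
} power_ctrl_t;

void power_vote_busy(power_ctrl_t *p) { p->active_votes++; }

/* Returns true if low-power entry happens now. */
bool power_request_shutdown(power_ctrl_t *p)
{
    if (p->active_votes == 0) {
        p->shutdown_pending = false;
        return true;               /* safe to enter low power */
    }
    p->shutdown_pending = true;    /* come back to it later */
    return false;
}

/* Called when a subsystem finishes (e.g. the Wi-Fi packet completes);
 * retries any pending shutdown once the last vote is released. */
bool power_vote_done(power_ctrl_t *p)
{
    if (p->active_votes > 0)
        p->active_votes--;
    if (p->active_votes == 0 && p->shutdown_pending)
        return power_request_shutdown(p);
    return false;
}
```

This is essentially the reference-counting pattern behind runtime power-management frameworks such as the one in the Linux kernel, stripped to its core.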

The bigger-picture trend in all of this, pointed out Mark Mitchell, director of embedded tools at Mentor Graphics, is that if you look beyond power management, there has been a trend over the last three decades to expose more and more processor control to software. “We went from CISC to RISC on instruction sets, but in general the hardware designers are offering us more and more capabilities. Rather than trying to figure out how to solve all the problems in hardware what they are basically saying is, ‘Here is a giant array of knobs to twist and switches to turn. Software guys, you go figure it out.’”

Even at a microarchitectural level, which is a key area where engineers have extracted really big performance increases and power reductions from designs, they’ve frequently taken stuff out of silicon and moved it to the software level. “So all this stuff keeps getting pushed up into the software layer, and power management is a great example of this. You look at some of the new modern embedded processors coming out from TI or Renesas, etc., and they are heterogeneous, multicore, perhaps have some Cortex A15s or Cortex A7s and maybe some M3s and a DSP or two and a GPU or two, and they are all tied together with a bunch of peripherals. And all that stuff is under software power management control. You can turn on the cores, you can turn off the cores, make them faster, make them slower, bring up peripherals, bring down a peripheral—you can do anything you want.”

The problem is that once things are exposed in software, the complexity goes through the roof and becomes unmanageable. “When the only knob you had to turn was related to what you want the clock frequency to be, you could probably accomplish something and do okay, but the reason for needing large policy frameworks is that complexity has gone beyond what a programmer can afford to create or can mentally handle on a device-by-device basis. Just as people used to write in assembly code and now we’ve got compilers to do that work for us, because it got too hard to work out all the individual machine instructions, the same thing is happening on the power management side. Specialized software layers are taking over the management of stuff that is just beyond our ability as programmers to handle,” Mitchell asserted.
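A concrete instance of such a policy layer is a table of operating points that hides the raw frequency and voltage knobs behind a single throughput request. The frequencies, voltages, and function names below are illustrative assumptions, not any shipping framework:

```c
#include <stdint.h>

/* Hypothetical DVFS operating-point table: the policy layer hides
 * the raw frequency/voltage "knobs" behind one load request. */
typedef struct {
    uint32_t freq_mhz;
    uint32_t volt_mv;
} opp_t;

static const opp_t opp_table[] = {
    {  200,  900 },   /* low-power point */
    {  600, 1000 },
    { 1200, 1150 },   /* full performance */
};
#define NUM_OPPS (sizeof(opp_table) / sizeof(opp_table[0]))

/* Pick the lowest operating point that still satisfies the
 * requested throughput; saturate at the top of the table. */
const opp_t *select_opp(uint32_t required_mhz)
{
    for (unsigned i = 0; i < NUM_OPPS; i++)
        if (opp_table[i].freq_mhz >= required_mhz)
            return &opp_table[i];
    return &opp_table[NUM_OPPS - 1];
}
```

The application states only how much work it needs done; the policy layer, like a compiler, works out the individual knob settings.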

An interesting parallel
In the past, it used to be that embedded software was all about performance optimizations, said William Ruby, senior director of RTL power product engineering at Apache Design, “because you wanted to run lower-cost CPUs and you wanted to cram it all into the smallest memory size to use the lowest-cost memory. But now it is becoming more about power, and it has an uncanny resemblance to the hardware flow.”

In a digital hardware flow, there is a specification, but then you dive into RTL coding, do synthesis, get a netlist and do a physical implementation, he said. “It’s kind of that way in software as well. The actual software code is kind of like RTL, and then you have compilers. Strangely enough, synthesis tools in the hardware design flow are also called compilers. And then you have stuff that actually runs on a particular system, which could be the hardware description, it could be gates or layout, and then you have the actual object code on the software side.”

Engineering teams are starting to worry about optimizing software for power, putting in some ways and means of waiting for things to happen in the system. The goal is to create smarter ways of putting the processors in the idle mode. Ruby recalled a conversation with a customer to illustrate this. “A customer called me up and said, ‘I thought that your RTL power tool had a bug because it told us that this particular block was consuming more power in the idle mode than the active mode, how is that even possible?’ And then they said, ‘Then we realized what was going on. It turns out that in the so-called idle mode there was a memory inside the block that was being clocked all the time and not really doing anything useful.’ But in the active mode the memory was selected and deselected at appropriate times, so it was actually wasting more power in the idle mode.”

From the software perspective, the trick is first to put the system in a real idle mode as much as possible and then to find the best way to wake it up. There are interrupts coming in, but depending on how fast things need to be processed, maybe the CPU could just wake up by itself once every 10 milliseconds to see if there’s anything coming in to be processed and then just go back to sleep—like a self-idling scenario, Ruby suggested.
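Ruby's self-idling scenario can be sketched as a drain-then-sleep loop: on each periodic wakeup the CPU processes everything that has arrived and immediately goes back to sleep for the poll period. The structure and names here are an assumed model; the `pending` counter stands in for a real driver's input queue:

```c
#include <stdint.h>

#define POLL_PERIOD_MS 10  /* the 10ms self-wake interval in the example */

typedef struct {
    uint32_t pending;    /* items that arrived while asleep */
    uint32_t processed;  /* total work completed */
    uint32_t sleeps;     /* how many times we went back to sleep */
} selfidle_t;

/* One wakeup: drain all pending work, then report how long to
 * sleep until the next poll. No interrupt per event is needed. */
uint32_t selfidle_tick(selfidle_t *s)
{
    while (s->pending > 0) {   /* drain everything that arrived */
        s->pending--;
        s->processed++;
    }
    s->sleeps++;
    return POLL_PERIOD_MS;     /* straight back to sleep */
}
```

The trade-off, as Ruby notes, is latency: work can wait up to one poll period, which is acceptable only when nothing needs faster service.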

“Fundamentally, software always runs on hardware so what you can do from the software perspective has to be very intimately tied, at least from the power side, to what hardware is actually capable of. Is the hardware capable of complete shutdown in idle mode, in which case you have to have the hardware designed in such a way as to retain certain states and things like that? Is the hardware capable of working with changing clock frequencies or changing voltages?” he said.

Active power management
Minimizing power when idle is important, but to Rowen, even more important is how to minimize power when in active mode. “Suppose you are running a game or filming a video or listening to music or browsing the Web and watching YouTube, what do you do then? You can’t say, ‘Shut me down.’ It may be that the applications can identify which subsystem they need and don’t need. So you may say, ‘I’m watching a video and that means I don’t need 3-D graphics so I can actually power down 3D graphics,’ or I may say, ‘I’m really requiring a relatively low level of overall computational throughput. I can have one of my ARM Cortex A7s running and not any of my Cortex A15s, because those run on a great deal of power.’ Or you may say, ‘For rendering this page I do need to turn on the power hungry A15s for a few seconds,’ or, ‘I have this really energy efficient imaging processor I’m going to offload all of these computationally intensive tasks to the imaging coprocessor because I know that it is going to be 5 or 10 times more energy-efficient than running it on my GPU or my CPU.’”

As these examples illustrate, there are a lot of choices about not just what code to run—that may be dictated by the application—but where to run it. “Do I run it in a low-power CPU, a high-power CPU, a GPU, an imaging processor, or do I happen to have some hardwired engine that does that particular thing without requiring very much processor intervention at all? People are thinking about all these different grades of processing that are working together in coordination. One of the truisms of modern Moore’s Law and system-on-chip is that the silicon is cheap but the power that it dissipates is expensive, and it’s going even further in that direction. Therefore, people are thinking about having a range of different computing choices. There’s a whole smorgasbord, a salad bar of processors,” he added.
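A dispatch policy over that "salad bar" can be sketched as routing each task to the most energy-efficient engine able to handle it, following Rowen's examples: the little core for light work, the big core for bursts, a dedicated coprocessor for imaging. The thresholds and names are assumptions for illustration:

```c
#include <stdint.h>

/* Hypothetical dispatch policy for a heterogeneous SoC. */
typedef enum { ENGINE_A7, ENGINE_A15, ENGINE_IMG_COPROC } engine_t;

typedef struct {
    uint32_t mips_needed;  /* rough throughput requirement */
    int      is_imaging;   /* 1 if the task is an imaging kernel */
} task_t;

#define A7_MAX_MIPS 2000u  /* assumed little-core capacity */

engine_t pick_engine(const task_t *t)
{
    if (t->is_imaging)
        return ENGINE_IMG_COPROC;  /* far more efficient there */
    if (t->mips_needed <= A7_MAX_MIPS)
        return ENGINE_A7;          /* stay on the low-power core */
    return ENGINE_A15;             /* burst on the big core */
}
```

Real schedulers weigh migration cost and thermal state as well, but the principle is the same: the cheapest engine that meets the deadline wins.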

This isn’t trivial work. Still, if some policy frameworks are put in place to manage that and give the application programmers a simpler view of the world, it can ease some of the burden. Otherwise, the embedded software task is intractable, Mitchell concluded.
