Estimating Power From Mobile Device Apps

Some of the challenges can be addressed with today’s modeling techniques—but the divide between hardware and software remains a sticking point.


By Ann Steffora Mutschler
How do software application developers – even the ones sitting at home on their living room sofas with laptops – measure the power consumption of their application on the target device? This is a big problem today (something that is painfully obvious to owners of iPhones or Blackberries), and it will only get bigger.

Software engineers may think it is not their problem. They can write whatever code they want, then push the issues off to the hardware engineers, who, in fact, have limited control.

To be sure, a hardware/software co-design environment is eventually going to be the ‘new frontier’ with models of abstraction used at higher and higher levels so that engineers can emulate certain applications or functions. And, of course, new tools will be needed to take these considerations into account. But from all accounts, those tools may still be years away from the engineers’ workbench, let alone the software development kit of the at-home developer.

Ideally, if high-level models can be created that abstract above the RTL descriptions of the hardware to the transaction level, hardware information, whether that is power consumption, power domains, or the like, can be captured and brought up to the software applications. Then engineers could see the impact of software and modify the hardware accordingly, said Vic Kulkarni, general manager and senior VP of the RTL business unit at Apache Design Solutions. “Today it is the reverse: you use whatever hardware is available, and software developers don’t really have knowledge of what that hardware is capable of doing.”

Pete Hardee, director of solutions marketing at Cadence Design Systems, noted that today’s smart phones, as convergent devices, contain about as much computing power as stand-alone devices did only a few years ago. “A smart phone today can easily contain the same processing power as mainstream PCs or laptops had maybe four or five years ago,” he said. They contain video capabilities that would have required set-top boxes just a couple of years ago, high-definition video, and 3- to 5-megapixel cameras. “At the same time, while we’ve had enormous leaps in the hardware technology, obviously still following Moore’s Law, the leaps in software productivity have actually outpaced Moore’s Law to make that happen on a mobile device. The thing it hasn’t outpaced is poor old battery technology. So despite all of this going on, we’ve still got lithium-ion batteries. Designers have done a great job to squeeze what they can out of them, but fundamentally we still expect to get through at least a full working day and get home and put the phone on charge.”

Granted, it does depend on what you’re doing with the phone, but the bottom line is that all of it is under software control. “When you’re analyzing power it’s not just about characterization of the hardware. You have to run with a significant number of system modes that represent the high activity of when I’m busy on all these various applications, but also represent the low activity when I’m not busy, and also switching between those system modes so I can work out when it’s worth powering down parts of the device and when it’s not,” he said.

The challenge for many chip companies today is the need to simulate 30 different system modes. In addition, they are painstakingly measuring the bandwidth in all of those modes, across various parts of the chip, and working out exactly how the power management system needs to cope: what can be slowed down, and what needs to be sped up so it can be shut down for longer. All of these various modes need to be checked out. “Being able to measure the power in response to real system activity running real software becomes a big deal, and there are very, very few solutions that can do that,” he said.
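The per-mode bookkeeping described above can be illustrated with a short sketch: given a power number and a duty cycle for each system mode, compute the weighted average power and the resulting battery life. All mode names, power figures, and the battery capacity are invented for illustration, not taken from any vendor data.

```python
# Hypothetical sketch: duty-cycle-weighted power across system modes.
# Every number here is an illustrative assumption.

SYSTEM_MODES = {
    # mode: (power_mW, fraction_of_time)
    "standby":        (5.0,   0.70),
    "audio_playback": (120.0, 0.15),
    "web_browsing":   (450.0, 0.10),
    "hd_video":       (900.0, 0.05),
}

def average_power_mw(modes):
    """Duty-cycle-weighted average power across all system modes."""
    assert abs(sum(f for _, f in modes.values()) - 1.0) < 1e-6
    return sum(p * f for p, f in modes.values())

def battery_life_hours(modes, battery_mwh):
    """Hours of use from a battery of the given capacity (mWh)."""
    return battery_mwh / average_power_mw(modes)

if __name__ == "__main__":
    avg = average_power_mw(SYSTEM_MODES)            # ~111.5 mW at this mix
    life = battery_life_hours(SYSTEM_MODES, 5180)   # ~1400 mAh at 3.7 V
    print(f"average power: {avg:.1f} mW, battery life: {life:.1f} h")
```

The point of the exercise is the one Hardee makes: the mode mix, not just the per-mode characterization, dominates the answer, so shifting time from a high-power mode to standby matters more than shaving a few milliwatts off any single mode.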

Prevalent thinking today leans toward using virtual platforms for this measurement, but Hardee believes they are too abstract to measure the effects on power. “As soon as you really need to look at the power scheme that is implemented in the hardware, you need to run at an accuracy that is going to slow down a virtual platform.”

To be fair, Cadence’s approach does include virtual platforms through its transaction-level simulators, with integration of the fast processor models from ARM and various other available processor models, but the company stresses its hardware-based emulation system for power-aware simulation.

Shabtay Matalon, ESL market development manager at Mentor Graphics, believes engineers already are familiar with the notion of abstraction—they started by abstracting gates to RTL, and now RTL functionality is abstracted at a higher level using SystemC and transaction-level modeling. “People are aware that you can also abstract timing by creating a model that doesn’t contain all the information but has sufficient information to get the notion of timing. What people may not be aware of is that we can create a model that can be used by the software engineer that contains an abstraction of power all the way up to ESL or TLM.”

This model associates power with the traffic flowing through these transaction-level models. Once those models get created they can be stitched together, Matalon said. The models can be of peripherals, of processors, or of devices, and can be stitched together to create a platform on which applications software can run.
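The idea of associating power with transaction traffic can be sketched in a few lines. This is not real SystemC/TLM, just a minimal stand-in: each component model charges a fixed energy per transaction, and a platform stitches the models together and totals the energy, mirroring the flow Matalon describes. The class names and energy figures are assumptions for illustration.

```python
# Minimal stand-in for power-annotated transaction-level models:
# each transaction flowing through a model adds a fixed energy cost.
# Energy-per-transaction values are invented for illustration.

class TLMComponent:
    def __init__(self, name, energy_per_txn_nj):
        self.name = name
        self.energy_per_txn_nj = energy_per_txn_nj  # nJ per transaction
        self.energy_nj = 0.0

    def transact(self, n=1):
        """Account for n transactions flowing through this model."""
        self.energy_nj += n * self.energy_per_txn_nj

class Platform:
    """Stitches component models together into a runnable platform."""
    def __init__(self, *components):
        self.components = {c.name: c for c in components}

    def total_energy_nj(self):
        return sum(c.energy_nj for c in self.components.values())

cpu = TLMComponent("cpu", energy_per_txn_nj=2.0)
mem = TLMComponent("dram", energy_per_txn_nj=8.0)
plat = Platform(cpu, mem)

# "Run" a software workload: 1000 CPU operations, 250 memory accesses.
cpu.transact(1000)
mem.transact(250)
print(plat.total_energy_nj())  # 2000 + 2000 = 4000 nJ
```

Because the accounting rides on the transactions the software actually generates, the same platform model yields different energy totals for different application code, which is exactly what a software engineer needs to compare implementations.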

Virtual platforms are the way to go at the very high level, agreed Cary Chin, director of technical marketing for Synopsys’ low-power solutions group. “There are some pretty good ways to hook into the software stack through a virtual platform. But I still think the connection from the virtual platform on down to high-level RTL is a little bit broken, because there’s a lot of stuff that needs to happen to connect those environments together.”

The big question to answer here, though, is how much we want the software developer to be controlling the hardware directly, he said. It runs directly up against the idea of information hiding. “In a software development environment we try to hide things because there are things we can’t actually decide better at a high level versus a low level. Those concepts come in exactly when you’re spanning software down into the hardware realm, as well, so it’s very hard to tell. You want to write software that’s really transportable between environments, but if you’re tied in too closely to a particular hardware platform, that makes it very difficult, as well.”

Educating the software developer
“With all of this, it would still be possible to write bad software that is very inefficient in the way data is used—maybe something that unnecessarily continually refreshes the LCD screen, for instance,” said Hardee. “How people get feedback for that really boils down to the application development kits that are provided by either the phone manufacturer or the network operator (Sprint has an application development network). On phones that use Android, there’s a development system. It would be possible to give people feedback in terms of bad optimization, bad memory usage, etc. in those development kits.”

Part of the solution may be an ecosystem or partnership approach, as well. “The idea of [EDA vendors] at some point partnering with somebody like Apple or Google to really extend their development kits down might actually make as much sense as trying to build stuff up from the hardware side because those guys have a lot of resources and they could actually help a lot in terms of meeting in the middle,” Chin added.

But that still doesn’t solve one of the big issues, which is the great divide that exists between the software and hardware worlds. “The chasm between hardware and software is bigger than the chasm between front-end and back-end design. The two worlds are not really well connected today and ultimately, if you think about it from the software development standpoint, there are different levels of abstraction in some sense that one can think about. There are high-level programming languages like C/C++, and then there is the low-level programming which is assembly code,” noted Will Ruby, senior director of product engineering and applications at Apache Design Solutions.

At least some of this can be dealt with in the short term by using models, but some of it will also require new technology, such as smart compilers.

“Assembly is actually closer to hardware but people typically don’t program in assembly unless they are doing embedded programming. Somehow the notion of hardware needs to be transported into a C/C++ or Java-type development environment. That’s where the models come in. We need models to represent the hardware behavior, but I think we would also need something like a smart compiler that can take advantage of some of these hardware hooks and understand that if you’re writing a program for a mobile application, you need to make some tradeoffs during compilation for performance or power consumption. People on the hardware side think about this all the time, but on the software side it’s not easy to do. So compilers may need to evolve in that direction. Compilers need to be hardware-aware and need to understand what hardware is doing,” he concluded.
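One concrete form the performance/power tradeoff Ruby describes could take is operating-point selection: a hardware-aware compiler or runtime picks the slowest frequency/voltage pair that still meets a deadline, since switching energy scales roughly with V² per cycle. The operating-point table and the simple CV² energy model below are illustrative assumptions, not any real toolchain's behavior.

```python
# Hypothetical sketch: pick a DVFS operating point that meets a deadline
# with minimum energy. Points and the capacitance model are invented.

OPERATING_POINTS = [
    # (freq_MHz, voltage_V), sorted slow -> fast
    (300, 0.9),
    (600, 1.0),
    (1000, 1.2),
]

def energy_mj(cycles, volts, cap_nf=1.0):
    """E ~ C * V^2 * cycles (switched-capacitance model), in mJ."""
    return cap_nf * volts ** 2 * cycles * 1e-6

def pick_operating_point(cycles, deadline_ms):
    """Slowest point whose runtime fits the deadline minimizes energy,
    because lower frequency allows lower voltage."""
    for freq_mhz, volts in OPERATING_POINTS:
        runtime_ms = cycles / (freq_mhz * 1e3)  # MHz = cycles per us
        if runtime_ms <= deadline_ms:
            return freq_mhz, volts, energy_mj(cycles, volts)
    raise ValueError("no operating point meets the deadline")

# A 600k-cycle task with a 1.5 ms deadline: 300 MHz is too slow (2 ms),
# so the sketch settles on 600 MHz at 1.0 V.
print(pick_operating_point(600_000, 1.5))
```

This is the kind of decision hardware engineers make routinely, and the quote's point is that making it automatically on behalf of application code requires the compiler to know both the deadline (from the software) and the operating-point table (from the hardware).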
