Experts At The Table: Making Software More Energy-Efficient

First of three parts: Understanding the differences between software and hardware; what the software engineer sees and can do about it; when to tackle the problem; what’s missing.


By Ed Sperling
Low-Power Engineering sat down to discuss software and power with Adam Kaiser, Nucleus RTOS architect at Mentor Graphics; Pete Hardee, marketing director at Cadence; Chris Rowen, CTO of Tensilica; Vic Kulkarni, senior vice president and general manager of Apache Design, and Bill Neifert, CTO of Carbon Design Systems. What follows are excerpts of that conversation.

LPE: Software is causing as many problems in low-power designs as anything else. How do we fix that?
Neifert: Software is creating problems everywhere because software is driving everything. Customers are using software for more and more stuff, including software-driven verification and architectural analysis. It’s only natural to take traditional back-end tasks such as power and move those forward to enable that analysis to be done earlier in the process and see how the software will impact the system. As software increasingly dominates power usage, because the processor and IP guys are getting smarter and smarter about turning things off when they’re not being used, you can’t just blindly use some verification vectors. You need to really see how it’s being used by the software. That requires virtual prototypes in conjunction with power tools.
Hardee: The software is controlling everything, but in recent years we’ve seen an explosion of applications doing all kinds of things you want to do on mobile devices. One thing we see people struggling with is the best way of optimizing for power for different applications, which have different needs. If you’re watching video, you have a frame rate to deal with. Rather than shut down, you’ll probably want to optimize frame lengths or run as slow as possible and make sure memory transactions are being accessed from cache. That’s completely different when you’re on the Web, where you want to run as fast as possible to get a good image and shut down. What you’re actually doing with a device will require very different power strategies.
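One way to picture the two strategies Hardee contrasts is as a per-application frequency policy. The sketch below is illustrative only; the mode names, frequency values and platform hooks (set_cpu_frequency_khz, enter_low_power_idle) are hypothetical stand-ins, not any real driver API.

```c
/* Sketch of per-application power strategies (hypothetical platform hooks). */
#include <stdio.h>
#include <stdint.h>

enum app_mode { MODE_VIDEO_PLAYBACK, MODE_WEB_BROWSE };

#define MIN_FREQ_KHZ  200000u   /* slowest clock that still meets the frame deadline */
#define MAX_FREQ_KHZ 1600000u   /* full speed for "race to idle" */

/* Hypothetical platform hooks, stubbed so the sketch compiles. */
static void set_cpu_frequency_khz(uint32_t khz) { printf("freq -> %u kHz\n", khz); }
static void enter_low_power_idle(void)          { printf("entering idle\n"); }

static void apply_power_strategy(enum app_mode mode)
{
    switch (mode) {
    case MODE_VIDEO_PLAYBACK:
        /* Fixed frame rate: run as slowly as possible and keep data in cache,
         * rather than sprinting and shutting down. */
        set_cpu_frequency_khz(MIN_FREQ_KHZ);
        break;
    case MODE_WEB_BROWSE:
        /* Bursty work: render the page as fast as possible, then shut down. */
        set_cpu_frequency_khz(MAX_FREQ_KHZ);
        break;
    }
}

int main(void)
{
    apply_power_strategy(MODE_VIDEO_PLAYBACK);
    apply_power_strategy(MODE_WEB_BROWSE);
    enter_low_power_idle();   /* sleep until the next frame tick or user event */
    return 0;
}
```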

LPE: So how do you analyze that?
Hardee: You have to run a lot of cycles in the various system modes to be able to model it. That’s where it really gets very disconnected from the test vectors in logic simulation. You can’t use those anymore. You have to have a set of vectors that is representative of the various application modes you have on the device. That’s a big change for a lot of people.
Kaiser: If you’re going to have software guys doing power optimization, you need to get them some kind of metric to measure or estimate the power. If you go to a software engineer today and ask them to optimize power, how do they know if they’re doing better or worse? Giving them a power meter helps. Giving them a power meter that self-calibrates helps a lot more, because the software guy really doesn’t know what calibration is for A-to-Ds. He sees negative current and figures he must be doing something wrong because he’s charging the battery. First, give these engineers something to measure. Second, when you’re doing a product you have to identify all these different use cases. Internet surfing is one. Video playback is another. Software teams need to optimize individual use cases. But to do that, you have to figure out how much of what you’re going to be doing. A cell phone is sitting in idle 99% of the time, but 99% of the energy is not used on standby. So you have to look at the different use cases and figure out which use case you really want to optimize.
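A rough version of the meter Kaiser describes is a loop that samples a board-level power monitor and integrates to energy for one use-case run. Everything below is a sketch: read_power_mw() stands in for whatever self-calibrating current-sense hardware the board actually provides, and the sample period and use-case name are arbitrary.

```c
/* Minimal per-use-case energy meter sketch (hypothetical power-monitor hook). */
#include <stdio.h>
#include <stdint.h>

/* Hypothetical hook: returns instantaneous board power in milliwatts,
 * e.g. from a self-calibrating current-sense ADC. Stubbed here. */
static uint32_t read_power_mw(void) { return 850u; }

#define SAMPLE_PERIOD_MS 10u

/* Integrate power over a fixed-length run of one use case. */
static uint64_t measure_energy_uj(uint32_t duration_ms)
{
    uint64_t energy_uj = 0;
    for (uint32_t t = 0; t < duration_ms; t += SAMPLE_PERIOD_MS) {
        /* energy += P * dt  (mW * ms = microjoules) */
        energy_uj += (uint64_t)read_power_mw() * SAMPLE_PERIOD_MS;
        /* in a real port: sleep until the next sample period */
    }
    return energy_uj;
}

int main(void)
{
    /* Compare builds or algorithms per use case: the absolute number matters
     * less than whether the trend is better or worse from run to run. */
    printf("video playback: %llu uJ over 5 s\n",
           (unsigned long long)measure_energy_uj(5000u));
    return 0;
}
```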
Kulkarni: What we’re really concerned with is energy consumption over time. Instantaneous power, dynamic power and static power are well known, but energy consumption over time is where software enters the picture and turns it into system-driven power consumption analysis. If you look at power times time, it’s the clock frequency and the duration of the cycle. That’s where most of the applications are causing all these headaches. Why does a GPS application versus YouTube versus music have such different energy profiles? We can control instantaneous power and dynamic power quite well at RTL and below. Static power can be controlled quite well. But testbenches that look at the overall functional verification are not relevant anymore because you need to look at the states, too. And depending on which application is used, the energy will be different. We need to instrument that with respect to the power states. How you create the right test vector set or testbenches becomes a real challenge. Then there’s looking at average power, and average voltage versus average current, over time. That’s where the issue of co-simulation comes in. Running the complete simulation on a virtual platform becomes more interesting. At the moment, instrumenting the software is not possible.
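Kulkarni’s point about instrumenting with respect to power states can be sketched as simple residency bookkeeping: accumulate how long the system spends in each state and multiply by a characterized power figure for that state. The state list and power numbers below are made-up placeholders, not data from any real device.

```c
/* Per-power-state energy accounting sketch (placeholder power numbers). */
#include <stdio.h>
#include <stdint.h>

enum pwr_state { STATE_ACTIVE, STATE_IDLE, STATE_SLEEP, NUM_STATES };

/* Characterized average power per state, in milliwatts (illustrative values). */
static const uint32_t state_power_mw[NUM_STATES] = { 900u, 120u, 3u };

/* Accumulated residency per state, in milliseconds. */
static uint64_t state_time_ms[NUM_STATES];

/* Call from the scheduler/idle hooks whenever the power state changes. */
static void account_state_residency(enum pwr_state s, uint64_t dwell_ms)
{
    state_time_ms[s] += dwell_ms;
}

/* Energy for the whole use case: sum of P_state * t_state, in microjoules. */
static uint64_t total_energy_uj(void)
{
    uint64_t e = 0;
    for (int s = 0; s < NUM_STATES; s++)
        e += (uint64_t)state_power_mw[s] * state_time_ms[s];
    return e;
}

int main(void)
{
    /* Example: active only a few percent of the time, yet most of the energy. */
    account_state_residency(STATE_ACTIVE, 2000);    /*  2 s active   */
    account_state_residency(STATE_IDLE,   8000);    /*  8 s idle     */
    account_state_residency(STATE_SLEEP, 50000);    /* 50 s sleeping */
    printf("use-case energy: %llu uJ\n", (unsigned long long)total_energy_uj());
    return 0;
}
```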
Rowen: This is a serious problem. One problem is that the software guy has a simplified view of all the clever things the hardware guy did to make it possible to reduce the power. The hardware guys are obsessed with power these days. Not all of what they come up with is good or practical, but they are at least thinking about it. They have power modes, and techniques for certain operations or instructions to meet power requirements. That may be pretty far removed from what the software guy, who’s on the front lines of getting a product out the door, has visibility into. It’s made worse by the fact that most of the tools people have work well in a simulation environment, but often what happens at the end of the day is you have a programmer, a prototype board and an ammeter. It’s a very crude picture of what’s going on. The poor software guy is trying to figure out what he can do overnight that will move the ammeter.

LPE: We have no standard gauge for software. It varies by application, middleware and operating system as well as by the usage from one person to the next. How do we deal with this?
Hardee: If you look at the need for system vectors that reflect the application you’re running, that becomes a problem when you’re trying to provide a power meter for the software engineers. That works well if you can run actual software against the target device, which is the only case where you can run it fast and get accurate readings. You need accurate vectors and you need accurate characterization to get any sense of power or energy. The difference between power and energy is that you need to know over time what the system is doing and model that correctly. Virtual platforms have some potential to help, but they’re problematic. If you’ve got the kind of virtual platform that runs fast enough to make a software engineer happy, you’ll be modeling at an untimed level. At the untimed level the virtual platform is instruction accurate, so it’s getting its timing from the processor’s instruction cycles. If you think about what’s going on with the software, the choices in how to process things come down to whether you can do it from cache or have to go out and fetch it from some other level of memory. Those choices have a huge impact on the number of clock cycles, rather than instruction cycles, that it takes to perform those tasks. So the point is that you need a lot of timing accuracy before you can get any kind of energy accuracy. That’s difficult to build into a virtual platform.
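A back-of-the-envelope model shows why instruction-accurate counts alone are not enough: the same instruction stream costs very different cycle counts and energy depending on how many memory accesses hit the cache. All of the latency and energy figures below are placeholders chosen only to illustrate the effect.

```c
/* Why cache behavior dominates: same instruction count, different cycles/energy.
 * All latency and energy figures are illustrative placeholders. */
#include <stdio.h>
#include <stdint.h>

#define CYCLES_PER_INSTR     1u      /* ideal, instruction-accurate view     */
#define CACHE_HIT_CYCLES     4u      /* extra cycles per cached access       */
#define MEM_MISS_CYCLES    120u      /* extra cycles per off-chip access     */
#define PJ_PER_CYCLE        30u      /* core energy per cycle (picojoules)   */
#define PJ_PER_MISS       1500u      /* DRAM access energy (picojoules)      */

static void estimate(const char *label, uint64_t instrs,
                     uint64_t accesses, double miss_rate)
{
    uint64_t misses = (uint64_t)(accesses * miss_rate);
    uint64_t hits   = accesses - misses;
    uint64_t cycles = instrs * CYCLES_PER_INSTR
                    + hits * CACHE_HIT_CYCLES
                    + misses * MEM_MISS_CYCLES;
    uint64_t energy_pj = cycles * PJ_PER_CYCLE + misses * PJ_PER_MISS;
    printf("%s: %llu cycles, %llu pJ\n", label,
           (unsigned long long)cycles, (unsigned long long)energy_pj);
}

int main(void)
{
    /* Identical instruction counts, very different cycle and energy totals. */
    estimate("cache-friendly", 1000000, 300000, 0.02);
    estimate("cache-hostile ", 1000000, 300000, 0.30);
    return 0;
}
```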
Kaiser: You don’t need the actual numbers. You just need to know if it’s getting better or worse. You can give the software team a relative number. Second, you can start doing estimations. With an MP3 you want to know what your cache-miss ratio is.
Hardee: You need to start measuring the things that you know drive high energy usage, as opposed to measuring the energy usage itself. When there’s a 400% or 500% difference between cache and memory accesses, it’s hard to rank different algorithms in the right order. You don’t even have the relative accuracy you’re looking for.
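In that spirit, a proxy metric can be as simple as a cache-miss ratio read around a workload, as in Kaiser’s MP3 example. The counter-reading functions below are hypothetical placeholders for whatever performance-monitor access a given platform provides, stubbed so the sketch compiles.

```c
/* Relative "better or worse" proxy: cache-miss ratio around a workload.
 * read_pmu_accesses()/read_pmu_misses() are hypothetical placeholders. */
#include <stdio.h>
#include <stdint.h>

/* Stub counters so the sketch runs; a real port would read the PMU instead. */
static uint64_t fake_accesses, fake_misses;
static uint64_t read_pmu_accesses(void) { return fake_accesses; }
static uint64_t read_pmu_misses(void)   { return fake_misses;   }

/* Stand-in workload: pretend to decode one MP3 frame. */
static void decode_one_mp3_frame(void)
{
    fake_accesses += 100000;
    fake_misses   += 2500;
}

/* Miss ratio across a single workload run. */
static double miss_ratio_of(void (*workload)(void))
{
    uint64_t a0 = read_pmu_accesses(), m0 = read_pmu_misses();
    workload();
    uint64_t a1 = read_pmu_accesses(), m1 = read_pmu_misses();
    return (double)(m1 - m0) / (double)(a1 - a0);
}

int main(void)
{
    printf("cache-miss ratio: %.2f%%\n", 100.0 * miss_ratio_of(decode_one_mp3_frame));
    return 0;
}
```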
Kaiser: So are you looking at platform-to-platform comparisons? I’m thinking you take the platform and get the software guy to make it as best as it can be.
Hardee: You’re coming at it from the standpoint of post-hardware. How does the software guy optimize his programs?
Kaiser: Or even if you don’t have the physical hardware yet.
Hardee: I’m approaching it from the view of the system architect designing a new system. How do they know they’re going to meet the power spec? If you’re rendering graphics versus video, you have to be running the right algorithm on the right core. There are multiple choices, and you have to figure this out even before you measure things relatively, let alone absolutely.
Kaiser: That’s a system architect challenge, not a software challenge. The most the system guy can do is identify the best possible scenario. The software guy may or may not come close. Sometimes it may not even be possible.
Hardee: And you can really mess up the software. The system architect does have to make an assumption that what he’s building into the system architecture will be used efficiently by the software guys.
Rowen: You really want someone who has a deep understanding of what instructions to use, what compiler flags and power modes should be used, and what is the realistic scenario that will contribute to the worst case. In general, things are better when more things are programmable. The worst thing is where the controls are inside some obscure, hard-wired function unit. We had a big customer recently that had trouble meeting power goals. The fact that they were using programmable audio made it a lot easier to come up with another way to buffer the data and initiate the applications.
Neifert: When you have the chip, at least you have an ammeter sitting there. Before you have the chip, the software guy is in the dark. He often doesn’t have any indication of what’s going on in the system from a power perspective because there isn’t much there to tell him that.


