Experts At The Table: Improving The Efficiency Of Software

First of three parts: What tools are needed; why it’s taking so long to fix software; legacy vs. significant changes; pessimism and optimism ahead.


By Ed Sperling
Low-Power/High-Performance Design sat down to talk about how to write better software with Jan Rabaey, Donald O. Pederson Distinguished Professor at the University of California at Berkeley; Barry Pangrle, solutions architect for low-power design and verification at Mentor Graphics; Emily Shriver, research scientist at Intel; Alan Gibbons, principal engineer at Synopsys; and David Greenhill, director of design technology and EDA at Texas Instruments. What follows are excerpts of that conversation. The discussion was held in front of a live audience at the Design Automation Conference.

LPHP: What do we need to write more energy-efficient software?
Gibbons: Energy efficiency has to be a consideration for software development. Until it is, I don’t think we’ll see people developing energy-efficient software. But we do need quality metrics, especially for the mobile space.
Shriver: There’s a profile you can run on your hardware design to measure performance and power. We need that same kind of technology for the software. At each software check-in there needs to be a power number.
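Shriver’s idea of a power number at every check-in maps naturally onto a continuous-integration gate. Below is a minimal sketch in Python of what such a gate might look like; the file names, the `energy_mj` field and the 5% threshold are all hypothetical and would come from whatever power-measurement flow a team actually runs on its benchmark workloads.

```python
#!/usr/bin/env python3
"""Hypothetical per-check-in power gate: compare a measured energy figure
against a stored baseline and fail the build on a regression."""
import json
import sys

THRESHOLD = 0.05  # allow up to a 5% energy regression (illustrative value)

def load_energy_mj(path):
    # Assumes an earlier power-profiling step wrote {"energy_mj": <float>}
    with open(path) as f:
        return float(json.load(f)["energy_mj"])

def main():
    baseline = load_energy_mj("baseline_power.json")  # from the last accepted check-in
    current = load_energy_mj("current_power.json")    # from this check-in's benchmark run
    regression = (current - baseline) / baseline
    print(f"baseline={baseline:.1f} mJ  current={current:.1f} mJ  delta={regression:+.1%}")
    if regression > THRESHOLD:
        print("FAIL: energy regression exceeds threshold")
        sys.exit(1)  # non-zero exit makes the CI job fail
    print("PASS")

if __name__ == "__main__":
    main()
```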
Rabaey: The software developers are so far away from a hardware implementation that they don’t even realize what it means to be energy-efficient. What you expect from a software perspective is an app that runs on one of your mobile devices. There are certain things you expect and a certain performance. But there’s also a user expectation that you’re not going to burn through your battery. The dialog that has to take place on the software side is, ‘This is what I need in terms of resources.’ On the hardware side, there needs to be pushback that says, ‘You will only get this much, because otherwise you’re going to use up all the resources.’ After that it’s a negotiation. That will enable the dynamic. Without that, energy efficiency won’t happen. We’ve been talking about this for 10 years and I haven’t seen any change.
Greenhill: There is clearly a huge opportunity, though.
Gibbons: Ten years ago customers didn’t care about power, as long as they hit 200MHz or whatever clock speed they needed. It was performance, performance, performance. There will come a time when battery life will be a far more important metric on devices like a smartphone than it is today. I hope it will change. You can optimize hardware and employ every technique under the sun to make sure that when a CPU is shut down the residual leakage is extremely small. But if the software never shuts it down, it doesn’t matter. There will have to be some change.
Pangrle: There are a number of techniques in the operating system to help with that, even looking at routines for DVFS (dynamic voltage and frequency scaling) and how that impacts the scheduling. But if you look back a couple of decades, when we went from schematic capture to RTL, it didn’t happen overnight. If you never had an RTL design team before, who was going to do that? We’re starting to see a transformation in the industry. A lot of semiconductor companies today have more software engineers on staff than hardware engineers. That’s a big resource in terms of outlays and investment from companies. They want to make sure those engineers are being used in a productive manner. Somebody is going to have to be in charge of it. We have one customer that was working with Linux, and it just wouldn’t go into the lower power state. They had a terminal up and the cursor was blinking. It had to stay awake to blink the cursor on the screen. Which software engineer is the guy who checks that?
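The DVFS hooks Pangrle mentions are visible to software through the Linux cpufreq interface. Here is a minimal sketch, assuming a Linux system with sysfs cpufreq support, that reports each core’s current governor and operating frequency; it only illustrates where the OS-level knobs live, not any particular vendor’s power-management policy.

```python
#!/usr/bin/env python3
"""Read Linux cpufreq (DVFS) settings via sysfs.
Assumes a Linux kernel with cpufreq enabled; exact paths may vary by platform."""
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
    cpufreq = os.path.join(cpu_dir, "cpufreq")
    if not os.path.isdir(cpufreq):
        continue
    governor = read(os.path.join(cpufreq, "scaling_governor"))  # e.g. "ondemand", "schedutil"
    cur_khz = read(os.path.join(cpufreq, "scaling_cur_freq"))   # current DVFS operating point
    max_khz = read(os.path.join(cpufreq, "scaling_max_freq"))
    print(f"{os.path.basename(cpu_dir)}: governor={governor} cur={cur_khz} kHz max={max_khz} kHz")
```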

LPHP: What part of a smart phone draws the greatest power?
Gibbons: It really depends on how you’re using the device. There are all these different use cases. In some cases you’ll be doing MP3 playback and the rest of the device is off, optimized for audio playback. Then there are other use cases where you may be doing graphics, so it’s running at full power, and you may be downloading something off the Web at the same time. That’s a completely different set of dynamics. Sometimes you’re running at full power; in other cases it may be 20% power.

LPHP: What’s going to really change things?
Rabaey: If you really want to make a breakthrough in power reduction, we need to step away from some of the assumptions we’re making about how implementation should be done. If you look at all the devices, what you care about is the data. Computation is an afterthought. The computation may be far more effective in certain power states. Maybe you should move instructions and not data. This obviously is not going to change in the next 10 years. There is a lot of legacy out there, and we have to consider that. But from an academic perspective, there absolutely will be changes here. That’s the only way to break the logjam.
Greenhill: The industry is moving away from running everything on a general-purpose ARM core, which is the least-efficient way of running anything. If you look at OMAP, it’s a collection of hardware engines dedicated to different tasks. The ARM cores are there for a variety of reasons: (1) algorithms that don’t map onto other hardware, (2) software reasons, where it’s simpler to compile code to a general-purpose processor, and (3) running OS functions. Having said that, we definitely don’t want to run more on ARM. Anything that is power-critical should, as much as possible, be mapped to other, more efficient hardware. Audio and video codecs and graphics are good examples of these types of hardware. I expect over time we’ll see more, as soon as research gives us other power-efficient architectures.

LPHP: How do we break down the silos between hardware and software teams and improve communication between them?
Rabaey: There are multiple layers in the design abstraction process. There is technology, there are circuits, there are architectures and applications. What has happened over the years in the processor world, the mobile world and the interconnect world is that we have redefined interfaces and kept them purely functional. You want to make sure everything functions across the layers. But you have to have physical information going up and down, and that very much relies on the interfaces. Some companies have decided that horizontalization is a bad idea, and that you have to go vertical again. This is the Apple model. You have to control the universe to manage between the layers. That’s a good model for a while, but it’s not a scalable model. You only get scalability from a horizontal model. What they’ve decided, though, is that you have to have physical information annotated, both in terms of performance and data resources like energy.
Pangrle: It’s becoming a marginalization problem. When change has to happen in the industry with a vertical model, you change it within one company. But once the industry figures out how that all works, it opens it up horizontally again.


