Experts At The Table: Improving The Efficiency Of Software

Last of three parts: Defining cores; communication between hardware and software teams; the need for power models; the impact of third-party IP.

By Ed Sperling
Low-Power/High-Performance Design sat down to talk about how to write better software with Jan Rabaey, Donald O. Pederson Distinguished Professor at the University of California at Berkeley; Barry Pangrle, solutions architect for low-power design and verification at Mentor Graphics; Emily Shriver, research scientist at Intel; Alan Gibbons, principal engineer at Synopsys; and David Greenhill, director of design technology and EDA at Texas Instruments. What follows are excerpts of that conversation. The discussion was held in front of a live audience at the Design Automation Conference.

LPHP: We have more and more cores in devices. Can they be modeled correctly to actually reduce power?
Gibbons: Part of the problem is we keep redefining what a core is. If you look at a core today, it’s a subsystem. It’s a core with memory interfaces. You have to be able to model it, and if you can’t model it you can’t move to system-level design with the core.

LPHP: Is there a communication problem between various teams?
Greenhill: Yes. People talk different languages. The software people think about things very differently from how the hardware people think about things. That makes it harder to optimize across all the different levels of abstraction. We try to nurture the people who have that shared vision across the different levels of abstraction and can talk to each other and understand each other. Those are really valuable people, and they’re key to coming up with a very good design.
Shriver: The advantage of having software and hardware separate is that you’re using a divide-and-conquer approach where each side is going off and solving complexity for a bit. Then there is a point where there is an advantage to having them work together. Maybe tools can help so the software teams can get more feedback as to what’s going on in hardware. But the software team also is just trying to get the software functional. There are different vectors we’re dealing with there.

LPHP: How important is it to have power models up front?
Rabaey: It’s always a good idea to understand your problem in advance. Building these power models is going to help in the major decision-making process. But power is a dynamic entity that depends on the environment you’re in and how you use the device, so understanding and building in all of those scenarios up front is going to be very hard. If I’m walking on the street trying to figure out where I should do my computation, locally or somewhere remote, the answer is going to be very different depending on the situation. Even so, it’s always worth investing in power management.
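The trade-off Rabaey describes can be sketched as a simple energy comparison. The function names and numbers below are illustrative assumptions, not a real power model; the point is only that the local-versus-remote answer flips with the scenario.

```python
# A minimal sketch of the local-vs-remote decision, with invented energy numbers.

def local_energy_j(cycles, energy_per_cycle_j=0.5e-9):
    """Energy to run the workload on the local core (assumed cost per cycle)."""
    return cycles * energy_per_cycle_j

def offload_energy_j(payload_bytes, energy_per_byte_j=100e-9, radio_wakeup_j=5e-3):
    """Energy to wake the radio and ship the data to a remote server."""
    return radio_wakeup_j + payload_bytes * energy_per_byte_j

def should_offload(cycles, payload_bytes):
    """Offload only when shipping the data costs less than computing locally."""
    return offload_energy_j(payload_bytes) < local_energy_j(cycles)

# A heavy computation with a small payload favors offloading; a light
# computation with a large payload favors staying local.
print(should_offload(cycles=50_000_000, payload_bytes=2_000))   # True
print(should_offload(cycles=100_000, payload_bytes=500_000))    # False
```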

LPHP: We have lots of power management features in devices, but how do we actually verify that power management is working?
Greenhill: That’s an extremely complex question with multiple layers. We find that once you get to the system level, you do a lot of verification and the hardware doesn’t interact with the software quite the way you expected it to. There is weird behavior, so it becomes a software debug problem. Often we find there are significant improvements just from getting the software right. If you get all the switches set the correct way and in the right mode, that makes a huge difference. If you develop the hardware and the software together, it becomes a system debug kind of operation.
Shriver: Prototypes are a way of modeling the software and the hardware together. Then you can run your power management from a functional perspective and check that the design is going into a low-power state, that you turned off the register you thought you did, that it went off at the right time, and that you don’t get any deadlocks or livelocks. But it’s still not an easy effort. You have teams working to make this happen.
Greenhill: I agree with that. You do all these prototypes and they only get you so far. You don’t get to run all the software and try everything. It takes too long.
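A hedged sketch of the kind of functional check Shriver outlines appears below. The trace format, event names, and timing limit are invented for illustration; real flows use the design’s own power states and assertions.

```python
# Toy checker over a simulated power-state trace: did the domain actually gate
# off after a sleep request, in time, with no activity while shutdown was pending?

def check_sleep_entry(trace, deadline_ns=10_000):
    request_time = None
    for time_ns, name, domain in trace:      # each event: (time_ns, name, domain)
        if name == "sleep_request":
            request_time = time_ns
        elif name == "domain_off" and request_time is not None:
            if time_ns - request_time > deadline_ns:
                return f"{domain} gated off too late ({time_ns - request_time} ns)"
            request_time = None
        elif name == "bus_activity" and request_time is not None:
            return f"activity on {domain} while shutdown was pending"
    if request_time is not None:
        return "sleep_request never followed by domain_off (possible hang)"
    return "ok"

trace = [(100, "sleep_request", "gpu"), (4_500, "domain_off", "gpu")]
print(check_sleep_entry(trace))              # ok
```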

LPHP: The addition of more third-party IP means we are adding in black boxes with software we don’t necessarily fully understand. What effect does that have on power and system-level efforts?
Pangrle: You’re heading back to vertical integration issues. You’re writing software and you don’t know what platform it’s going to run on. That makes it tough to do any kind of optimization from a power standpoint. There has to be some information passed across that layer to be efficient.
Rabaey: This is not a new problem. The automotive industry builds cars out of components from many different suppliers, and ultimately everything has to fit together very well or you’re going to have problems. The same thing holds here. It’s like avionics, where we have contract-based design. You write the contract very carefully. It’s a universally applicable model that can be used for the IP world as well. The IP has to deliver on exactly those terms, and you can count on it. This is essential if you have black-box IP.
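Rabaey’s contract idea can be made concrete with a small sketch. The fields, limits, and the usb_phy example below are hypothetical; an actual contract would come from the IP vendor and be checked against characterization data.

```python
# Hedged sketch of a power "contract" for a piece of black-box IP.
from dataclasses import dataclass

@dataclass
class PowerContract:
    leakage_uw_max: float         # worst-case leakage when clock-gated (uW)
    active_mw_per_mhz_max: float  # dynamic power density the block promises
    wake_latency_us_max: float    # time from power-up request to ready (us)

    def check(self, leakage_uw, mw_per_mhz, wake_us):
        """Return the contract terms that the measured values violate."""
        violations = []
        if leakage_uw > self.leakage_uw_max:
            violations.append("leakage")
        if mw_per_mhz > self.active_mw_per_mhz_max:
            violations.append("active power")
        if wake_us > self.wake_latency_us_max:
            violations.append("wake latency")
        return violations

usb_phy = PowerContract(leakage_uw_max=15.0, active_mw_per_mhz_max=0.08,
                        wake_latency_us_max=50.0)
print(usb_phy.check(leakage_uw=12.0, mw_per_mhz=0.09, wake_us=40.0))  # ['active power']
```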

LPHP: Can we ever move beyond just observing problems with software to really understanding these problems ahead of time?
Rabaey: Observability is important, and you can ultimately learn everything from it, but it’s a very expensive endeavor. That’s why you have to do up-front power modeling. Figuring out what you can do up front, and then fine-tuning later on, is the right approach. But you need information to do that: what are the interfaces, and what kind of information can you get from them?
Pangrle: Those interfaces are what allow you to go back and fix the problem.
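One possible shape for the interfaces the panel alludes to is a software-visible energy counter per power domain. The class, domain names, and numbers below are assumptions made purely for illustration.

```python
# Sketch of a per-domain energy telemetry readout that software could query.

class PowerTelemetry:
    def __init__(self, counters):
        self._counters = counters            # domain name -> accumulated microjoules

    def read_uj(self, domain):
        """Return the accumulated energy for one domain."""
        return self._counters[domain]

    def report(self):
        """Print each domain's share of total energy, largest first."""
        total = sum(self._counters.values())
        for domain, uj in sorted(self._counters.items(), key=lambda kv: -kv[1]):
            print(f"{domain:>8}: {uj:>10} uJ ({100 * uj / total:.1f}%)")

# With this kind of feedback the software team can see which domain its code is
# burning energy in, instead of only watching total battery drain.
PowerTelemetry({"cpu": 420_000, "gpu": 130_000, "radio": 950_000}).report()
```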


