Getting In the Ballpark

The challenges with power analysis may provide job security for system architects.

I admit it; I still have DAC on the brain. Even though attendance may not have been what the exhibitors would have liked to see, the conference is always a fantastic place to discuss ideas and pick up on trends. One topic I discussed with a number of folks was the set of challenges associated with design today: the power-performance balance, 3D stacking, new process nodes, and sheer complexity, to name a few.

Power analysis in particular is a fascinating area to me, given all of the variables that have to be considered.

Following up on that idea, I recently had an opportunity to talk about this very topic with Barry Pangrle, solutions architect for low power at Mentor Graphics, just after he attended the ISLPED conference.

Asked what engineers are struggling with in power analysis, he said, “I think you can look across the whole spectrum of power analysis – whether you’re starting at the system level all the way down to back-end types of things like IR drop analysis and that kind of capability. It’s kind of interesting because, especially just coming off of this conference, there were some people doing things with power gating, for example, and various games they were playing trying to look at other ways of controlling voltage on a power grid, as opposed to just having to use some sort of DC converter to handle that. Obviously the reason for doing that is largely cost. Having more power regulators typically drives up the cost, and a lot of times they are off chip, so that’s something else that has to sit on the board. If you could come up with a simpler scheme, maybe like a resistive grid or something – but obviously if you’re going to play that kind of game, you really need to have a good power analysis tool for your IR drop.”
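
To make the IR drop piece concrete: the voltage a block actually sees is the nominal supply minus the resistive drop across the grid segments feeding it, which is just Ohm's law applied repeatedly along the rail. Below is a minimal back-of-the-envelope sketch in Python; the rail resistance and tap currents are made-up illustrative numbers, not values from Pangrle, Mentor, or Apache.

    # Back-of-the-envelope IR drop along a single power rail.
    # All values here are illustrative assumptions, not from any real design.
    SUPPLY_V = 0.9            # nominal supply voltage (V)
    SEGMENT_R = 0.02          # resistance of each rail segment (ohms)
    TAP_CURRENTS = [0.15, 0.10, 0.20, 0.05]   # assumed current per block (A)

    voltage = SUPPLY_V
    remaining = sum(TAP_CURRENTS)
    for i, tap in enumerate(TAP_CURRENTS, start=1):
        # all downstream current still flows through this segment
        voltage -= remaining * SEGMENT_R
        print(f"block {i} sees {voltage:.3f} V")
        remaining -= tap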

In this case the users were running an Apache tool for this, Pangrle said, but even there they had to use a simplified model. “Even though they were doing a very low-level thing, they still had to abstract the model a little bit to try to get them in the ballpark. There are challenges all the way from the back end to the front end. Somebody who’s starting with a clean design from scratch – suppose I’m thinking about something I might turn into RTL – how do I even estimate where I’m going to be? Can I even get in the ballpark by the time I generate the RTL and actually synthesize it?”

Interestingly, with timing, since designs are typically built around synchronous models, a clock frequency is targeted for the design and the team then starts talking in terms of how many cycles it will take to do something, he observed. “From a power standpoint, it’s a lot fuzzier because a lot of it depends on activity, and I think one of the things people are really looking at is trying to get a better handle on what the expected activity is. If I’m doing a transaction, and it’s going to be pushing some data around and performing some operation on it, it’s really tracking what is happening with that data, how much activity it is generating, even looking at different ways of doing the computation on that data.”
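
That fuzziness is easy to see in the standard first-order estimate, where dynamic power is roughly the product of switching activity, switched capacitance, supply voltage squared, and clock frequency. Here is a tiny illustrative Python sketch; the capacitance, voltage, frequency, and activity factors are assumed numbers chosen only to show how strongly the estimate swings with activity.

    # First-order dynamic power estimate: P = alpha * C * Vdd^2 * f
    # All inputs are assumed, illustrative values.
    def dynamic_power(alpha, cap_farads, vdd, freq_hz):
        """Switching power in watts for a given activity factor alpha."""
        return alpha * cap_farads * vdd ** 2 * freq_hz

    # Same hypothetical design, two different activity assumptions.
    for alpha in (0.1, 0.3):
        p = dynamic_power(alpha, cap_farads=2e-9, vdd=0.9, freq_hz=1e9)
        print(f"alpha = {alpha}: {p * 1e3:.0f} mW")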

Pangrle believes that going forward there will be a lot more emphasis on things like caching schemes, because if the data is not next to the computation elements, a lot of energy can be wasted just moving data around without actually doing any useful computation on it. “Especially if you look at it from the standpoint of pushing the data off chip to external DRAM, the parasitics of going off chip are so much higher than if it stays on the chip that just pushing that data back and forth across those lines is going to eat up a lot of energy and cost a lot more power.”
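
To put rough numbers behind that point, here is a toy Python comparison of the energy to move one buffer within the chip versus out to external DRAM. The per-bit energies are assumed, order-of-magnitude placeholders (off-chip DRAM traffic is commonly cited as costing tens of times more energy per bit than an on-chip access), not figures from Pangrle or Mentor.

    # Toy comparison: energy to move a 1MB buffer on-chip vs. off-chip.
    # Per-bit energy figures are assumed placeholders, for illustration only.
    BITS = 1024 * 1024 * 8            # 1 MB buffer
    ON_CHIP_PJ_PER_BIT = 1.0          # assumed on-chip access energy (pJ/bit)
    OFF_CHIP_PJ_PER_BIT = 30.0        # assumed external DRAM energy (pJ/bit)

    on_chip_uj = BITS * ON_CHIP_PJ_PER_BIT / 1e6
    off_chip_uj = BITS * OFF_CHIP_PJ_PER_BIT / 1e6
    print(f"on-chip : {on_chip_uj:.1f} uJ")
    print(f"off-chip: {off_chip_uj:.1f} uJ ({off_chip_uj / on_chip_uj:.0f}x more)")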

And this type of analysis opens up opportunities from an architectural perspective.

“We’ve entered this regime especially since around 2004. If you look at x86 processors, clock speeds topped out in that 3GHz kind of range and we’ve kind of been stuck there. There’s a lot of work and research that went on from an architecture standpoint, and it didn’t get implemented as often because every year there was another single-threaded processor coming out that was twice as fast as the previous one. Once that game kind of stopped, you started seeing multicore, multiprocessing chips…it’s a reality. Now you look and it’s, ‘Oh my gosh, everybody’s got multiple cores on their chip,’” he recalled.

“With the power part of it now, I think it should make the architects happy again, because it’s something that’s going to really put more value on architectural innovation and really thinking about the way you’re accomplishing that functionality, as opposed to just having this fast, general-purpose processor that you can throw any problem at,” Pangrle added.

And that’s what you call job security for the architects.

–Ann Steffora Mutschler


