Second of three parts: Who’s in charge of design teams; challenges in developing good power models; how many abstraction layers are necessary.
By Ed Sperling
Low-Power/High-Performance Design sat down to talk about how to write better software with Jan Rabaey, Donald O. Pederson Distinguished Professor at the University of California at Berkeley; Barry Pangrle, solutions architect for low-power design and verification at Mentor Graphics; Emily Shriver, research scientist at Intel; Alan Gibbons, principal engineer at Synopsys; and David Greenhill, director of design technology and EDA at Texas Instruments. What follows are excerpts of that conversation. The discussion was held in front of a live audience at the Design Automation Conference.
LPHP: Who should be in charge of SoC engineering teams? Should it be the software or hardware side?
Shriver: None of the above. If it’s a product you’re selling, then it should be a product person in charge of both of those teams.
Greenhill: I agree with that. A lot of the requirements have to get driven back from all the different applications, and each has its own set of requirements. Usually you have to design a chip before the software can run on it, so you tend to start out with the hardware design. But we find that if you have a good dialog between the hardware architect and the software architect early on, it works better. It's something you need to be very cognizant of. But frankly I don't think it matters whether the person in charge comes from a hardware background, a software background or a business background. What matters is that they have a holistic view and a good vision for what the end product's requirements are.
LPHP: There’s been a lot of talk about power models. What difficulties do you encounter in developing a good power model?
Gibbons: What is a power model? Part of the problem is that we’re still trying to understand what we need to simulate. What should the content of the power model be? When do you need what type of information? The benefit of system simulation is that it can be fast. If we start littering it with unnecessary things then we will slow it down. If we want to win over the hearts and minds of software developers we have to give them automation that gives them the view they want without slowing them down or reducing their productivity. The challenge at the moment is understanding what a power model really is and how it morphs itself through the development process.
Greenhill: There is no single power model. A lot of the SPICE-level and gate-level power modeling that we've understood for a while now has to be carried up to a software view, where we can abstract away enough of the details to understand the effects of different operations. That's the really hard part at the moment. It's something the EDA industry hasn't fully addressed yet, which is why we've created our own internal models.
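One common shape for the software-facing view Greenhill describes is a table of per-state power numbers characterized from the gate or SPICE level, rolled up so software can estimate the cost of a sequence of operations. The minimal Python sketch below illustrates the idea; the block names and power values are hypothetical placeholders, not any particular vendor's model.

```python
# A minimal sketch of a software-visible power model: per-state average power
# characterized at lower levels, used to estimate energy for an operation trace.
# All names and numbers are hypothetical placeholders.

STATE_POWER_MW = {          # average power per block state (milliwatts)
    "cpu_active": 250.0,
    "cpu_idle":    15.0,
    "dsp_active": 120.0,
    "dsp_off":      0.5,    # leakage only
}

def energy_mj(trace):
    """Estimate energy (millijoules) for a trace of (state, seconds) pairs."""
    return sum(STATE_POWER_MW[state] * seconds for state, seconds in trace) / 1000.0

# Example: 2 ms of active CPU work followed by 8 ms of idle.
print(energy_mj([("cpu_active", 0.002), ("cpu_idle", 0.008)]))  # ~0.62 mJ
```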
Pangrle: We can start out characterizing at any layer, from SPICE up through standard cells. These are pretty well understood. It's all automated. If I want to buy a library these days, the power models come with it. I don't have to figure out how to characterize it. You can probably make an argument that even at the RTL level it's fuzzy. You need enough information to make the right technical tradeoffs. We're certainly working with customers to help them build those models, but there's room for automation there.
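The characterized library power models Pangrle refers to typically tabulate switching energy against input slew and output load, in the style of a Liberty lookup table. The sketch below shows what consuming such a table might look like; the table values and axes are invented for illustration.

```python
# A rough illustration of a characterized cell power model: internal switching
# energy tabulated against input slew and output load, with bilinear
# interpolation between table points. Values are made up for the sketch.

import bisect

SLEW_NS   = [0.05, 0.10, 0.20]           # input transition times (ns)
LOAD_FF   = [1.0, 5.0, 20.0]             # output capacitive loads (fF)
ENERGY_FJ = [                            # switching energy (fJ), rows = slew
    [1.2, 2.0, 4.5],
    [1.4, 2.3, 5.0],
    [1.9, 2.9, 6.1],
]

def _interp(axis, x):
    """Return bracketing indices and interpolation fraction along one axis."""
    i = max(1, min(bisect.bisect_left(axis, x), len(axis) - 1))
    t = (x - axis[i - 1]) / (axis[i] - axis[i - 1])
    return i - 1, i, max(0.0, min(1.0, t))

def switch_energy_fj(slew_ns, load_ff):
    """Bilinearly interpolate the energy-per-transition table."""
    r0, r1, tr = _interp(SLEW_NS, slew_ns)
    c0, c1, tc = _interp(LOAD_FF, load_ff)
    top = ENERGY_FJ[r0][c0] * (1 - tc) + ENERGY_FJ[r0][c1] * tc
    bot = ENERGY_FJ[r1][c0] * (1 - tc) + ENERGY_FJ[r1][c1] * tc
    return top * (1 - tr) + bot * tr

print(switch_energy_fj(0.08, 3.0))  # energy estimate between table points
```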
Rabaey: In 1994 one of my students published a paper at DAC on system-level power modeling tools. We still haven’t solved the system-level power-modeling problem. Why? Because there’s a fundamental difference between power and something like timing. Power depends very much on the environment. It’s a physical value. Trying to capture that is a very hard problem. It’s worse with variability. If you ask for the leakage number of a chip, it might be off by 200% or 300%. The data is variable, the environment is variable, the technology is variable. That’s why you’ll never have an accurate power model. But what you want to do is at least make some informed decisions at the architectural level. You need to use information dynamically. It’s a really hard problem and there isn’t an answer for it.
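Rabaey's point about leakage sensitivity follows from the physics: subthreshold leakage grows roughly as exp(-Vth / (n·kT/q)), so small shifts in threshold voltage or temperature multiply it several times over. The toy calculation below, using a simplified model and assumed numbers, shows how a modest process-and-temperature shift produces roughly a 3x swing.

```python
# A toy illustration of why a leakage estimate can be off by 200-300%:
# subthreshold leakage depends exponentially on threshold voltage and
# temperature. Simplified model and numbers, for illustration only.

from math import exp

def leakage_ratio(vth_v, temp_k, vth_ref=0.30, temp_ref=300.0, n=1.5):
    """Leakage relative to a reference corner, using a simple exp model."""
    kT_q = 8.617e-5 * temp_k            # thermal voltage kT/q in volts
    kT_q_ref = 8.617e-5 * temp_ref
    return exp(-vth_v / (n * kT_q)) / exp(-vth_ref / (n * kT_q_ref))

# Nominal corner vs. a slightly fast, warm corner: 20 mV lower Vth, 57 C.
print(leakage_ratio(0.30, 300.0))   # 1.0  (reference)
print(leakage_ratio(0.28, 330.0))   # roughly 3x the reference leakage
```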
LPHP: How many different layers of abstractions do we need, both now and in the future?
Shriver: Ideally one, and it would cover everything. But that’s not reality.
Pangrle: We’re going to have gate-level abstractions for transistors. Where you go above that is open.
LPHP: The EDA industry has been pushing into ESL, with mixed results. We’re now starting to add in software and power. Will this work?
Gibbons: If we do our job right, then there won't be a problem. If we make it cumbersome to use and don't provide enough benefit, then the answer is no. But if we can automate it the way we've done with RTL to GDSII, then we can realistically design at a higher level. That will require correlation that works at the silicon level.
Pangrle: Customers said they didn't need a lot of the ESL stuff we created in the past. It's different now. We're starting to hear them say they really need something, so there's an opportunity to provide value there. In terms of models, there are a couple of ways of looking at it: you can build them bottom-up or top-down. The advantage of bottom-up is that you've already got something you can characterize. On the other hand, if you look at what designers are doing now, it's typically keeping spreadsheets. How accurate is that? We at least have the opportunity to do better than that.
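The spreadsheet approach Pangrle mentions usually amounts to a per-block power budget scaled by expected activity and summed per use case. A small Python sketch of that kind of budget follows; the blocks and numbers are hypothetical placeholders.

```python
# A sketch of spreadsheet-style power budgeting: each block gets a
# characterized (or guessed) power number, scaled by an activity factor for a
# given use case, plus its leakage. Blocks and values are hypothetical.

BLOCKS = {
    # block:      (active power mW, leakage mW)
    "cpu_cluster": (300.0, 12.0),
    "gpu":         (450.0, 20.0),
    "modem":       (180.0,  8.0),
    "ddr_phy":     (120.0,  5.0),
}

def scenario_power_mw(activity):
    """Total power for a use case given per-block activity factors (0..1)."""
    total = 0.0
    for block, (active_mw, leak_mw) in BLOCKS.items():
        total += activity.get(block, 0.0) * active_mw + leak_mw
    return total

# "Video playback" use case: GPU busy, CPU lightly loaded, modem idle.
print(scenario_power_mw({"cpu_cluster": 0.2, "gpu": 0.7, "ddr_phy": 0.5}))
```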
Rabaey: No designer can handle more than about four components simultaneously at any time. If you have more elements you start making errors. This relates back to how many abstraction levels you have. You want to make sure each abstraction layer has a reasonable number of variables. That's why we have abstractions. It might be inefficient, but you need to wrap something around the components to deal with them. That's why standard-cell design was successful: it simplified design even though it created some inefficiency. The same thing will happen here. We have to be more aggressive. Hardware designers are in denial. The software engineers have moved on. They're thinking about the new languages that have emerged. They can build fairly complex software quickly with a small set of components. At the hardware level we're still toying around with Verilog and SystemVerilog. So what are the components you want to work with as a hardware designer? And what are the layers around them?
Greenhill: In theory, more abstraction is a good thing. You can work at a higher level, make architectural tradeoffs, and try things you might not be able to do otherwise. We've tried this, designing in parallel in Verilog and at a higher level of abstraction in SystemC. The C versions were coming out slower and bigger. It took a tremendous amount of effort and we couldn't get the quality of results. Engineering at the higher level actually took longer than it did in Verilog. I like the concept, but the reality isn't there yet.