Experts At The Table: Building A Better Mousetrap

First of three parts: Why architects are reluctant to take full advantage of the tools and models for slashing power.


Low-Power Design sat down with Richard Zarr, chief technologist for the PowerWise brand at National Semiconductor; Jon McDonald, technical marketing engineer in Mentor Graphics’ design creation business unit; Prasad Subramaniam, vice president of design technology at eSilicon; Steve Carlson, vice president of marketing at Cadence Design Systems; and David Allen, product director for power at Atrenta. What follows are excerpts of that conversation.

By Ed Sperling
LPD: What’s the big problem in low-power design?
Subramaniam: The biggest issue is that power has to be addressed at the architectural level. Most of the time people have their architectural design already finished and then they try to optimize power. They’re already too late in the game. They’re going to get some optimization, but it’s not enough.
Allen: It’s practically impossible to retrofit power domains into an existing design.
McDonald: That’s where the big gains are. You have to tune the hardware and software to the workload. Then you can save a tremendous amount of power. At the RTL level or below, the most you can hope for is about 20%. That’s stripping out gates. When you go process node to process node, you get a little bit of a savings, too. But the big gains are when you change the architecture and the way the system works. That’s where everyone’s struggling right now, because to get there you have to have good architectural models.
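
McDonald’s point lends itself to a back-of-envelope comparison. The Python sketch below, using purely illustrative numbers rather than figures cited by the panel, contrasts an RTL-level optimization that trims roughly 20% of dynamic power with an architectural change that lets the same block be shut off for the idle portion of a workload:

def average_power(dynamic_w, leakage_w, active_fraction, gated=False):
    """Average power of a block over a workload (all values are assumed).

    active_fraction: share of time the block does useful work.
    gated: if True, the block is powered and clocked only while active;
           otherwise it stays on and clocked the whole time.
    """
    if gated:
        return active_fraction * (dynamic_w + leakage_w)
    return dynamic_w + leakage_w

baseline      = average_power(dynamic_w=1.0, leakage_w=0.3, active_fraction=0.2)
rtl_trimmed   = average_power(dynamic_w=0.8, leakage_w=0.3, active_fraction=0.2)  # ~20% dynamic trim
architectural = average_power(dynamic_w=1.0, leakage_w=0.3, active_fraction=0.2, gated=True)

print(f"baseline:      {baseline:.2f} W")
print(f"RTL trimming:  {rtl_trimmed:.2f} W  ({1 - rtl_trimmed / baseline:.0%} saved)")
print(f"power gating:  {architectural:.2f} W  ({1 - architectural / baseline:.0%} saved)")

Even with generous assumptions for the RTL path, the workload-aware option saves several times more, which is the gap McDonald is describing.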

LPD: Is there resistance to that among design engineers?
Allen: There’s a big hurdle to get to your first power domain design. Chip designers in the 1980s and 1990s were trained to deal with global power. To get to the point where you have two power domains is a big hurdle. We have to help people get over that hurdle. Once you get to two domains, you can accept the risk and go to five or more domains and get these really big savings.
Subramaniam: Even at the architectural level people are reluctant to use multiple power domains in their design because they don’t want to complicate their system. They don’t want to have multiple voltage regulators. A chip already requires two voltages, one for the I/O and one for the core. They don’t want to go beyond that. We need to get that mindset changed.
Allen: That’s certainly a hurdle. But it’s possible to do a multiple power domain design where blocks switch off. Once you have multiple power domains, you can get huge power savings by turning off blocks in your design. That doesn’t complicate the board design, but it does complicate the chip physical design.
Zarr: There is always an issue of moving between domains, especially if there’s dynamic scaling. You need level shifting or isolation cells.
Subramaniam: Level shifters are designed to go one way, either low to high or high to low. You need to take extra pains to make sure level shifters can go in either direction. You may be operating on two power domains, and either one of these can go lower than the other. The level shifter has to be able to manage that.
Carlson: It’s still a small percentage of designs that are taking advantage of all the techniques that are available. There is the educational component and there’s also a risk component. People don’t understand how to do it, what the impact will be on the methodology, and when they start looking at the risk they’re not sure what they’re going to get in terms of a payoff. They know energy efficiency is becoming a competitive imperative, but time to market may be more important than battery life because they can re-spin and refine over time and take advantage of an LP process next time.

LPD: Still, isn’t the real key writing software differently to take advantage of individual cores?
Carlson: There’s a non-obvious tradeoff. There’s a 1,000-to-1 efficiency gain in the software versus the hardware. There are leakage issues, and all of this is non-obvious. It requires looking deep into your target process.
McDonald: There are two levels of problems. One is the details of the implementation—the level shifters and what it takes to actually implement the power domains and be able to shut things off. Then there’s the tradeoff of what that means. If you don’t really understand what your architecture needs to do, then it’s difficult to know what you can take advantage of and what the benefit is.
Carlson: Yes, what’s the overhead of the level shifters and the isolation logic and the power switches? What’s the reliability impact? And how does that compare to another strategy that might offer the same power? What’s the relative cost and the schedule difference?
Subramaniam: People are looking for a simple solution. One of the reasons multiple power domains are not generally used is it’s too complicated. The simpler the solution, the easier it is to implement, both in terms of physical design and architecture. People are trying to figure out what is the best simple solution that can optimize their power. Even though these techniques are available, people don’t seem to be using them because they’re just too hard to use.
Allen: There is a lot of risk. People would like to push clock gating to another level so they don’t need to go to multiple domains, but that approach runs out of steam.
Subramaniam: People are relying on the tools to do the job. Here’s the RTL, let the tools do the clock gating and manage the power.
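
Carlson’s overhead question above can be framed as a simple netting exercise: the savings from shutting a block off, minus the always-on cost of the isolation cells, level shifters, and power-switch network that make the shutoff possible. The sketch below uses assumed, illustrative values for those overheads:

# All values are assumed for illustration.
block_dynamic_w  = 0.50    # block dynamic power while active
block_leakage_w  = 0.20    # block leakage while powered
active_fraction  = 0.25    # share of time the block is actually working

iso_and_ls_w     = 0.01    # isolation cells + level shifters, always on
switch_leakage_w = 0.005   # power-switch network leakage while the block is off
switch_drop_pct  = 0.02    # extra dynamic power from IR drop across the switches

# Comparison strategy: leave the block powered (clock-gated while idle).
always_on = block_dynamic_w * active_fraction + block_leakage_w

# Power-gated strategy: the block leaks only while on; the overhead cells never turn off.
gated = (block_dynamic_w * (1 + switch_drop_pct) * active_fraction
         + block_leakage_w * active_fraction
         + switch_leakage_w * (1 - active_fraction)
         + iso_and_ls_w)

print(f"clock-gated only: {always_on * 1000:.0f} mW")
print(f"power-gated:      {gated * 1000:.0f} mW")
print(f"net saving:       {1 - gated / always_on:.0%}")

The point of such an exercise is not the absolute numbers but whether the net saving justifies the added schedule and reliability risk, which is exactly the comparison Carlson says teams struggle to make.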

LPD: Is there any model that can encompass these kinds of complexities?
Carlson: Yes. You can specify different power strategies across your chip and estimate the impact, both from an overhead and a power-savings standpoint. And you can do this very early in the process. You can address floor planning, different voltage levels, shutoffs, and frequency scaling, and you don’t need to be a power expert to figure out what the tradeoffs will be. At the TLM level, you have to start figuring out what effect this will have on the functional aspects and look at the contours of the dynamic voltage.
McDonald: But the static side does tie into that, as well. You need to know where you can shut it down.
Allen: When we work with some of the cell phone manufacturers, there’s a power architect, and that person uses a tool like this to make these tradeoffs.
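
The kind of early, spreadsheet-level estimation Carlson describes can be approximated with nothing more than C*V^2*f scaling per operating point. The following sketch is a minimal illustration under that assumption; the operating-point names, voltages, and capacitance are invented for the example, not output from any particular tool:

from dataclasses import dataclass

@dataclass
class OperatingPoint:
    name: str
    voltage_v: float
    freq_mhz: float

def dynamic_power_w(ceff_nf, op):
    # P_dyn = C_eff * V^2 * f  (capacitance in farads, frequency in hertz)
    return ceff_nf * 1e-9 * op.voltage_v ** 2 * op.freq_mhz * 1e6

def leakage_power_w(leak_at_nominal_w, op, nominal_v=1.0):
    # Crude assumption: leakage scales roughly with the supply voltage.
    return leak_at_nominal_w * op.voltage_v / nominal_v

CEFF_NF = 2.0             # assumed effective switched capacitance of the domain
LEAK_AT_NOMINAL_W = 0.05  # assumed leakage at the nominal voltage

for op in (OperatingPoint("turbo", 1.10, 800),
           OperatingPoint("nominal", 1.00, 600),
           OperatingPoint("eco", 0.85, 300)):
    total = dynamic_power_w(CEFF_NF, op) + leakage_power_w(LEAK_AT_NOMINAL_W, op)
    print(f"{op.name:8s} {op.voltage_v:.2f} V @ {op.freq_mhz:.0f} MHz -> {total:.2f} W")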

LPD: Let’s do a reality check. What do the chipmakers think?
Zarr: I work directly with our customers that implement our technology, which uses dynamic voltage scaling. The first issue we find is people don’t want to create voltage islands. They want one voltage and they want it simple. Once you get over that hurdle, the next problem is integration of a lot of third-party IP that’s not compliant with this type of architecture. One thing that we need to fix as an industry is to ensure IP blocks have the ability to sit in independent voltage islands. One of the big offenders is RAM, because RAM has a problem with retention at lower voltages. You always have to isolate RAM, and in most of the systems I see RAM is very important. Maybe it’s a standardization issue, or maybe it’s an option that companies provide in their IP.
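
Zarr’s RAM-retention point amounts to a floor on how far any island containing memory can be scaled. A minimal sketch of such a policy, with an assumed retention voltage and invented island names, might look like this:

RETENTION_V = 0.70   # assumed minimum voltage at which the RAM keeps its state

islands = {
    "cpu_core": {"has_ram": False, "requested_v": 0.60},
    "l2_cache": {"has_ram": True,  "requested_v": 0.60},
    "dsp":      {"has_ram": True,  "requested_v": 0.90},
}

def granted_voltage(island):
    """Clamp the requested supply to the RAM retention floor where needed."""
    if island["has_ram"]:
        return max(island["requested_v"], RETENTION_V)
    return island["requested_v"]

for name, island in islands.items():
    print(f"{name:9s} requested {island['requested_v']:.2f} V -> granted {granted_voltage(island):.2f} V")

This is why, in the systems Zarr describes, RAM ends up isolated on its own always-retained island rather than riding along with the logic it serves.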


