Experts At The Table: Who Pays For Low Power?

First of three parts: The cost of power; tradeoffs of power and performance; low-hanging fruit; blame the software; mobile vs. enterprise concerns; customization vs. general-purpose design.


By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss the cost of low power with Fadi Gebara, research staff member for IBM’s Austin Research Lab; David Pan, associate professor in the department of electrical and computer engineering at the University of Texas; Aveek Sarkar, vice president of product engineering and support at Apache Design; and Tim Whitfield, director of engineering for the Hsinchu Design Center at ARM. What follows are excerpts of that conversation.

LPHP: Who ends up picking up the cost of power in a design?
Gebara: When you add more constraints it affects the design cost. If you look at everything from the transistors up through the stack, the hardest job is for the final integrators. How much can they pack together, how much can they spread things out, can they get their floor plan to fit? It turns out you probably can trade power for just about anything, so it’s often a cheap way out of any trouble you might find yourself in. You can back off on the frequency if you need to drop the power. You can always make everybody happy with it.
Sarkar: We’re seeing a lot of integration. Even in an automotive chip, what used to be a whole board of chips is now a single chip. But low power is driving integration in a different way. For integration, you have to go to the higher end of the process technology. Most of our customers are looking at 20nm or 16nm, and the wafers have to be 300mm for any of these chips. The package technology hasn’t really changed too much in the past 10 years, but all of a sudden we’re now talking about 2.5D, 3D and package-on-package. Then there is the cooling system. We have to worry about how to cool some of these 16nm devices so they don’t burn out. The electromigration (EM) limits are 30% tighter. There are so many different aspects of cost, and we’re not even talking about the tools or the design infrastructure challenge.
Whitfield: It’s getting more and more difficult to get something back from any part of the design. We took all the free stuff a few years back. We have to start looking at the system level and at every part of the ecosystem. We have to look at the tools and at getting accurate analysis in the tools very early. You can’t save enough power anymore with a different CPU or by changing libraries. You have to extract power from every part of the system, and that’s where the difficulty is.
Pan: Low power is everywhere. There is high k, new process technology, 3D-ICs, and through the architecture and software—it’s definitely the ecosystem.
Whitfield: 3D-ICs are interesting. The cost model is something that doesn’t work at the moment, but we’re all saying, ‘Next year, next year, next year.’ We’ve said that for six or seven years. It comes down to cost and who’s going to pay for it. Some of the foundries have some flows and there are many interesting things going on, but to actually fit that cost model to mobile—I’m not sure who’s going to pay for that.
Sarkar: Some of the TV companies have started doing it in production. But no matter how much you save with the architecture, there are still problems with software. We have heard about two software packages with similar functionality, but one drains the battery twice as fast as the other.
Whitfield: That’s a really good point because as hardware designers we spend an awful lot of time trying to squeeze out the last milliwatt of power and then the software is inefficient.

LPHP: Is there any low-hanging fruit left?
Sarkar: We get into these discussions with customers about savings. Do you start at the architectural level, at RTL, or do you wait for the gate level? From our standpoint, RTL is a reasonable tradeoff because you have enough data and the library, and you can get the numbers. But has everything been squeezed out? We don’t think so. People haven’t had the tools so far. The designers have gone through every line of the CPU architecture, but they haven’t addressed power as aggressively as they could have. Power reduction isn’t just a push-button solution. We see more and more engineers who want to be button-pushers. There is a lot of opportunity to look at the data and make decisions about power.
Whitfield: People have to implement their systems effectively. The designers are probably doing a reasonable job. The foundries are doing a good job. And the tools are capable, even though they’re not always used properly. We definitely see some poor implementations of our IP before people get up to speed with it. It takes a couple of iterations before people really understand how it interacts in a low-power setting. That’s low-hanging fruit. The physics behind it, unfortunately, is thick.
Sarkar: We went from planar to finFET to something else, but the low-power mindset has to come into play, as well.

LPHP: So how far can we go?
Pan: You cannot talk about power just by itself. It’s power, performance, reliability and the process node. There is double and triple patterning. If your transistor is not ideal, then leakage may be too high. On top of that, people are talking about near-threshold computing where you have more transistors running at lower voltages. That offers a steep curve in terms of power savings.
Gebara: This is where it gets really interesting. ARM is very focused on power. But when you get into the enterprise space, it’s all about performance. Power is an afterthought. And it’s not until very recently, with the new mega-datacenters, that people are finding they can’t afford their electric bills anymore, or they need a small power plant next to them. So all the major processor makers are asking their customers whether they need the performance all the time. If the answer is no, then you can add in power management. That’s where you throttle back to run at certain frequencies. We need a certain amount of power in there because if a plug is pulled accidentally, a redundant power supply has to be able to provide equal performance. But why is that the case? Why can’t you run at a reduced amount of power until someone plugs the cord back in? A lot of the limits aren’t coming from the technology, because no one in the enterprise is willing to give up performance. They’re coming from how you use your equipment and from specialization. If you’re so sensitive to performance that you don’t want to give up any of it, but you can’t deal with your electric bill, then we can create a piece of hardware that does the job 10 times better than a generalized piece of hardware.

LPHP: Do you specialize the software to go along with that?
Gebara: Absolutely. We have a sub-corporation inside of IBM, and the way they start their job is to say, ‘Don’t use general-purpose computing. We will build a specialized piece of equipment that will offer the lowest power and highest performance.’ So they may create a machine optimized for SQL. It’s very fast and very efficient, but it doesn’t do anything else. Specialization will be some of the answer here.

LPHP: Who pays on the mobile side?
Whitfield: That comes back to ARM’s philosophy of partnership. We all pay for it in some way. We innovate in and around the low-power techniques. We were one of the first to do dynamic voltage and frequency scaling. We have big.LITTLE and the ecosystem around that. But then we still rely on our partners to take those ideas—and they have freedom to do something—so you amortize the cost across their implementation. We’re paying for the R&D to get these designs out. And then you have to layer in the process guys, because we’re working with them earlier and earlier. Previously we would take a mature PDK and stick it into the CPU. Now we’re co-designing microarchitectures with the process. We’re not paying for that process development, but we are investing engineering time and effort.

LPHP: So do we continue to the next process node, or do we do the same thing better at existing nodes?
Gebara: It’s entirely possible. Specialization will allow you to put the brakes on technology a little bit. If you’re going to do general-purpose chips, you have to drive the technology down to the lowest node possible. Once you get to the point where you know what you’re going to do and you’re not going to do anything else, then you can start looking for low-hanging fruit again and drive it down. The value proposition is performance, but there’s also a value proposition in power.