Experts At The Table: Who Pays For Low Power?

Last of three parts: Power vs. time to market; eking out lower power after tapeout; business concerns; near-threshold and sub-threshold computing; software; new materials; new approaches.


By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss the cost of low power with Fadi Gebara, research staff member for IBM’s Austin Research Lab; David Pan, associate professor in the department of electrical and computer engineering at the University of Texas; Aveek Sarkar, vice president of product engineering and support at Apache Design; and Tim Whitfield, director of engineering for the Hsinchu Design Center at ARM. What follows are excerpts of that conversation.

LPHP: Is the focus just on getting the chip out the door, or are companies willing to invest in power techniques to really save energy?
Whitfield: Historically, it used to be very expensive to do power shutdown. The library guys started to develop power shutdown and they made it easy. When there’s a new idea, not everyone can do everything right away. Even big.LITTLE is a complex system, and it takes time to educate people about how you build a system and develop software. You get a couple of people on the bleeding edge pushing that technology, and as it gets adopted, issues get resolved, tools come into the commercial sphere for software, and then everyone catches up. There is an upfront cost, and there are only so many people willing to pay that cost initially.
Gebara: The focus is on getting the chip out the door. If it was about squeezing every last drop out of a design, engineers would always find a way. There’s no impossible problem if money and time aren’t a problem. Power is the same way. At some point you have to say, ‘Enough,’ and save it for the next round. This is more of a business question rather than what engineers can do.

LPHP: It’s an iterative question, too, right? You might not get everything done in the first release.
Gebara: Absolutely. We’re all here to make money.
Whitfield: In the mobile space you have to hit a power budget, but that costs you money. The reason is that with every generation of cell phone you’re packing in more and more stuff. It gets harder and harder to meet the power budget.
Sarkar: We do find customers willing to do more. Even though they have signed off the RTL, they do more analysis to see what else they can squeeze out. The chip you tape out isn’t necessarily production quality; there are multiple versions of it. They may go off and work on the next generation of the chip, but the mindset about doing more is changing.
Gebara: If you have a customer that is going to GA (general availability) this year, they’re not going to take an extra six months to squeeze out another 5%. No business would accept that. Once the target is hit, you don’t go back and squeeze out more.
Sarkar: GA is important, but the mindset is changing to treat power as a design metric. Power analysis begins early in the design process. We’re starting to see power regression now. Teams track power against the early RTL to see how much power is being used across all the different vectors. If it doesn’t go down significantly, you will be dealing with it at tapeout. That requires a mindset change. Even if you’re within the power budget, you don’t want to go back and have to fix it.
Whitfield: People are willing to squint a little bit at power. They’re not willing to squint at functional bugs.

LPHP: Will they pay more for a chip that’s lower power if two functionally identical chips show up on the market at the same time?
Gebara: Absolutely, if it’s for handhelds.
Whitfield: If you have to charge your cell phone every day, that’s become the norm. If you can get one day more, people are going to pay for it.
Sarkar: That’s one aspect of it. But if it’s too hot and you can’t hold a device, you’re not even going to get to market.
Gebara: Power is important and people will pay for it. But it becomes part of TCO (total cost of ownership), along with performance.

LPHP: But if you get more efficiency out of a chip for a phone, a manufacturer might just decide to reduce the battery size to cut the cost, right?
Whitfield: If you can hit the peak performance requirements in a phone, that’s essential. You may not do it that often, but when it’s necessary you have to be able to hit it. You’re designing across a few things. You have to have a day of charge at a minimum. You have to hit peak performance. And you’ve got graphics that are a huge amount of silicon area. It’s not about megahertz. It’s getting all the functionality in a fixed power budget. That power budget never changes from one generation to the next, but you’re putting more and more functionality in.
Sarkar: Even in the mobile space it’s being segmented by smart phone, low-end smart phone and tablet.
Whitfield: That’s more focused on cost, not the power budget. The actual power budgets are very similar between high-end smart phones and feature phones. But you may not have as many functions in a feature phone, so you don’t need the high-end processor.

LPHP: How far can we push the technology? How much power can we really save?
Whitfield: Not a huge amount more. The nominal voltage isn’t changing much. There is very little headroom. You need different materials and devices to make significant changes.
Pan: If you want good performance, the voltage is already low. If you want to go to near-threshold or sub-threshold, that won’t work for the enterprise or even cell phones. It may work for sensors. There are opportunities with software and new materials, though. And finFETs do cut leakage quite a lot. How much further can you push when the power density is going up?
Whitfield: SOI has been very interesting for low voltage. We may see finFETs on SOI, too.

LPHP: What’s going on in servers?
Gebara: The power will still be tens or hundreds of watts, but there are a couple of technologies that look interesting. One involves bipolar devices. Features are getting so small that the power is not much worse in bipolar than CMOS, and you get better performance, so you’ll see biCMOS for performance-critical applications. Graphene looks very interesting, too. Beyond that, without a change to CMOS there won’t be much benefit. The benefit will come from software, where people are doing really good kernel development and really good applications. You’ll start seeing it in the App Store: if your application eats up the battery, you’ll be out. A lot of that is getting cleaned up. That’s where the biggest change is coming.
Pan: We have a microelectronics research center looking at graphene and nanophotonics.

LPHP: How will graphene be used?
Gebara: It’s very low power. Now the question is whether we can make a million of them. You’ll see it used in nanowires, transistors and a lot of other areas.

LPHP: Will the tools just evolve with whatever is necessary?
Sarkar: Yes. Even if you look at 3D-IC, our tools have evolved to analyze what’s outside the chip. If you put a voltage regulator on a chip, the voltage will drop because it cannot scale as high as an off-chip regulator. Our tools have evolved to model and simulate this environment. Even thermal effects can be modeled. We have to make sure our customers are aware of it and simulate it.

LPHP: With more third-party IP being used in chips, is there a premium being placed on lower-power options and are the characterizations that important?
Whitfield: It’s incredibly important. It’s the deployment of that IP that is critical. You can go all the way to RTL, and there are so many ways that power estimates can do a really good job or really mess up. It’s the deployment of the physical and system implementation that’s important. So, the question becomes whether there is a cost to implementing these power features. If we can resolve this with EDA companies so the solution is there, with no barriers, that’s what we’re aiming for. It’s not just a CPU solution, either. It’s CPU and GPU with coherency so you’re not hammering the DDRs all the time.
Gebara: Coherency and legacy are what stopped us from being a competitor to ARM. It wasn’t easy for us to say we have a coherency bus when the bus was never intended for a certain function. The one thing we can’t tolerate in the field is a functional failure.
Whitfield: We started with coherency in our multicore processors. The cores were designed from the ground up to deal with coherency.

