Experts At The Table: Who Pays For Low Power?

Second of three parts: Customization to improve power and performance; electromigration-aware designs; node migration; power-hungry PHYs; the promise and reality of nanophotonics.


By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss the cost of low power with Fadi Gebara, research staff member for IBM’s Austin Research Lab; David Pan, associate professor in the department of electrical and computer engineering at the University of Texas; Aveek Sarkar, vice president of product engineering and support at Apache Design; and Tim Whitfield, director of engineering for the Hsinchu Design Center at ARM. What follows are excerpts of that conversation.

LPHP: Is customization the ultimate improvement for power and performance?
Sarkar: There is a broad range of solutions.
Whitfield: If you talk about customization, that’s what we’ve always been about. You can implement a processor in ‘n’ different processes at ‘n’ different design points and get ‘n’ different answers. Customization is the key, and that’s how you pay for it.

LPHP: Isn’t that why the University of Texas has combined computer and electrical engineering?
Pan: Yes. And besides customization and reconfigurability, you can put these two together through software-hardware co-design.
Sarkar: We’ve seen that before. Years ago we were designing chips with power-limiting cores. But we have been talking about power as a number, and we need to focus on power, noise and reliability. Process migration affects the other two parts more than power. Some of our customers are talking about how to do a design that is more electromigration-aware, because you cannot meet timing and reliability at the same time. The devices are so much faster and push so much current through the wire that the maximum wire length becomes limited. So can you meet the same power targets at lower technology nodes? These are some of the tradeoffs that we see. And it’s not just the chip. It’s the chip, the package, the board and the whole system. We talk about software and hardware integration, but there are also opportunities for a chip company working with a system integrator to iron out issues and get cost savings.
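The electromigration tradeoff Sarkar describes can be made concrete with a back-of-the-envelope check: for a given current, the wire must be wide enough that current density stays under the EM limit. A minimal sketch, with purely illustrative numbers (real EM limits are foundry- and metal-layer-specific):

```python
# Hedged sketch of an electromigration-aware wire check.
# All numeric values are illustrative assumptions, not foundry data.

def em_safe_width(current_a, thickness_m, j_max_a_per_m2):
    """Minimum wire width so current density stays under the EM limit."""
    return current_a / (thickness_m * j_max_a_per_m2)

def current_density(current_a, width_m, thickness_m):
    """Current density J = I / (width * thickness)."""
    return current_a / (width_m * thickness_m)

# Assumed: 1 mA through a 50 nm thick wire, with an
# illustrative EM limit of 2e10 A/m^2.
I, T, J_MAX = 1e-3, 50e-9, 2e10

w_min = em_safe_width(I, T, J_MAX)   # narrowest EM-safe width
j = current_density(I, w_min, T)     # density sits exactly at the limit
```

Faster devices push more current, so `w_min` grows, which is one way the EM limit translates into a cap on how long (or how narrow) a wire can be at a given performance target.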

LPHP: We’ve been talking about performance and power with an eye on the next node, but is that push to the next node starting to slow down with double patterning and leakage?
Sarkar: There aren’t that many providers left. Only the biggest players are left and they’re still looking ahead.
Whitfield: There was an anomaly at the 20nm node where the cost model didn’t really work. You weren’t seeing enough area scaling and power/performance scaling. When you move to finFETs you’ll see all the leading guys jumping. And possibly when you’re moving to 10nm and 7nm—because finFETs are a one-shot boost—then you’re going to have a scaling problem similar to the one you had at 20nm.
Gebara: PHY power is ridiculous. At some advanced nodes it’s probably half the power of the chip. People are building DDR buses, SMP buses, and PCI buses.
Sarkar: It hits 50% easily.
Gebara: If you take a 20nm die and you expand it out to 20 millimeters, maybe you can get lower power and integrate it all versus going down to 14nm, or maybe you can start stacking in 3D to change the cost equation. But somewhere it will be a function of power and integration versus how much it will cost you to build it.
Whitfield: That’s exactly the equation, and right now for 3D it doesn’t work.
Gebara: Yes, right now it’s not there.
Whitfield: But it’s an economy of scale thing. If someone invests in it, that could change. We talk about heterogeneous chips where only perhaps your compute element needs to be on 16nm and your I/O might be at 65nm with a low-leakage process, and from generation to generation you just change your compute portion.
Sarkar: With analog you can put more analog on the same interposer.
Pan: One of the key advantages of this heterogeneous approach is that you can use different technology nodes. One of the biggest consumers of power now is the interconnect, and optical makes a lot of sense there. It has already been used at the board level. Nanophotonics is still in the research stage, but they are making good progress there. The advantage of nanophotonics is that you can have many signals at very low energy.
Gebara: We’re nowhere near on-chip photonics. But you will start seeing photonic connections between chips. So instead of going electrical, what if you can directly connect with optical? You don’t need repeaters and it drives down power. You can build this reliably and do it in a cost-effective manner—it can be done and people have done it, just like 3D-ICs—so it’s just a matter of when it goes prime time.
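Gebara’s point about repeaters can be framed as an energy-per-bit comparison: a full-swing electrical wire burns switching energy proportional to its length, while an optical link’s energy is roughly distance-independent once the laser, modulator and receiver are paid for. A minimal sketch with assumed, illustrative parameters (not measured figures):

```python
# Hedged comparison of energy per bit for a repeated electrical link
# versus an optical link. All parameters are illustrative assumptions;
# real values depend heavily on process and design.

def electrical_energy_per_bit(length_mm, cap_per_mm_f, vdd, activity=0.5):
    """Switching energy of a full-swing wire: alpha * C_total * Vdd^2."""
    return activity * cap_per_mm_f * length_mm * vdd ** 2

def optical_energy_per_bit(laser_j, modulator_j, receiver_j):
    """Optical link energy is roughly independent of distance."""
    return laser_j + modulator_j + receiver_j

# Assumed: 200 fF/mm wire capacitance, 0.9 V supply, 20 mm reach,
# and a ~100 fJ/bit optical budget.
e_elec = electrical_energy_per_bit(20.0, 200e-15, 0.9)
e_opt = optical_energy_per_bit(50e-15, 30e-15, 20e-15)
```

Under these assumptions the electrical link costs over an order of magnitude more energy per bit at 20 mm, which is why chip-to-chip optical looks attractive before on-chip photonics does.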

LPHP: Can the tools handle that kind of analysis?
Sarkar: There are certain advantages in terms of I/O. The way you reduce power is to reduce the supply voltage or you reduce the capacitance. The traditional approach is to match the impedance of all the pieces. When you reduce the supply voltage, the voltage drop fluctuation becomes worse and worse. When we move to photonics, it will be a different model for how that is factored in. We have to analyze it very accurately.
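Sarkar’s two levers—supply voltage and capacitance—come straight from the classic dynamic-power relation, P = αCV²f, where the quadratic voltage term explains why supply scaling is so attractive despite the voltage-drop problems it creates. A minimal sketch:

```python
# Sketch of the classic dynamic-power estimate: P = alpha * C * V^2 * f.
# The parameter values below are illustrative assumptions.

def dynamic_power(alpha, cap_f, vdd, freq_hz):
    """Dynamic switching power: activity * capacitance * Vdd^2 * frequency."""
    return alpha * cap_f * vdd ** 2 * freq_hz

# Assumed: 20% activity, 1 nF effective switched capacitance, 1 GHz clock.
p_nominal = dynamic_power(0.2, 1e-9, 1.0, 1e9)  # at 1.0 V
p_scaled = dynamic_power(0.2, 1e-9, 0.8, 1e9)   # at 0.8 V
```

Dropping the supply from 1.0 V to 0.8 V cuts dynamic power by 36%—the quadratic benefit—but, as Sarkar notes, the same voltage drop that was once noise margin now becomes a much larger fraction of the supply, so the analysis has to get more accurate as Vdd falls.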

LPHP: One of the big issues with power is margin. How do we solve that?
Gebara: This is where there is a big difference between ARM’s primary customers and IBM’s. ARM doesn’t have to sweat its legacy so much. What ends up happening in our case is we have to maintain our legacy and our performance, and margins start adding up. So what does that mean? The best thing we can start doing is monitoring that very closely. These guys can clock gate or turn down a core. It’s not easy to clear out caches. It wasn’t until POWER5 that we started putting monitors on a chip to see whether a core was doing anything. ARM has been a lot more aggressive about that than we have. We have to support our legacy and it’s got to work. We’re following them in that sense.
Whitfield: The design margins we’re adding into 20nm and 16nm are large and over time, they will reduce. Increasingly, we have to put live monitors in chips. That’s where people are going more and more. When you look at the Intel stuff, they’re doing a lot of that. We don’t know what our partners are doing, but we’re looking at building more and more silicon monitors and temperature monitors that will allow us to adapt to change.
Gebara: You try to drive out that margin, but if you miss, you can recover from the mix.
Whitfield: Yes, and some of these ideas are good, some are going to miss. Some may take a while for the design community to embrace. There comes a time when you have to do it, though. We spend a lot on research, and the semiconductor guys are trying to improve their costs, decrease their margin and fix the variability.
Sarkar: We’ve looked at some of the ARM hardening of the cores. The IP guys don’t know how the chip will be designed so they build in some margin. The chip guys don’t know how the package is designed so they build in some margin. And the package guys build in some margin. So we started looking at how you can hand off this kind of information from one part of the chain to another. When we do the analysis, at the next higher level we bring in all the electrical effects from the subsystems. Then we can start doing optimization. The SoC designer gets good models and guidance, and then they can start shaving off one extra layer of routing. And if you’re the package designer, you get a chip-package model and you can shave off some of the package. We see that happening now. As an industry we are becoming more open to sharing some of this data, too.
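The margin-stacking problem Sarkar describes can be illustrated numerically: when IP, chip and package teams each guard-band independently, the margins simply add, whereas sharing models lets you treat the uncertainties statistically (for example, a root-sum-square combination of independent contributions). A hedged sketch with assumed margin values:

```python
# Sketch of margin stacking versus statistical combination.
# The 5%/4%/3% margins below are illustrative assumptions.
import math

def stacked_margin(margins):
    """Worst case: each stage in the chain adds its full guard band."""
    return sum(margins)

def rss_margin(margins):
    """Root-sum-square: combines independent uncertainties statistically."""
    return math.sqrt(sum(m * m for m in margins))

ip, chip, pkg = 0.05, 0.04, 0.03   # assumed per-stage voltage margins
worst = stacked_margin([ip, chip, pkg])   # blind handoff: margins add
shared = rss_margin([ip, chip, pkg])      # with shared models
```

With these assumptions the blind handoff carries a 12% margin while the statistical combination needs about 7%—the recovered headroom is what lets the SoC designer shave a routing layer or the package designer thin the package.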

LPHP: What we’re talking about is squeezing something out of every part of the design cycle. How are universities dealing with this?
Pan: It’s hard to cover everything in one course. But we are using some of the tools, from layout to RTL. When it comes to power, we do have some advantages. We have people from IBM, Intel and AMD in the local industry. That gives our students really good exposure to these issues.
