One-On-One: Dark Silicon

Part 1: UC San Diego’s Michael Taylor talks about why transistor scaling will continue for now—and what will end it.

Professor Michael Taylor’s research group at UC San Diego is studying ways to exploit dark silicon to optimize circuit designs for energy efficiency. He spoke with Semiconductor Engineering about the post-Dennard scaling regime, energy efficiency from integrated circuits all the way up to data centers, and how the manufacturing side can help. What follows are excerpts of that conversation. (For another take on dark silicon, see related story here.)

SE: What’s your vision for continued transistor scaling?

Taylor: In general, when we talk about Moore’s Law we talk about the cost per transistor or the density of transistors. Things are a bit different now. What’s really driving us to move to the next process generation is energy efficiency. As we shrink the transistors, they get more energy-efficient, and then we can do more computation within the power budget. Even though there have been some people saying, ‘Hey, the cost per transistor is possibly going up with new process generations,’ as long as the energy gets better as we shrink, things will continue. If the energy does not get better, things will stop pretty quickly, except maybe in some very specific domains like RAM, where the focus is not really computation but something like data storage.

SE: Driving power consumption down as we shrink is actually getting pretty difficult because of leakage concerns and the physics of the channel, and the threshold voltage is not going down as you would want it to. It’s off the trend relative to the feature scaling, right?

Taylor: That’s right. In your article you talked about Dennard scaling. We are in a post-Dennard regime now. In Dennard scaling, the two key knobs to improve energy efficiency were voltage scaling and capacitive scaling. Voltage scaling required the threshold voltage to scale down as well, and because of leakage we can’t really do very much of that any more. But the other key part is capacitive scaling: as the transistors get smaller, their capacitance is also reducing. When you look at energy as proportional to CV², though we are not improving the V part of the equation, the C part is improving. It’s not quite as much as ideal scaling would imply, but we have gotten improvements there in the recent past. So in my mind, capacitive scaling is really the key thing that needs to continue for scaling to be useful. Since 65nm, that’s the thing that has really been driving this forward. The other side is that there is a lot of research now looking at how to exploit dark transistors, like GreenDroid, where we look to find new ways to use transistors that are not always on in order to improve our energy efficiency. So those are the two dimensions that will hopefully keep energy scaling going—capacitive scaling and using dark transistors to do specialization, which reduces the energy needed to implement a particular computation.
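
As a rough sketch of the arithmetic Taylor is describing (using the textbook scaling factor κ ≈ 1.4 per node; this is an illustration of the standard scaling relations, not a derivation from the interview):

```latex
% Dynamic switching energy per transistor
E = C\,V_{dd}^{2}

% Classical Dennard scaling: dimensions and voltages both shrink by 1/\kappa
C \to C/\kappa,\qquad V_{dd} \to V_{dd}/\kappa
\;\Longrightarrow\; E \to E/\kappa^{3}

% Post-Dennard regime: leakage pins V_{dd}, so only capacitive scaling remains
C \to C/\kappa,\qquad V_{dd} \to V_{dd}
\;\Longrightarrow\; E \to E/\kappa

% With \kappa^{2} more transistors per unit area at the same frequency,
% per-area switching power grows by roughly \kappa per node unless some
% fraction of the transistors stays dark
```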

SE: With capacitive scaling, manufacturers aren’t going to be happy to hear you say how critical that is, because it means things like thinner gate dielectrics and lower-k intermetal dielectrics, both of which are pretty challenging to do. Do you have a feel for where the limitations of capacitance scaling might be?

Taylor: That is a great point. Materials are one part of capacitive scaling that has really paid off, but the other side is just that the capacitance scales down as you make things smaller. Ostensibly they are shrinking things down, going to the next node, in order to make things more dense, but you can also view it as a drive to make capacitance lower and improve energy efficiency.

SE: On the very simple idea that a physically smaller capacitor has a lower capacitance?

Taylor: Exactly, yes.
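
To first order, the reason is the parallel-plate picture of the gate (an idealization; real devices add fringing and parasitic capacitances):

```latex
C_{gate} \;\approx\; \varepsilon_{ox}\,\frac{W L}{t_{ox}}

% Shrinking W, L, and t_{ox} each by 1/\kappa:
C_{gate} \;\to\; \varepsilon_{ox}\,\frac{(W/\kappa)(L/\kappa)}{t_{ox}/\kappa}
\;=\; C_{gate}/\kappa
```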

SE: Let’s look at exploiting dark transistors. It’s not clear how far that will take people, because you have the overhead associated with splitting things up into pieces that you can handle, for instance, with specialized cores. Do you have a feel for the limits of that? You’ve talked in your articles about the deus ex machina, rising out of the machine to save us all.

Taylor: The architects and designers are always hoping that the device people will come in and save the day. As I mentioned in my article, there have been some interesting candidates that may solve the problem, but that also suggests that there are other things out there that could be better. On addressing specialization, it is very challenging to grapple with having such heterogeneous hardware. As you said in the article, it’s difficult to program and difficult to design, but that’s what folks in architecture are researching now. A lot of the research going on right now is looking at different facets of exploiting heterogeneity and how to deal with these challenges, so that is quite promising.

Every time we have to update our computational stack to address some problem with physics, everything gets a little more complicated, and of course down at the device level there’s a lot of complexity that has been introduced. Think of all the little gadgets we have added—strained silicon and different dielectrics and so forth—and we still manage the complexity. We just have to learn what the techniques are to manage that complexity. I actually think we have made pretty good strides in addressing it. We’re developing lots of techniques, new kinds of fabrics that give us different properties. A simple example is comparing a GPU to a CPU. The GPU allows us to exploit a totally different kind of floating-point computation than CPUs are good at. Just having those two things on the silicon makes the chip much more capable and more energy-efficient. And we can imagine that new forms of heterogeneity are also going to appear. If you are thinking on a Moore’s Law kind of timescale, like are we going to solve the problem in two years? Probably not. It’s a longer-term thing. But there is a lot of promise that we will have it pretty well sorted out in the next 10 years. Still, even if we didn’t figure it out in 10 years, in the history of technology, 10 years is pretty short, right? These aren’t problems that are so hard that we’ll never figure them out. It’s only a matter of when.
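
One way to picture the specialization argument is as an energy-aware dispatcher that steers each task to the most efficient unit that can run it. The sketch below is purely illustrative: the unit names and picojoule-per-operation figures are assumed placeholders, not measurements from GreenDroid or any real chip.

```python
# Illustrative sketch: why keeping specialized, mostly-dark units next to a
# CPU can cut energy per task. All numbers below are assumed placeholders.

ENERGY_PJ_PER_OP = {
    "cpu": 100.0,            # general-purpose core: flexible, costly per op
    "gpu": 10.0,             # wide data-parallel floating-point fabric
    "fixed_function": 1.0,   # specialized block, powered on only when used
}

def cheapest_unit(supported_units):
    """Pick the most energy-efficient unit that can run this task."""
    return min(supported_units, key=ENERGY_PJ_PER_OP.get)

def task_energy_nj(ops, supported_units):
    """Return (unit, energy in nanojoules) for a task of `ops` operations."""
    unit = cheapest_unit(supported_units)
    return unit, ops * ENERGY_PJ_PER_OP[unit] / 1000.0

# A branchy task only the CPU can run, a data-parallel kernel the GPU can
# take, and a common operation that has its own fixed-function block.
for name, ops, units in [
    ("control-flow-heavy", 1e6, ["cpu"]),
    ("dense-float-kernel", 1e6, ["cpu", "gpu"]),
    ("video-decode-step", 1e6, ["cpu", "gpu", "fixed_function"]),
]:
    unit, nj = task_energy_nj(ops, units)
    print(f"{name}: run on {unit}, ~{nj:.0f} nJ")
```

The point of the toy model is that the specialized units spend most of their time dark; they only pay off because turning one on for the right task is far cheaper per operation than running that task on the general-purpose core.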

SE: Right. And certainly the history of the manufacturing side is littered with things that did eventually emerge, but not as fast as people might have wanted them to.

Taylor: Exactly.

SE: And yet somehow, the industry marches on anyway.

Taylor: Yes, and academics like things to be simple and clean and easy to explain. But industry has shown that we are able to tolerate complexity in all its forms and build ridiculously complicated things that work reliably and do what we want them to do. Even though the scariest part of heterogeneity is its complexity, we actually have managed to deal with complexity pretty well over time.

SE: Groaning and complaining all the way.

Taylor: Exactly.

SE: You said that designers always hope that manufacturers will come to their rescue with new, better, faster transistors. Is there any way in which the designers might be able to come to the aid of the manufacturers? For instance, on the manufacturing side life would be much easier if you could arrange your transistors in neat rows so your lithography system can print more easily. Are there any kinds of structures that might both help make the design side easier by controlling this complexity, and at the same time simplify the manufacturing challenge?

Taylor: That’s a good question. If you think about the way we implement computing, it’s kind of like a hierarchy of abstraction layers. Maybe physics is at the bottom, and then you have materials science, and then you have devices and maybe circuits, and then micro-architecture/architecture, all the way up to compilers and programming languages. These kinds of changes get pushed up from the bottom, and each person, at each level, is trying to adapt computation to be more efficient given the constraints that are coming from the bottom.

You mentioned making things more regular in manufacturing. There’s definitely been a segment of researchers presupposing that this will happen and that we will end up with fabrics that we build circuits out of, as opposed to laying them down in customized patterns. There is a lot of exploration coming out of those abstraction layers above. The main thing is communication, where the folks at the device level are saying, ‘Look, this will improve things so much if you are able to handle the complexity or the constraints that would result from restricting what we can fabricate.’

SE: For instance, being able to have a very regular array of transistors might allow you to make smaller transistors, which has all of the benefits for the designers that come from that, right?

Taylor: That’s right. Then I guess the question in that particular case would be, ‘If I am making the transistors regular, do I give up the energy efficiency that I get from being able to specialize them?’ But I know there is one project … NSF has these relatively large multi-institute grants, typically around $10 million, that are called Expeditions. There is, for example, one called the Variability Expedition that’s run out of UC San Diego along with a few other schools. They are looking at the problem of having to adapt both the architecture and the software to deal with the variability of the transistors—possibly having reliability issues or variability in the manufacturing, where maybe the threshold voltages are different for different transistors. They are figuring out how to make both the hardware and the software work even if the devices themselves are not able to present as much uniformity as in the past.

SE: Because the historical way of dealing with that has been guard banding. But if the guard bands get too big then they eat all your advantage from scaling in the first place, right?

Taylor: Exactly, so they are trying to replace guard banding with more intelligent, adaptive hardware and software that can operate even though a small subset of transistors is working outside the narrower guard band they have decided to use.
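
A back-of-the-envelope illustration of why oversized guard bands hurt (assumed numbers, not figures from the Expedition): dynamic energy goes as V², so a static guard band that holds the supply 20% above nominal to cover the worst-case transistor raises switching energy by about 44%:

```latex
\frac{E_{guardbanded}}{E_{nominal}}
  \;=\; \left(\frac{1.2\,V_{dd}}{V_{dd}}\right)^{2}
  \;=\; 1.44
```

That is roughly the ~30% energy gain of an entire node of capacitive scaling, given back.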


