Testing the Waters

The greatest power savings in 3D designs are achieved at the architectural level, and that may mean jumping in at the deep end.

By Ann Steffora Mutschler

Large semiconductor companies are now testing the waters in 3D design to determine how best to leverage the technology for lower power, better performance and additional architectural flexibility. As a result, much work is being done to determine exactly how to achieve an optimal 3D design.

3D is almost by definition an architectural approach to power savings with some of the critical items being stack partitioning to minimize wire length and IR drop, observed Mike Gianfagna, vice president of marketing at Atrenta. “These kinds of decisions are indeed architectural in nature when you’re looking at configuring a 3D stack. There is also significant opportunity for power saving by switching parts of the stack off when not in use. The so-called ‘dark silicon’ approach. Optimizing this also requires a good partition between the slices in the stack for maximum on/off efficiency.”
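
As a rough illustration of the "dark silicon" idea Gianfagna describes, the sketch below estimates average power when individual stack slices are switched off while idle. The slice names, power figures and duty cycles are hypothetical placeholders, not numbers from any design discussed here.

```python
# Back-of-envelope estimate of average power for power-gated stack slices.
# All numbers are illustrative placeholders.

slices = {
    # name: (active_power_W, leakage_when_gated_W, duty_cycle)
    "logic_die":  (2.0, 0.05, 0.90),
    "memory_die": (1.2, 0.02, 0.40),
    "io_die":     (0.8, 0.01, 0.10),
}

def average_power(active_w, gated_w, duty):
    """Average power when the slice is active for `duty` fraction of the time."""
    return duty * active_w + (1.0 - duty) * gated_w

total = 0.0
for name, params in slices.items():
    avg = average_power(*params)
    total += avg
    print(f"{name:10s}: {avg:.2f} W average")
print(f"stack total: {total:.2f} W average")
```

The better the partition lines up with on/off boundaries, the closer each slice's duty cycle gets to reflecting real activity, which is exactly the on/off efficiency Gianfagna refers to.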

Ridha Hamza, sales and marketing director at Docea Power, agreed. “When you have a 3D project, one of the first things you need to do is to see on which dies the different IP or the different functions will go. Usually it is quite straightforward—the memories would be on one die and the logic on another—but at the boundaries there are always tradeoffs to make. That has to be done at the architectural level, because if you start doing RTL and synthesis, etc., it is already too late. It’s too much work. So the partitioning is a start.”
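
A minimal sketch of the kind of early partitioning decision Hamza describes, made well before RTL: assign functional blocks to two dies so that heavily connected blocks land on the same die, subject to an area limit per die. The block list, connection weights and brute-force search are hypothetical simplifications of what a real 3D floorplanning flow would do.

```python
# Toy architectural partitioning: assign functional blocks to two dies so
# that heavily connected blocks share a die. Blocks, areas and connection
# weights are illustrative only.
from itertools import product

blocks = {"cpu": 4, "gpu": 5, "l3_cache": 6, "dram_ctrl": 2, "dsp": 3}  # area units
connections = {  # (block_a, block_b): signals between them
    ("cpu", "l3_cache"): 900,
    ("gpu", "l3_cache"): 700,
    ("cpu", "dram_ctrl"): 300,
    ("dsp", "dram_ctrl"): 150,
    ("cpu", "gpu"): 200,
}

def cut_size(assignment):
    """Count signals that must cross between die 0 and die 1."""
    return sum(w for (a, b), w in connections.items() if assignment[a] != assignment[b])

def die_areas(assignment):
    areas = [0, 0]
    for name, die in assignment.items():
        areas[die] += blocks[name]
    return areas

names = list(blocks)
best = None
for choice in product((0, 1), repeat=len(names)):
    assignment = dict(zip(names, choice))
    if max(die_areas(assignment)) > 12:   # crude area-balance constraint
        continue
    if best is None or cut_size(assignment) < cut_size(best):
        best = assignment

print("die assignment:", best)
print("inter-die signals:", cut_size(best))
```

Fewer signals crossing the die boundary means fewer TSVs, shorter wires and less IR drop, which is why this decision has to be settled at the architectural level rather than patched up later.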

From a user perspective, Robert Patti, chief technology officer and vice president of design engineering at Tezzaron Semiconductor, explained that the starting point for achieving the greatest power savings in 3D varies, but in general he first determines whether there are specific process separations that can help. For example, if the memory is separated from the logic, or the analog is separated from the logic, does that allow the use of a process that is fundamentally better?

Next is looking at the structure in 3D to improve proximity. “Am I building some structure in 2D, where I end up spreading things apart just because everything wants to be in the same spot? You end up with a lot of congestion. Then I look at how can I move this into 3D space rather than 2D and probably help myself with the congestion. But this also allows me to bring some things closer that I maybe couldn’t do before. This might be how you deal with memory blocks or caches. A lot of times those are big blocks and you would like them in the middle of everything, but you can’t put the cache in the middle of everything because it spreads too much stuff out. But in 3D you can fold it underneath the other circuitry,” he said.
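
A crude way to see the proximity argument: compare the average Manhattan distance from a set of cores to a cache placed beside them in 2D with the distance when the cache is folded onto a die directly below, paying only a short vertical hop through a TSV. The geometry and the TSV figure below are hypothetical.

```python
# Crude proximity comparison: cache beside the cores (2D) vs. folded
# underneath them on another die (3D). All dimensions are illustrative.

cores = [(1, 1), (1, 3), (3, 1), (3, 3)]   # core positions on a 4x4 mm region

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# 2D: the cache block has to sit next to the core array, centred at (6, 2).
cache_2d = (6, 2)
avg_2d = sum(manhattan(c, cache_2d) for c in cores) / len(cores)

# 3D: the cache die sits under the cores; each core reaches the point
# directly beneath it, plus a short vertical hop through a TSV (~0.05 mm).
tsv_hop = 0.05
avg_3d = tsv_hop  # horizontal spread is essentially eliminated

print(f"average core-to-cache distance, 2D: {avg_2d:.2f} mm")
print(f"average core-to-cache distance, 3D: {avg_3d:.2f} mm")
```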

In a separate example, Patti pointed to a microprocessor that could be typical of an Intel or an AMD or an IBM. “A lot of the microprocessor today is dedicated to dealing with the fact that you can’t get to enough memory fast enough. You have relatively limited caches so you’re doing a lot of out-of-order execution, a lot of speculative execution. You’re really doing a lot of things that waste significant amounts of power and you’re doing it for marginal improvements in performance. If you now have the capability of putting a couple of gigabytes of memory basically on the processor—and we can do that with our parts, some of them are in that size range—now you have a processor which has one or two gigabytes of basically cache.”

With these things in mind, processor designers can look at their situations differently because, if they go through and look at the miss ratio and how quickly different parts of memory can be reached, there is much less incentive to waste power on speculative execution. Instead, either more cores can be run or the cores can be run faster, putting that power to better use.

“If you looked at the guts of what a microprocessor is, a huge percentage of it is related to basically dealing with slow memory—the speculative execution scoreboard and all the garbage that goes with it,” he continued. “If we waved our magic wand and got rid of that because we decided that our new cache is going to be big enough that we’re not going to bother using it, you might be able to double the number of cores on your chip. If you double the number of cores—and the memory, being as big as it is, would seem to be able to support that—you can get significant performance improvement that you’re kind of getting for free. You didn’t change the process technology for the processor, you didn’t make the die bigger, but you’ve now bought yourself anywhere from 15% to 100% more performance, depending on what you were running before.”
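
The arithmetic behind Patti's claim can be sketched roughly as follows. The per-core area and throughput numbers are hypothetical; the point is only that trading speculative machinery for more, simpler cores can pay off once the on-stack memory removes most of the stalls the machinery was there to hide.

```python
# Rough arithmetic behind "drop the speculation hardware, double the cores".
# All figures are hypothetical placeholders.

die_area = 100.0                 # mm^2 budget for cores (unchanged die size)
complex_core_area = 10.0         # mm^2, with out-of-order/speculative machinery
simple_core_area = 5.0           # mm^2, simpler core relying on huge stacked cache

complex_core_perf = 1.0          # normalized throughput per core
simple_core_perf = 0.8           # lower per core, but memory stalls largely gone

complex_cores = die_area / complex_core_area      # 10 cores
simple_cores = die_area / simple_core_area        # 20 cores

baseline = complex_cores * complex_core_perf      # 10.0
stacked = simple_cores * simple_core_perf         # 16.0

print(f"throughput gain: {100 * (stacked / baseline - 1):.0f}%")
# At a fixed power budget, the same gain can instead be taken as lower
# power for equal throughput.
```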

While the above example is framed in terms of improving performance, it also can be viewed in terms of how much power can be saved if the processors really are twice as efficient. It’s a game changer, without a doubt.

Finally, Hamza noted that a big issue people are looking at today in 3D is how a design will actually behave in this new architecture. “There, you need to solve the coupling between power and thermal. Either you have some margin, and you can do a steady-state simulation using a numerical solver for thermal that injects some power and sees how the temperature will rise, and by how much in the different dies, because different dies have different tolerances. Or you don’t have margin, the steady-state approach doesn’t work, and you have to go to a more dynamic simulation.”
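
A minimal sketch of the steady-state approach Hamza describes: model each die as a node in a thermal resistance stack over a heat sink, inject each die's power, and solve for the temperature rise per die. The resistance and power values are hypothetical, and a real flow would couple the result back to leakage power and iterate, or move to a dynamic simulation when margins are tight.

```python
# Minimal steady-state thermal sketch for a three-die stack over a heat sink.
# Heat from each die flows down through the stack to the sink; resistances
# and powers are illustrative only.

ambient = 45.0                       # C, heat-sink/ambient temperature
powers = [1.5, 0.8, 3.0]             # W dissipated in die0 (top) .. die2 (bottom)
r_down = [0.8, 0.6, 0.4]             # C/W from each die to the die (or sink) below

# In a simple series stack, the heat crossing the interface below a die is
# the sum of that die's power and everything stacked above it.
temps = []
t_below = ambient
for die in reversed(range(len(powers))):          # solve from the bottom die up
    heat_through = sum(powers[: die + 1])         # power at or above this die
    t_die = t_below + heat_through * r_down[die]
    temps.append(t_die)
    t_below = t_die
temps.reverse()

for i, t in enumerate(temps):
    print(f"die{i}: {t:.1f} C")
```

Comparing each die's steady-state temperature against its own tolerance is what tells the architect whether there is margin, or whether the power map and the partitioning need to change.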

Clearly, while the benefits of 3D are compelling, making the architectural tradeoffs is a looming challenge for engineering teams.


