Design techniques, process flavor and power analysis all have their place in the finFET world.
Working with finFETs is a study in contrasts. While leakage is under control for the first time in several process generations, thanks to the finFET's new gate structure, dynamic power density, driven by tightly packed transistors and higher clock speeds, has become the big issue.
“FinFET technology helps with reducing static/leakage power, so when your logic is not active, you can shut down the logic and it doesn’t consume the power,” said Vijay Chobisa, product marketing manager in the emulation division of Mentor Graphics. “FinFET is a great technology for that. Dynamic power is when you are using the chip and a lot of activity is going on. How do you control that? Where is the problem coming from? If you look at smart devices, people are not using them just for phone calls or text messaging. They are playing video games, or watching sports. As such, these devices have very high resolution and are very fast. They are high-performance devices, and these activities consume a lot of dynamic power.”
The dynamic power can be reduced, but there must be an understanding of how much power the chip is going to consume in order to even begin looking at the power budget, he said. “Reducing is another aspect, but first you have to know how much power you’re going to consume and you take some actions. You use a bigger battery. You add power rails.”
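Knowing how much power the chip will consume starts from the classic switching-power relation, P ≈ α·C·V²·f. The sketch below is a back-of-the-envelope illustration with made-up numbers for a single net, not a sign-off-quality estimate.

```python
def dynamic_power(alpha, c_load, vdd, freq):
    """Classic switching-power estimate: P = alpha * C * V^2 * f,
    where alpha is the switching activity factor, c_load the switched
    capacitance (F), vdd the supply (V), and freq the clock (Hz)."""
    return alpha * c_load * vdd ** 2 * freq

# Illustrative numbers for one net: 20% activity, 10 fF load,
# 0.8 V supply, 2 GHz clock.
p = dynamic_power(0.2, 10e-15, 0.8, 2e9)
print(f"{p * 1e6:.2f} uW")  # 2.56 uW
```

Summing such estimates over all switched capacitance, driven by realistic activity vectors, is what gives a power budget something to be checked against.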
As many design teams working with finFETs have come to understand, dynamic power isn’t something you can ignore.
“There’s all the normal things around power domains and switching things off,” said Mike Gianfagna, vice president of marketing at eSilicon. “You can get rid of the keep-alive clock and the idle mode, which contribute to dynamic power. We’re seeing more and more switchable domains, and seeing a lot more power domains than we ever did before. That’s helping a lot because while keeping a block on contributes to the leakage power, it turns out that by leaving a block on you’re typically servicing interrupts, you’re servicing status requests, and you are consuming dynamic power.”
Choice of IP is an important consideration here. “Certain IP is better than others,” Gianfagna said. “The switching power is lower, so changing IP sometimes can reduce the dynamic power just because it’s better-designed at the baseline. It’s a better transistor design for switching characteristics. That adds up. So sometimes you change IP and it will improve dynamic power.”
Finding the right IP is critical in this case, but it also may depend on libraries, the dynamic power of one block versus another, and even other IP within the design and how it is configured. This is particularly difficult in complex SoCs at advanced nodes, because there is much more IP and there are many more possible interactions among signals, thermal effects, switching noise and other unforeseen problems.
The standard response from engineering teams was to drop the supply voltage to reduce dynamic power, said Navraj Nandra, senior director of marketing for the DesignWare Analog and MSIP Solutions Group at Synopsys.
“But it got to a point where you couldn’t drop the supply voltage down any further because logic libraries and things start switching off,” Nandra said. “Between 1.2 and 1.8 volts is the range where you can do that overall, global reduction of the power supply. What customers are now asking us to do as an IP vendor is to look at ways of switching on and off various blocks within the IP.”
This is controlled through techniques such as clock gating and power gating. “Years ago, at the SoC level, people started using these techniques, but they didn’t really impact the IP side because it wasn’t so impactful at the time,” he said. “Today, we’re asked to do this. In the digital domain, clock gating has become an issue because there are multiple clock domains and you get into clock domain crossings, which is a problem. The idea is that if you implement that it helps get the power down. Also in terms of power gating, you’ve got to build into the IP some ability to manage that. What we’ve done is we’ve got big power switches around the IP, and that takes up area, but it maintains the state the IP was in before the power switched off. What this means is that when the power is switched on, your previous state is maintained and you don’t need a refresh or training cycle to get to that point.”
The whole conversation around dynamic power is now power reduction and wakeup time, he added. “When you go into ‘deep sleep’ you’ve got the longest wakeup time, then you’ve got the other modes, including sleep and light sleep, where the wakeup time is less. People manage that at the SoC level, and now they are asking us to do it at the IP level. Intel probably has the most aggressive specs for PCI Express in terms of wakeup time — hundreds of microseconds — and that means the IP has to work at full capacity.”
And even then there can be issues. Krishna Balachandran, product marketing director for low power at Cadence, noted that the same techniques used for dynamic power control are still being used to a large extent in finFET designs, and that the flow from RTL down to GDS is not significantly different.
From a big picture standpoint, another aspect to this is a realization that power must be optimized all the way through a design. “Power is now an integral cost function in any optimization, whether it’s synthesis or it’s a placement optimization or routing optimization — all those transforms that you have to do,” Balachandran explained. “With placement technology you make moves. You choose a particular state of the design with a bunch of placed instances versus another snapshot of a placement. It used to be based on area and it used to be based on timing. And of course there are proxies for area and timing. Especially for timing, you would use estimates for wire length as proxies for the placement state. Then at the routing stage it becomes actual computations where you calculate the capacitances and resistances and then you figure out how you can optimize that.”
In a power-driven flow, it is now imperative that power be weighed alongside area and timing as those tradeoffs are made, he continued. “When you make a move, you either accept or reject a move. The next move is for a potentially better placement result versus the one you have. So the synthesis and P&R optimization algorithms are now fundamentally tuned for power.”
Balachandran said transformations already take dynamic power into that equation. “As you are making those transformations it’s weighing that as a factor and making the decisions based on the overall result. If there’s a dynamic power switch that is set, then it considers that. If there is a leakage switch that is set, then it will also consider leakage in that equation. So in a finFET design you can go easier on the leakage, you can move the dial toward the dynamic optimization because the technology already helps you on the leakage side. But if your leakage numbers are not met, you can crank up the leakage optimizations. So there are effort levels you can set, and based on that the algorithms will give it a relative importance of what to optimize more and what to optimize less, and hence give you results that are closer to your target.”
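The accept/reject moves and effort dials Balachandran describes can be sketched as a simulated-annealing-style decision over a weighted cost function. Everything here is an illustrative assumption: the snapshot fields are stand-in proxies (wirelength for timing, switched capacitance for dynamic power), and the weights play the role of the tool's effort levels.

```python
import math
import random

def cost(snapshot, w_area, w_timing, w_power):
    """Weighted cost of a placement snapshot. The weights act like
    effort levels: raising w_power tilts optimization toward dynamic
    power, raising w_timing toward timing (hypothetical proxies)."""
    return (w_area * snapshot["area"]
            + w_timing * snapshot["wirelength"]      # proxy for timing
            + w_power * snapshot["switched_cap"])    # proxy for dynamic power

def accept_move(current, candidate, weights, temp, rng=random.random):
    """Accept a move that lowers the weighted cost outright, or an
    uphill move with simulated-annealing probability exp(-delta/T)."""
    delta = cost(candidate, *weights) - cost(current, *weights)
    return delta < 0 or rng() < math.exp(-delta / temp)

# A candidate placement with less switched capacitance wins when the
# power weight is nonzero.
cur  = {"area": 10.0, "wirelength": 5.0, "switched_cap": 3.0}
cand = {"area": 10.0, "wirelength": 5.0, "switched_cap": 2.0}
print(accept_move(cur, cand, (1.0, 1.0, 1.0), temp=1.0))  # True
```

Real placers use far richer cost models, but the structure, a weighted sum whose dials the user sets, is what lets the same engine lean harder on leakage or on dynamic power as targets demand.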
Another aspect of managing dynamic power involves clock speeds. For a long time the emphasis was on the smallest geometry, the fastest switching speed, and raw power. “With heterogeneous designs for multiple processors, it’s now more about what’s the throughput of the chip, can I add multiple cores that are more customized for the need, and can I slow the clock down,” said Gianfagna. “If I can slow the clock down, I’m going to have a really good impact on the power. You’ve got some counterintuitive things going on. If you can get the processing done in a parallel way and slow down the clock, you win, as opposed to having one core screaming fast — where you kind of lose.”
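The counterintuitive win Gianfagna describes falls out of the P ≈ α·C·V²·f relation: slowing the clock cuts power linearly, and the lower frequency in turn permits a lower supply, which cuts power quadratically. The numbers below are illustrative assumptions, not measurements.

```python
def core_power(vdd, freq, c_eff=1.0, alpha=1.0):
    """Relative dynamic power of one core: P = alpha * C * V^2 * f.
    c_eff and alpha are normalized, so only ratios are meaningful."""
    return alpha * c_eff * vdd ** 2 * freq

# One core at 2 GHz and 1.0 V, versus four cores at 500 MHz each,
# where the relaxed clock lets Vdd drop to 0.7 V (illustrative values).
single = core_power(1.0, 2e9)
quad = 4 * core_power(0.7, 0.5e9)
print(f"{quad / single:.2f}")  # 0.49: same aggregate throughput, ~half the power
```

This is the quantitative reason one screaming-fast core loses to parallel cores at a lower clock, provided the workload actually parallelizes.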
At the end of the day, managing dynamic power may not be a choice of one way over another, but an informed choice based on analysis of the tradeoffs, using a combination of many available methods and technologies, and in some cases a lot of hard work to figure out what works best where—and why.