Moore’s Law vs. Low Power

Cramming more functionality onto a piece of silicon is raising all sorts of power issues that will become far more pressing at each node.


By Ed Sperling

Moore’s Law and low-power engineering are natural-born enemies, and this dissension is becoming more obvious at each new process node as the two forces are pushed closer together.

The basic problem is that shrinking transistors and narrowing the line widths between wires open up far more real estate on a chip, which encourages chip architects and marketing chiefs at chipmakers to take advantage of all that extra space. But more functionality layered onto a die also increases the demand for power—or makes development of the chip much more complicated.

One way to deal with all of this is to drop the operating voltage across the chip. But decreasing the supply voltage has its problems.

“If you decrease the supply voltage too much, then circuits don’t work anymore,” said Mark Bohr, an Intel senior fellow and director of process architecture and integration. “There isn’t enough signal-to-noise ratio to make it work. But there’s also no silver bullet for this. One of our ongoing challenges is to scale transistors and operating voltage.”

Doing that is a difficult task, however, and engineers and scientists working on the most advanced chips on the planet say it will remain extremely challenging at every future node.
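The appeal of dropping the supply voltage comes from the classic CMOS dynamic-power relationship, in which power scales with the square of the voltage. A minimal sketch (the component values here are made up for illustration, not taken from any chip in the article):

```python
def dynamic_power(alpha, cap_farads, vdd, freq_hz):
    """Classic CMOS dynamic switching power estimate: P = a * C * V^2 * f."""
    return alpha * cap_farads * vdd**2 * freq_hz

# Hypothetical block: 10% activity factor, 1 nF switched capacitance, 1 GHz clock.
p_nominal = dynamic_power(0.1, 1e-9, 1.0, 1e9)  # 1.0 V supply
p_scaled = dynamic_power(0.1, 1e-9, 0.8, 1e9)   # 0.8 V supply

# A 20% voltage drop cuts dynamic power quadratically, to 64% of nominal.
print(p_scaled / p_nominal)  # 0.64
```

The quadratic payoff is why architects keep pushing voltage down despite the signal-to-noise problems Bohr describes.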

“There is a minimum voltage any ‘charge-based device’ can work on,” said Jan Rabaey, professor of electrical engineering and computer sciences at the University of California at Berkeley. “It equals 2 (kT/q) ln(n+1), where n is the subthreshold slope factor of the device. At room temperature and a normal device (n=1.4 – 1.6), this translates to approximately 50 mV. Given the fact that there is margin needed for reliable operation, a practical minimum voltage would be around 100 mV. There are some ways to lower this. High k is not one of them, as the main purpose of that is to reduce gate leakage. More effective is to reduce n (which is 1 for an ideal bipolar device).”
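Rabaey's limit can be checked directly. The short calculation below plugs the Boltzmann constant and electron charge into his formula and reproduces the roughly 50 mV figure for a normal device at room temperature (the function name is ours, not his):

```python
import math

def v_min(n, temp_kelvin=300.0):
    """Minimum operating voltage for a charge-based device: 2*(kT/q)*ln(n+1)."""
    k = 1.380649e-23     # Boltzmann constant, J/K
    q = 1.602176634e-19  # electron charge, C
    return 2 * (k * temp_kelvin / q) * math.log(n + 1)

for n in (1.0, 1.4, 1.6):
    print(f"n = {n}: V_min = {v_min(n) * 1000:.1f} mV")
```

For n between 1.4 and 1.6 this lands in the 45–50 mV range, and halving n toward the ideal value of 1 pulls the floor lower—which is why devices with a steeper subthreshold slope are attractive.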

Power modeling, power islands
One solution is power modeling, which is almost required as more power islands are added to a system on chip. The advantage is clear—if the majority of functions on a device can be powered down or even off when they’re not in use—then the amount of power consumed by the chip can be dramatically reduced.
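The payoff from powering islands down can be sketched as a simple duty-cycle average—active power for the fraction of time a block is in use, leakage-level power for the rest. The numbers below are hypothetical, purely to show the shape of the savings:

```python
def average_power(p_active_mw, p_sleep_mw, duty_cycle):
    """Average power for a block active `duty_cycle` of the time and
    power-gated (sleep) for the remainder."""
    return duty_cycle * p_active_mw + (1 - duty_cycle) * p_sleep_mw

# Hypothetical island: 100 mW active, 0.5 mW gated, in use 10% of the time.
print(average_power(100.0, 0.5, 0.10))  # 10.45 mW, vs. 100 mW always-on
```

The closer the sleep power gets to zero and the lower the duty cycle, the more dramatic the reduction—which is exactly the case for functions that sit idle most of the time.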

But complexity increases with the addition of power modeling. The chip becomes harder to design, and on-chip traffic becomes harder to route and prioritize.

“Even at the architectural level people are reluctant to use multiple power domains in their design because they don’t want to complicate their system,” said Prasad Subramaniam, vice president of design technology at eSilicon. “They don’t want to have multiple voltage regulators. A chip already requires two voltages, one for the I/O and one for the core. They don’t want to go beyond that.”

Verification adds another level of complication. It is much harder to verify the chip because verification has to cover every state of every function, in every possible sequence.

“This is a major problem,” said Srikanth Jadcherla, Synopsys’ group director for R&D for low-power verification. “But it isn’t a tools problem. The tools for verification are there. It’s a methodology and mindset shift. Engineers are not used to doing regression and debugging in this way. You have to change the whole thing under you.”

This is easier said than done. Tools can be swapped out, and even when there is more training involved that can be a relatively painless step. But changing a methodology is radically different.

“If there are six power domains and on/off nodes, then you have 64 possible combinations (more if there are more states than just on and off). You have to make sure the chip still functions in each state and that you can get out of one state and into the next,” said Jadcherla. “RTL engineers never bothered about system states before. Now they have to know the major states. A smart phone has a phone mode and an e-mail mode and a camera mode, so you now need to do mode-based testing. This is not something we see in the design community yet. Low-power verification must be done in the context of the system.”
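Jadcherla's arithmetic—six on/off domains yield 2^6 = 64 combinations—can be made concrete by enumerating the state space. The domain names here are hypothetical stand-ins for the blocks in a smart phone like the one he describes:

```python
from itertools import product

# Hypothetical power domains in a smart-phone SoC.
domains = ["cpu", "gpu", "modem", "camera", "display", "dsp"]

# Every on/off combination of the six domains: 2^6 = 64 system states.
states = list(product(("on", "off"), repeat=len(domains)))
print(len(states))  # 64

# With more states than just on and off (e.g. adding "retention"),
# the space grows as states_per_domain ** num_domains.
print(3 ** len(domains))  # 729
```

And this only counts states—verifying the legal transitions between them, per operating mode, multiplies the work further, which is why mode-based testing matters.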

New materials, methodologies and technologies—and challenges
At least some of the problems will be dealt with using new materials. While Intel added restrictive design methodologies at 45nm, IBM and AMD changed substrate material from bulk CMOS to partially depleted silicon on insulator (SOI). At 28nm and 22nm, IBM and its ecosystem—which includes AMD—are looking at restrictive design rules and Intel is exploring the possibility of adding fully depleted SOI.

Intel looked into partially depleted SOI technology about the time that IBM did and ruled it out because the cost was too high and the performance benefit based upon that cost was limited. But Bohr said the company is now looking into fully depleted SOI technology. There is no determination whether Intel will use that technology at future nodes, but it remains a possibility.

The difference between partially depleted and fully depleted is that in a fully depleted device the source and drain of a transistor are depleted all the way down to the oxide. The channel is consequently deeper, which in turn provides better insulation. With SOI, chipmakers typically can get a boost in either performance or power. But with performance now far less of an issue than power in most applications, the focus is on using SOI to save power.

“SOI technologies have a slope factor of approximately 1.2-1.3,” said Rabaey. “There is currently a lot of research on the development of devices with an ‘n’ smaller than 1 (such as Tunnel-FETs or TFETs, and other hetero-devices). This would allow for lower voltages. Right now this is purely experimental though.”

There is no simple answer to how power issues need to be addressed. The clear implication, however, is that design will become more complicated in some areas even as it becomes simpler in others. Restrictive design rules will limit what design engineers can do, but they will open up all sorts of possibilities for power modeling and engineering that never existed—or needed to exist—before.

As IBM’s top engineers have said repeatedly, each new node requires some group to feel the pain. In the past, much of that pain was absorbed in the manufacturing and foundry process. The next phase will hit the design engineer and the verification methodology. After that, it’s anyone’s guess.
