Mythbusters: Moore’s Law, Low Power And The Future Of Chip Design

Things aren’t as dire as you’d expect given the sound bites, the stories of woe and nearly 20 months of economic bad news.

By Ed Sperling
Contrary to popular belief, Moore’s Law is not in serious trouble. Nor will active power in most devices be reduced to the millivolt or microvolt level anytime in the near future. And chip design will not disappear, be relegated to the push of a button or move offshore from one low-cost wage location to the next until ultimately it gets to a place where no one is paid a salary.

These myths, and the hysteria that feeds on them—particularly in a competitive global market where economies rise and fall for reasons of their own and recover at different rates—tend to grow with each node. What follows is a look at what’s changed, what will continue to change, and what will not change—the result of dozens of interviews conducted by Low-Power Design, sifting through dozens more presentations by leading executives and technologists, and a hard look at the economic data and earnings reports from still dozens more of the major players in the electronics world.

Moore’s Law
When Gordon Moore wrote a paper about doubling the number of transistors that can be crammed onto a piece of silicon every 18 months to two years—he actually used the word “crammed”—most people paid no attention to it. Well, it turns out, for better or worse, Moore was right. But that doesn’t mean everyone is following Moore to the letter. And it doesn’t mean that options for sticking to the timetable won’t persist for the next decade or so.
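Whatever the exact cadence, the arithmetic behind the law is simple exponential growth. Below is a minimal sketch of that math; the starting transistor count, time horizon and two-year doubling period are illustrative assumptions, not figures from Moore’s paper or from any vendor roadmap.

```python
# Back-of-the-envelope sketch of a doubling cadence as an exponential.
# The starting count and time horizon are illustrative assumptions only.

def projected_transistors(start_count: float,
                          years: float,
                          doubling_period_years: float = 2.0) -> float:
    """Transistor count after `years`, assuming one doubling every `doubling_period_years`."""
    return start_count * 2 ** (years / doubling_period_years)


if __name__ == "__main__":
    # Hypothetical example: a 1-billion-transistor design projected ten years
    # out on a two-year cadence grows by a factor of 2**5 = 32.
    print(f"{projected_transistors(1e9, 10):.3e}")  # 3.200e+10
```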

In fact, Intel’s current road map now runs down to 11 nanometers. It may go lower, but that’s what Intel has talked about publicly. IBM’s current road map extends all the way to 3nm. Moreover, IBM is developing that technology with the help of many of the major players in the electronics industry.

That world includes nanowires and carbon electronics—basically the nano revolution that has been promised for the better part of a decade—but at least some of that technology is already well under development. Sources say Intel is currently working on 11nm technology, and IBM publicly says it is working on 15nm technology, which includes 3D transistors, or FinFETs, air gap interconnect technology, 3D packaging and computational scaling—basically engineering the mask computationally because extreme ultraviolet lithography is not yet available.

To the naked eye this computational scaling data looks like a morass of unintelligible gobbledygook, but to a foundry’s process technologists it all makes perfect sense. And it reduces the need for highly restrictive design rules and double patterning, which limit innovation in developing chips. IBM publicly announced it had teamed up with Mentor Graphics and Toppan Printing last year to develop CS technology.

That doesn’t mean every vendor will follow Moore’s Law, of course—at least not literally. Node skipping is rampant because developing new chips is expensive—and will remain expensive, largely because the amount of time necessary to verify a chip hasn’t changed. That’s a business decision, though, made by companies looking at an increasingly fragmented market. TSMC has begun offering high-k/metal gate and polysilicon oxynitride versions of chips at the same node for different markets—those driven by performance and/or low power and those that are extremely price sensitive.

Nor does it mean that verification is falling behind the pace of chip design at each node. The fact that more complex chips can be turned out in the same time frame is roughly the equivalent of the argument that the price per transistor has gone down at each node. The tools are working just fine. What they’re not doing is progressing faster than the complexity is increasing. Low-power designs with multiple power domains are much more difficult to verify, but they do offer huge power savings.

Lower power
But just how low is low power? There are, indeed, ultra-low power devices under development that can scavenge enough energy from vibration in a road or bridge to send wireless signals to a central processor for a single duty cycle. But for most mobile applications, there are practical limits.

Why? Because electronic components require a certain amount of charge to work at all. A gate leaks below a certain voltage, and so does memory. Percy Gilbert, vice president of silicon technology at IBM’s semiconductor R&D center in Fishkill, N.Y., said the practical limit is about 0.6 volts for most mobile applications.

“Most devices will remain operational down to 0.6 volts,” Gilbert said. “But for performance reasons, most folks will not go below 0.9 volts.”
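For context, dynamic switching power scales roughly with the square of the supply voltage (P ≈ C·V²·f), which is why the gap between 0.9 and 0.6 volts matters so much. The sketch below is a hedged illustration of that ratio only; the capacitance and clock frequency are arbitrary placeholders, not IBM data.

```python
# Rough illustration of why supply voltage dominates dynamic power:
# P_dynamic ≈ C * V^2 * f. The capacitance and clock frequency here are
# arbitrary placeholders; only the ratio between the two voltages matters.

def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    """Approximate dynamic switching power in watts."""
    return c_farads * v_volts ** 2 * f_hz


p_nominal = dynamic_power(1e-9, 0.9, 1e9)  # running at 0.9 volts
p_low     = dynamic_power(1e-9, 0.6, 1e9)  # the same circuit held at 0.6 volts

print(f"0.6 V draws {p_low / p_nominal:.0%} of the dynamic power at 0.9 V")  # ~44%
```

Holding frequency constant is itself an idealization; in practice a lower supply voltage also means a slower clock, which is the performance penalty that keeps most designs at 0.9 volts.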

Given that battery technology has largely stalled—making only modest progress, mostly in charging speed and far less in how much energy can be stored—the bulk of the savings will have to come from the chip’s design and the software that runs on it. Unfortunately, while the design possibilities are relatively well understood, chipmakers are reluctant to design chips with multiple power islands because the risk of something going wrong rises with each new power island. (See Experts At The Table: Building A Better Mousetrap.) Moreover, the complexity of verifying the chip goes up significantly because the islands have to be verified in every state and possible configuration.
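To see why that verification burden climbs so quickly, consider a hedged sketch: if each island can sit in one of a handful of states, the number of chip-level power configurations multiplies with every island added. The island names and per-island states below are hypothetical, not a real design or any particular verification flow.

```python
# Hypothetical sketch of power-state explosion. The island names and the
# per-island states are made up; the combinatorics are the point: with k
# states per island and N islands there are k**N configurations to cover.

from itertools import product

ISLAND_STATES = ("on", "off", "retention")               # assumed per-island states
islands = ["cpu", "gpu", "modem", "dsp", "always_on"]    # hypothetical power islands

configurations = list(product(ISLAND_STATES, repeat=len(islands)))
print(len(configurations))  # 3**5 = 243 static configurations

# One more island triples the space again—before any checking of legal
# transitions between configurations is even considered.
print(len(ISLAND_STATES) ** (len(islands) + 1))  # 729
```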

What has changed the power scaling dynamics most significantly in the past few years is the advent of high-k/metal gate technology. By limiting leakage, it restores the benefits of classical scaling that stalled at 90nm, which are showing up again at 32nm and below—offering performance and/or power improvements.

The economics of chips
Finally, at least some of the hesitancy to implement new power-saving techniques and rush to the bleeding edge of Moore’s Law has to do with economics. Chipmaking has always been cyclical, even if the bubbles and pops have been far less noticeable in the design world. But with more and more of the market tied to consumer spending habits rather than more predictable corporate technology refresh rates, not to mention the proliferation of advanced technology development around the globe, sometimes it looks as if chipmakers are acting like deer in the headlights.

Much of that will change as tapeouts move into production and foundries begin recovering. While many designers in the United States and Europe complain that jobs have moved to lower-wage areas in Asia, engineers in places like China complain the more lucrative jobs are still in places like the United States and Europe. The cost difference for a fully trained design engineer across these locations used to be a factor of 20. The gap has closed significantly in the past five years, and in specialties like chip architecture or power modeling the differences are nonexistent.

All of the big chip companies are global operations, and the problems that chip companies are encountering on a regular basis require all the best minds on the planet to solve. This is tough stuff, to be sure, but so far no one is talking about tossing in the towel.


