Designing Systems For Power And Throughput

Computation may be a high priority for performance, but it’s one of the least energy-intensive parts of the overall system-level design.

By Ed Sperling

Most of the energy consumed inside processors is no longer for computation. It goes to the things most chip designers think about only after the design is completed, such as communication inside and outside the chip, managing those communications, and managing power levels across the chip.

Research from Intel Labs, unveiled at the Intel Developer Forum this week, shows that for a supercomputer to achieve performance of 1 teraflop (one trillion floating-point operations per second), it now takes 200 watts for communication, 150 watts for memory to feed it, 100 watts for the computation, 100 watts for the external disk, 1,500 watts for control, 950 watts for the power supply and 2,000 watts for heat removal.
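Summing those figures makes the imbalance stark. The short Python sketch below is just illustrative arithmetic over the numbers quoted above; it tallies the budget and computes each component's share of the total.

```python
# Back-of-the-envelope check of the 1-teraflop power budget
# quoted above (all figures in watts, from the article text).
budget = {
    "communication": 200,
    "memory": 150,
    "computation": 100,
    "external disk": 100,
    "control": 1500,
    "power supply": 950,
    "heat removal": 2000,
}

total = sum(budget.values())
print(f"Total: {total} W")  # 5000 W
for part, watts in sorted(budget.items(), key=lambda kv: -kv[1]):
    print(f"{part:>14}: {watts:>5} W ({watts / total:.0%})")
```

Run it and computation comes out to just 2% of the 5,000-watt total, which is the point of the dek: the math itself is one of the least energy-intensive parts of the system.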

These may seem like enormous numbers compared with what is consumed even in communication base stations, and they're orders of magnitude higher than in many consumer devices. But the ratios are relevant even for consumer devices (minus the heat removal, in most cases), said Nash Palaniswamy, senior manager for throughput computing in Intel's Data Center Group.

“The commercial world is all about balance,” said Palaniswamy. “You get the maximum you can from multiple cores. If you look back 15 years ago, algorithms could not work across cores, so it made communication impossible. Now we’re able to take advantage of multiple cores.”

At least that's true in the supercomputing space. In the consumer world, many applications cannot be threaded or parallelized beyond a certain point. Intel has been focusing on a concept called balanced computing, which means that all the pieces in the computer function at the same rate so there are no bottlenecks. For example, it doesn't pay to put in an advanced component just because it's available if the rest of the device won't run any faster or better.
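A minimal way to see why balance matters is to model the system as a pipeline whose end-to-end rate is capped by its slowest stage. In the Python sketch below, the component names and rates are hypothetical, chosen only to illustrate the idea.

```python
# Illustrative sketch of "balanced computing": overall throughput
# is bounded by the slowest component, so upgrading one part past
# the others buys nothing. All names and rates here are made up.
def system_throughput(rates):
    """End-to-end rate of a pipeline is bounded by its slowest stage."""
    return min(rates.values())

rates = {"cpu": 100.0, "memory": 40.0, "network": 60.0}  # units/sec, hypothetical
print(system_throughput(rates))  # 40.0 -- memory is the bottleneck

rates["cpu"] = 500.0  # a 5x faster CPU...
print(system_throughput(rates))  # ...still 40.0: no system-level gain
```

In this toy model, quintupling the CPU rate changes nothing, which is exactly the argument against dropping an advanced component into an otherwise unbalanced device.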

John Gustafson, a fellow in Intel Labs, said the new focus is on communication across systems. “It’s painful,” he said. “The cost per use is in the communication, not the wires.”

What's particularly interesting, Gustafson said, is that this is the way the human body works. The majority of the energy in the brain is spent on communication, not on processing.

“Things like larger cache allow the design to save power because it’s better to stay on chip than go off chip,” he said. “Right now, we’re spending about 10% of the power on communication and 90% on computation. In the future, we’ll be spending 90% on communication and 10% on computation. For all intents and purposes, floating point is now free.”
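Gustafson's cache point can be made concrete with a simple expected-energy model. In the sketch below, the per-access energy figures and hit rates are hypothetical, picked only to show the on-chip versus off-chip ratio, not measured values.

```python
# Rough sketch of why a larger cache can save power: staying on
# chip costs far less energy per access than going off chip.
# Both energy constants below are assumptions for illustration.
E_ON_CHIP_PJ = 10.0     # assumed energy per on-chip (cache) access, picojoules
E_OFF_CHIP_PJ = 1000.0  # assumed energy per off-chip (DRAM) access, picojoules

def avg_access_energy_pj(hit_rate):
    """Expected energy per memory access for a given cache hit rate."""
    return hit_rate * E_ON_CHIP_PJ + (1.0 - hit_rate) * E_OFF_CHIP_PJ

for hit_rate in (0.90, 0.95, 0.99):  # a larger cache raises the hit rate
    print(f"hit rate {hit_rate:.0%}: {avg_access_energy_pj(hit_rate):.0f} pJ/access")
```

Under these assumed numbers, pushing the hit rate from 90% to 99% cuts the average access energy from roughly 109 pJ to about 20 pJ, which is the sense in which a bigger cache saves power.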


