The Argument For Low Power In The Data Center

Enterprise IT managers are now demanding lower power components. What’s changed, and what does it mean for semiconductor design?


By Ann Steffora Mutschler

For both budgetary and ‘green’ reasons, enterprise IT customers are demanding higher energy efficiency from their servers. That demand ultimately lands on the shoulders of the processor designer, because the MPU is a significant source of a server’s power consumption.

Interestingly, the hidden and ugly truth is that for most data center managers, the electricity bill for the data center never appears in their own budget. As a result, they usually don’t get credit for saving electricity, said Carl Claunch, vice president and distinguished analyst at Gartner. “Typically, the building budget handles the electricity for the entire building and the data center is just swept in there.”

What that means is that the data center manager may not have a direct incentive for energy efficiency. “Typically, it’s that you’ve reached some wall—you’ve hit a barrier where now you’re going to have to upgrade the electrical service because the total energy you’re using maxes out the wiring or cooling you’ve put in place,” he said.

Often, heat is more of an issue. “Even if you have the total capacity to remove the heat, if you get above a certain amount of energy per rack of servers, you start getting poor reliability because of hot spots. Even though in total you could remove the heat, you’re not doing it efficiently enough across the entire room to not have some machines roasting or melting, dripping slag on the floor – so you have to do something. And typically the solutions to those spot problems escalate rapidly as you put in specialized things, and now that really does affect your budget,” Claunch said.

“These kinds of tactical situations that crop up where you can avoid expense or avoid having to build a new data center through getting more efficiency—that’s a good thing. I think, in general, people would like to do the right thing, even though it’s at a lower priority in terms of affecting your buying behavior. But there is this attitude that says, ‘if everything else is pretty much equal, why don’t I get the thing that uses less energy?’ There is an interest in that, but it doesn’t have a huge effect on people’s selections unless they hit these tactical issues.”

Demand for efficiency

The demand for efficiency thus flows from the server vendors’ engineering organizations back to the processor designers.

Arvind Narayanan, product marketing manager for Mentor Graphics’ place and route tools, said the costs of electricity and cooling are two key factors limiting server farm growth. “Data center energy consumption is escalating each year, and power and cooling costs have increased exponentially in the last decade.”
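
A rough calculation makes the scale of those costs concrete. The figures below are illustrative assumptions for the example, not numbers from anyone quoted here:

```latex
% Illustrative annual energy cost of one server; every input is assumed.
% Energy = server draw x PUE (cooling/distribution overhead) x hours per year
\[
400\,\mathrm{W} \times 1.8 \times 8760\,\mathrm{h/yr} \approx 6307\,\mathrm{kWh/yr}
\]
\[
6307\,\mathrm{kWh/yr} \times \$0.10/\mathrm{kWh} \approx \$630 \text{ per server, per year}
\]
```

Multiplied across thousands of servers and a multi-year service life, figures of that order help explain why energy use eventually maxes out a facility’s wiring, cooling, and budget.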

However, he said, energy costs are not the only reason low power design is critical for the data center. “Power consumption for traditional microprocessor architectures is increasing faster than performance, making it near impossible to keep CPUs from burning themselves up. Consequently, the race for faster clocks has reached a point of diminishing returns, and microprocessor makers have been forced to employ multicore architectures, which are inherently more power-efficient than increased clock speeds, to increase computing throughput.”
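
The physics behind that trade-off can be sketched with the standard simplified dynamic power relation; treating supply voltage as scaling roughly with frequency is a first-order assumption, not an exact rule:

```latex
% Simplified CMOS dynamic power: activity x capacitance x voltage^2 x frequency
\[
P_{\mathrm{dyn}} = \alpha\, C\, V^{2} f,
\qquad V \propto f \;\Rightarrow\; P_{\mathrm{dyn}} \propto f^{3}
\]
% Doubling throughput by doubling clock speed: ~2^3 = 8x dynamic power.
% Doubling throughput with a second identical core at the same f: ~2x power.
```

Under those assumptions, doubling throughput by raising the clock costs roughly eight times the dynamic power, while adding a second core costs roughly twice—the arithmetic behind the multicore shift Narayanan describes.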

In the thick of processor design, Advanced Micro Devices this week announced three new members of its six-core Opteron processor family, aimed at rising demand for balanced systems with increased performance and greater power efficiency for cloud computing and web serving environments.

John Fruehe, director of business development for AMD servers and workstations, said high-efficiency (HE) processors comprise 20% to 25% of the company’s server business. “The energy efficiency angle for our processors has gone from being a small part of the business to a much larger part of the business, so much so that we are finding now that we are sub-splitting the low power into the mainstream power (HE) and then a very highly efficient product that has an even lower [power consumption].”

“Power consumption and heat go hand in hand. The more you can reduce the power consumption, the more you can reduce the heat that is put out. Also, AMD has done things specifically inside the processor to reduce heat and power, called Cool Core technology, which can turn off parts of the processor that are not being used. In current systems today, there is one floating point unit (FPU) for each core. Not a lot of apps are utilizing that FPU very often, so the ability to shut that down if there are no FP instructions coming through saves a lot of power,” he explained.
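
A back-of-the-envelope model shows why gating an idle unit matters. Every number in this sketch is invented for illustration; none is an AMD specification:

```python
# Toy model: energy saved by power-gating an idle FPU.
# All numbers are illustrative assumptions, not AMD specifications.

FPU_POWER_W = 3.0        # assumed draw of one FPU when powered on, watts
FP_UTILIZATION = 0.10    # assumed fraction of time FP instructions are in flight
CORES = 6                # one FPU per core, as in the systems Fruehe describes
HOURS_PER_YEAR = 8760

# Without gating, every FPU burns its full power whether or not it is used.
always_on_kwh = FPU_POWER_W * CORES * HOURS_PER_YEAR / 1000

# With gating, an FPU draws power only while FP instructions are executing.
gated_kwh = FPU_POWER_W * FP_UTILIZATION * CORES * HOURS_PER_YEAR / 1000

print(f"always on: {always_on_kwh:.0f} kWh/yr, gated: {gated_kwh:.0f} kWh/yr")
print(f"saved per chip: {always_on_kwh - gated_kwh:.0f} kWh/yr")
```

Even with these modest assumed numbers, a lightly used unit that stays powered wastes most of its annual energy, which is the case for gating it off.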

The need for power efficiency also drove AMD to build extremely customizable power usage into its processors, with the ability to throttle each core independently. “In some other people’s processors, they might be able to turn all the cores down at the same time to lower clock speeds when utilization drops, or might do it in pairs, but we can have each individual core running at a different clock speed,” he explained. To do this, AMD architected each core to run on its own power plane.
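
On an operating system that exposes per-core frequency control, this kind of independent throttling can be driven from software. Below is a minimal sketch using Linux’s cpufreq sysfs interface; it assumes root privileges and a driver that supports the “userspace” governor, neither of which is guaranteed on a given system:

```python
# Minimal sketch: per-core frequency throttling via Linux cpufreq sysfs.
# Assumes root privileges and a cpufreq driver with the "userspace" governor.
from pathlib import Path

CPUFREQ = "/sys/devices/system/cpu/cpu{n}/cpufreq/{attr}"

def set_core_freq_khz(core: int, freq_khz: int) -> None:
    """Pin one core to a fixed frequency, independent of its siblings."""
    # Hand frequency control for this core to userspace, then set the speed.
    Path(CPUFREQ.format(n=core, attr="scaling_governor")).write_text("userspace\n")
    Path(CPUFREQ.format(n=core, attr="scaling_setspeed")).write_text(f"{freq_khz}\n")

if __name__ == "__main__":
    # Example: keep core 0 at its hardware maximum, drop core 5 to its minimum.
    top = int(Path(CPUFREQ.format(n=0, attr="cpuinfo_max_freq")).read_text())
    low = int(Path(CPUFREQ.format(n=5, attr="cpuinfo_min_freq")).read_text())
    set_core_freq_khz(0, top)
    set_core_freq_khz(5, low)
```

On silicon without independent per-core power planes, the hardware may quietly coerce sibling cores to a shared frequency—exactly the limitation Fruehe says AMD’s per-plane design avoids.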

The EDA perspective on power

Sriram Sitaraman, a director in Synopsys’ IT department, said power consumption in a data center typically tracks the number of compute nodes and the amount of storage. Total power consumption can be reduced by adopting lower-power devices, even though they come with trade-offs in turnaround time and throughput. So if the data center is a highly dynamic environment, he said, the key is to optimize existing equipment rather than buy equipment that is going to lie idle.

Further, “a typical software company is not going to do a lot of investment in lower power devices because it directly affects their turnaround time. However, it makes a lot of sense for companies that have seasonal traffic or are working with metered power,” Sitaraman continued. He observed that the typical trend in the software industry is to optimize existing resources without additional investment.

Added Synopsys’ Rich Goldman, vice president of corporate marketing and strategic alliances: “We are seeing a huge increase in verification on the designs because they are getting so much larger and the physical effects are so much more pronounced. In order to provide all that verification, [customers] are building very large data centers, which have huge power issues.”

To tame dynamic power consumption, processor designers use innovative techniques such as clock gating, special reduced-power states, multiple voltage domains, and voltage and frequency scaling, Narayanan noted. “Below 45nm, leakage power also becomes very significant, sometimes surpassing dynamic power as a percentage of total power consumed. Designers are addressing leakage both with new architectural tricks, such as power gating, as well as with new process technologies like high-threshold-voltage and high-k metal gate transistors, which have inherently less leakage.”
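
The threshold-voltage lever works because subthreshold leakage depends exponentially on the threshold voltage. A first-order form of the relationship:

```latex
% Subthreshold leakage falls exponentially as threshold voltage rises.
% V_T = kT/q is the thermal voltage (~26 mV at room temperature);
% n is a process-dependent slope factor, typically between 1 and 2.
\[
I_{\mathrm{leak}} \;\propto\; e^{-V_{th} / (n\,V_T)}
\]
```

This is why standard-cell libraries mix transistor flavors: high-V_th cells on non-critical paths cut leakage dramatically, while faster, leakier low-V_th cells are reserved for timing-critical paths.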

Goldman expects automation of these techniques to make them easier to use, bolstering their adoption. “Instead of trying to find new ones, we’ll apply what we’ve already learned and automate that,” he added.

Overall, AMD’s Fruehe expects to see more focus on low-power processors, which are set to grow at a faster rate than other processors, along with greater emphasis on driving energy efficiency across the processor line.

“If you take the analogy of the auto industry, you’re seeing a lot of interest in hybrid cars and more being sold, but you are also seeing energy efficiency in the standard, gasoline-driven cars. People are looking for better overall power efficiency. Whether they get it with buying more high efficiency (HE) products or whether the standard products become that much more energy efficient, either way is a good solution as long as I’m saving power at the end of the day,” Fruehe said.


