Greener Data Centers

Analysis: Why corporations are following the energy-saving approaches of consumer electronics.


By Ed Sperling

For decades the race inside the data center was all about performance. If you upgraded from an IBM System/370 mainframe to a System/390, your applications ran faster. And if you upgraded your PC server from a Pentium II to a Pentium 4, you got significantly better performance.

The race now is to reduce the number of servers altogether, to lower the cooling costs per server rack, and to utilize the servers that are running more effectively. Performance is a “nice to have,” but power reduction is a “must have.”

What’s changed in the thinking of data centers and why are server-class electronics now being subject to the same kinds of power-saving concerns as portable battery devices? There are a number of factors to consider, and all of them are converging at the same point.

A messy legacy

To understand the problem inside data centers requires some history—as much as six decades' worth in many large companies. Data centers in many ways look like geological striations. While new technology runs many of the most advanced applications, old mainframes running assembly code, and even minicomputers, are still churning through cycles every day. In many cases no one knows what's even running on those machines. But because whatever is there could be important, or because something else that is known to be important might depend on it, the fear of turning off these machines is palpable.


Figure 1: IBM’s S/360, circa 1964 (Source: IBM)

Large corporations have spent the past several years systematically combing through the data on these machines and others in an effort to get this old hardware out of the data center. It takes up expensive real estate, uses an enormous amount of power—no one even thought about power as an issue when these machines were installed—and requires expensive cooling, because the average data center runs at about 70 to 72 degrees Fahrenheit. The only good news was that early mainframes used water for cooling instead of air, which was much more energy-efficient.

Minicomputers entered the mix in the 1980s as a less-expensive, air-cooled alternative. Those computers are still in use in many companies, alongside mainframes that pre-date them. Ken Olsen, the founder and CEO of the former Digital Equipment Corp. (bought by Compaq and later absorbed by HP), famously said that with minicomputers there would be no need for plumbers. While that made the machines easy to move around, it also set the stage for the more expensive air cooling that followed.


Figure 2: DEC PDP-7 (Source: Wikipedia)

By the 1990s, commodity servers using primarily Intel processors began replacing mainframes. Even IBM and Hewlett-Packard began selling Intel-based machines, usually in the form of blades that could be packed more closely together in a rack. And they were so cheap that business units could afford to use dedicated servers for their individual applications, create their own customized processes and finally put decision-making closer to the customer.

That was the argument, at least, and it was considered the best practice at the time. After 20 years, however, some companies had accumulated hundreds of thousands of these servers, often running only one application apiece at utilization rates as low as 5%. And because they were air-cooled, often with raised-floor construction that cooled from the bottom rather than the top—heat rises, of course—the cooling had to run almost constantly, and often ineffectively.

Virtualization and clouds

Virtualization has been touted by Intel over the past half-decade as the ultimate solution to server sprawl. Rather than run one application per machine, many applications could be run using virtual machines. While the concept was new for PC servers, the technology was invented by IBM back in the 1960s and employed in mainframes for decades.

Virtualization also works particularly well with multicore chips. Because it's impossible to keep cranking up the clock frequency on processors without melting the chip, multiple cores have become a requirement in new chips. But only databases, graphics, some scientific applications and some EDA tools have been able to spread their work effectively across multiple cores. The vast majority of applications can use a maximum of two cores effectively, which creates a business issue for chipmakers: if customers can't figure out a way to use all those cores, there's no reason for them to buy new chips.
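To make that distinction concrete, here is a minimal Python sketch (the workload, sizes and function names are invented for illustration, not tied to any real application) showing the kind of job that does spread across cores versus the same work confined to one core:

```python
# Toy illustration of parallelism across cores: work that splits into
# independent pieces can fan out over every available core, while running
# the same pieces one at a time uses only a single core.
from multiprocessing import Pool, cpu_count

def crunch(n):
    # Stand-in for one independent unit of work (e.g., one query shard).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 32

    # Parallel: scales with the number of cores the machine exposes.
    with Pool(processes=cpu_count()) as pool:
        parallel_results = pool.map(crunch, jobs)

    # Serial: identical work on one core, gaining nothing from the idle cores.
    serial_results = [crunch(n) for n in jobs]

    print(parallel_results == serial_results)  # True; only the elapsed time differs
```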

Virtualization was resurrected as the ultimate solution to that problem. By adding hypervisors to manage the applications running on a single machine, and by dynamically scheduling those applications onto available cores instead of dedicating cores to applications, a system can conserve huge amounts of energy. Old mainframes used this approach primarily to get more out of their compute resources; power consumption is the new competitive weapon.
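As a rough illustration of that scheduling idea (a sketch only, with invented workloads and a simple first-fit policy rather than anything a real hypervisor actually does), consolidation can be thought of as bin packing: fill a few cores close to capacity so the rest can be parked in low-power states.

```python
# Hypothetical sketch of consolidation-style scheduling: pack workloads onto
# as few cores as possible so idle cores can be parked in a low-power state.
# Core counts, workload sizes and the 90% capacity cap are all illustrative.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    load: float  # fraction of one core this workload needs (0.0 - 1.0)

def consolidate(workloads, num_cores, capacity=0.9):
    """First-fit packing: fill each core up to `capacity` before opening another."""
    assignments = [[] for _ in range(num_cores)]
    used = [0.0] * num_cores
    for w in sorted(workloads, key=lambda w: w.load, reverse=True):
        for i in range(num_cores):
            if used[i] + w.load <= capacity:
                assignments[i].append(w)
                used[i] += w.load
                break
    return assignments, used

if __name__ == "__main__":
    apps = [Workload("web", 0.30), Workload("db", 0.45),
            Workload("mail", 0.10), Workload("batch", 0.20)]
    assignments, used = consolidate(apps, num_cores=4)
    for i, (assigned, u) in enumerate(zip(assignments, used)):
        if assigned:
            print(f"core {i}: {[w.name for w in assigned]} ({u:.0%} loaded)")
        else:
            print(f"core {i}: idle -> can be parked in a low-power state")
```

With these made-up loads, four applications end up on two cores and the other two cores can sleep, which is the energy savings the paragraph above describes.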

Cloud computing is another term that has been overhyped in the data center. In practice it is a way of cleaning up data centers, often with a virtualized approach to running applications, and it generally means outsourcing, although in many companies at least part of the cloud sits inside their own data center and is dedicated to their operation. That turns the IT department into a business unit that can run its own profit-and-loss center and keep track of overall costs.

Intel's latest research, which is expected to start showing up in servers made by other companies over the next several years, is to build a cloud on a single chip (see Figure 3). By adding enough cores—48 is the number Intel is currently testing—there is no reason to ever go off the chip. Intel believes total power consumption for such a server could come in at less than 125 watts when fully utilized.


Figure 3: Intel’s prototype for a cloud on a chip. (Source: Intel)

What this does, in effect, is bring resource sharing down to the chip level instead of spreading it across machines. At that point the challenge of getting computers to talk to each other and to shift resources among themselves becomes far more contained, and power consumption becomes a much more localized problem.

To some extent, this is no different from what has been happening in smartphones. When cores are not in use they drop into various sleep modes. It doesn't matter, for example, if a game takes a couple of seconds to load, but it is essential that the phone function always be on and ready to work.

The same type of control can be applied to data centers. A search of old data, for example, can stand a wait of several seconds, while a transaction from a customer must be instantaneous. Running a payroll application likewise can stand behind a more critical function in a data center, such as blocking a possible security breach.
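A minimal sketch of that kind of priority scheduling (the task names and priority levels below are invented for the example, not any particular data center's policy) can be expressed with an ordinary priority queue:

```python
# Illustrative priority scheduler: latency-critical work runs first, while
# deferrable jobs such as a payroll batch or an archive search wait their turn.
import heapq
import itertools

class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order within a priority

    def submit(self, priority, name):
        # Lower number = more urgent (0 = security response, 1 = customer
        # transaction, 5 = payroll batch, 9 = archive search).
        heapq.heappush(self._queue, (priority, next(self._counter), name))

    def run_all(self):
        while self._queue:
            priority, _, name = heapq.heappop(self._queue)
            print(f"running (priority {priority}): {name}")

if __name__ == "__main__":
    sched = PriorityScheduler()
    sched.submit(9, "search of archived data")
    sched.submit(5, "payroll batch run")
    sched.submit(1, "customer transaction")
    sched.submit(0, "block suspected security breach")
    sched.run_all()
```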

This type of scheduling on a single machine, let alone across clusters of machines, is a new concept, however. In the mainframe and minicomputer days, all resources were managed locally. In the PC world, particularly for machines connected to the Internet, management can be centralized across a global corporation. But in the new model it can also be centralized on a single machine once again, one with enough processing power and low enough power requirements to significantly cut costs while maintaining at least the same performance, even if applications cannot utilize multiple cores.

At that point, it may be more a matter of scheduling priority—and in some cases, paying for that priority access even within a company—than how fast the machines are running. After decades of arguing for centralized control as the most efficient way of using resources, many data center managers are finding it’s also the most efficient way to use power.

Does that sound familiar?


