Rethinking The Data Center

Total cost of ownership extends way beyond the hardware, and power is now a key piece of the equation.


Ever since the introduction of the PC, the biggest challenge in computing has been more about getting software to take advantage of multiple processors or cores than about getting the chips to run faster. Ironically, this issue was solved decades ago inside data centers. Enterprise applications, built on databases, have always been relatively easy to partition so that individual pieces can be run separately and then tied back together centrally.
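The pattern is essentially scatter-gather: split the data, work on each slice independently, and combine the partial results. Below is a minimal sketch in Python; the workload (summing numbers) and the partitioning scheme are hypothetical stand-ins, not any particular enterprise application.

```python
# Sketch of the partition-and-recombine pattern described above.
# The per-partition job here is a trivial sum, standing in for real work.
from concurrent.futures import ProcessPoolExecutor

def process_partition(rows):
    # Work on one slice of the data independently, e.g., one key range.
    return sum(rows)

def run_partitioned(data, num_workers=4):
    # Split the dataset into roughly equal partitions.
    chunk = max(1, (len(data) + num_workers - 1) // num_workers)
    partitions = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Run each partition on its own process, then tie the results back together centrally.
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        partials = pool.map(process_partition, partitions)
    return sum(partials)

if __name__ == "__main__":
    print(run_partitioned(list(range(1_000_000))))
```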

IBM invented symmetric multiprocessing back in the 1960s, and it has been the main method used to accelerate applications ever since. To run an application faster, throw more cores or processors at it. To make it go faster still, turn up the clock speed on those processors and add even more of them.

What hasn’t changed significantly, though, is the fundamental flow of data. Faster networking certainly allows it to move around the data center more easily, but how that data is actually handled in memory, how quickly it passes through I/O, and how it interfaces with the hardware collectively represent the next big area of opportunity. This one is driven less by performance, though, than by power considerations. It’s always possible to add more machines and processors, but it now costs more to keep them running and to cool them than to buy and maintain them. So fewer is better, particularly if you can maintain the same performance and reliability, both of which are critical for data centers.
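A back-of-the-envelope comparison shows why the balance tips toward power. The figures below are hypothetical placeholders, not measured data; real numbers depend on the server, the facility’s cooling overhead and local electricity rates.

```python
# Rough lifetime energy cost for one server, including facility overhead
# (cooling, power distribution) via PUE. All figures are hypothetical placeholders.
watts_per_server = 500        # assumed average draw under typical load
pue = 1.8                     # assumed power usage effectiveness of the facility
electricity_rate = 0.12       # assumed $ per kWh
years_in_service = 4
purchase_price = 3000         # assumed $ per server

hours = years_in_service * 365 * 24
kwh = watts_per_server * pue * hours / 1000.0
energy_cost = kwh * electricity_rate

print(f"Energy + cooling over {years_in_service} years: ${energy_cost:,.0f}")
print(f"Purchase price of the server:          ${purchase_price:,.0f}")
```

With these placeholder numbers the electricity and cooling bill alone exceeds the purchase price over the server’s life, which is the argument for running fewer, better-utilized machines.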

There’s been a lot of talk lately about microservers. Five years ago these would have been considered toys and scoffed at by CIOs and data center managers. In fact, one executive used that exact term in a conversation with me several years ago at a low-power conference. But with electricity costs escalating because there are simply far too many servers, most of them grossly underutilized, the opportunity to drive down costs with meshed networks of small servers is looking very attractive. And because corporate applications can now span many smaller machines at once, the approach is gaining serious attention among server makers, chipmakers and enterprise software makers.
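The consolidation math is what makes the idea attractive. As a sketch, and again with invented numbers, compare a fleet of lightly loaded conventional servers against just enough low-power nodes to carry the same aggregate work.

```python
import math

# Hypothetical fleet of conventional servers, mostly idle.
legacy_servers = 100
legacy_watts_each = 400          # assumed draw even at low load
legacy_utilization = 0.10        # assumed 10% average utilization

# Hypothetical low-power nodes.
micro_watts_each = 30            # assumed draw per node
micro_capacity_fraction = 0.15   # assume one node ~ 15% of a legacy server's capacity
target_utilization = 0.70        # run the small nodes much hotter

# Useful work being done today, expressed in "legacy server" units.
useful_work = legacy_servers * legacy_utilization
# Nodes needed to carry that work at the target utilization.
micro_nodes = math.ceil(useful_work / (micro_capacity_fraction * target_utilization))

legacy_power = legacy_servers * legacy_watts_each
micro_power = micro_nodes * micro_watts_each
print(f"{micro_nodes} low-power nodes drawing {micro_power} W "
      f"vs. {legacy_servers} servers drawing {legacy_power} W")
```

Whether savings like these hold up in practice depends on whether the workload really does partition cleanly across many small machines, which is exactly the software question raised above.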

This same phenomenon turned VMware and Citrix into stars of the software world in the early part of the decade, when server proliferation and underutilization first came under scrutiny. That was just a first step, though. The next step is to build intelligent networks of servers that are flexible and heterogeneous, and that can function either as a single unit or as individual machines handling many asynchronous processing jobs.

There is much work to be done, but the opportunity grows with every watt that can be saved.


