Data Center Power Poised To Rise

Shift to cloud model has kept power consumption in check, but that benefit may have run its course.


The big power-saving effort that kept U.S. data-center power consumption low for the past decade may not keep the lid on much longer.

Faced with the possibility that data centers would consume a disastrously large percentage of the world’s power supply, data center owners and players in the computer, semiconductor, power and cooling industries ramped up efforts to improve the efficiency of every aspect of data-center technology. The collective effort was so successful that overall data-center energy consumption rose from 1.5% of all power used in the U.S. in 2007 to just 1.8% in 2016, despite enormous growth in the number of data centers, servers, users and devices involved, according to a 2016 report from the U.S. Dept. of Energy’s Lawrence Berkeley National Laboratory (LBNL).

Power used by U.S.-based data centers rose 90% between 2000 and 2005, but only 24% between 2005 and 2010. Growth then slowed to roughly 4% between 2010 and 2014, according to the LBNL report, which projected another 4% increase through 2020.
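
As a rough illustration of how flat that trajectory has become, the growth rates above can be compounded from a normalized baseline. The sketch below uses only the percentages cited in this article; the absolute scale is illustrative, not an LBNL figure.

```python
# Compound the reported growth rates from a normalized 2000 baseline (1.0).
# The baseline is arbitrary; only the relative shape of the curve matters here.

e_2000 = 1.0
e_2005 = e_2000 * 1.90   # +90% between 2000 and 2005
e_2010 = e_2005 * 1.24   # +24% between 2005 and 2010
e_2014 = e_2010 * 1.04   # ~+4% between 2010 and 2014
e_2020 = e_2014 * 1.04   # LBNL's projected ~+4% between 2014 and 2020

for year, value in [(2005, e_2005), (2010, e_2010), (2014, e_2014), (2020, e_2020)]:
    print(f"{year}: {value:.2f}x the 2000 level")
```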

It was the cloud, however, not the rising efficiency of individual computing devices and cooling systems, that made the biggest difference. Rapid growth in the number of hyperscale data centers shifted workloads into highly efficient hyperscale facilities and away from the comparatively inefficient mix of enterprise data centers, server closets and standalone servers in which they had been running, according to Dale Sartor, a staff scientist/mechanical engineer at LBNL who worked on both the original 2007 LBNL report and the 2016 follow-up.

The popularity of cloud computing has grown to the point that there were 420 hyperscale data centers worldwide, according to a May 2018 report from Synergy Research Group. Approximately 44% of those facilities were in the U.S., a share Synergy had reported the previous December.


Fig. 1: Hyperscale data center operators. Source: Synergy Research Group

Efficiency by the numbers
Economies of scale, plus the ability to customize chips, servers, staffing and almost every other variable, make hyperscale data centers efficient enough that they routinely register Power Usage Effectiveness (PUE) scores of 1.1, as opposed to the 2016 average of 1.8 for all U.S. data centers, according to the LBNL report. Even 1.8 is a big improvement over 2005, however, when an Uptime Institute report estimated that the 18 data centers it examined had installed 2.6 times as much cooling capacity as they needed, were troubled by persistent hot spots in the server room, and had cold-air venting so badly misconfigured that only 40% of the cold air intended for the server racks actually got there.
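
PUE is defined as the ratio of total facility energy to the energy delivered to IT equipment, so a score of 1.0 would mean every watt goes to compute. A minimal sketch, using hypothetical facility numbers, shows what the gap between 1.1 and 1.8 means in overhead terms:

```python
# PUE = total facility energy / energy delivered to IT equipment.
# The kWh figures below are hypothetical, chosen only to illustrate the
# 1.1 (hyperscale) vs. 1.8 (2016 U.S. average) gap described above.

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

hyperscale   = pue(total_facility_kwh=1_100_000, it_equipment_kwh=1_000_000)  # -> 1.1
typical_2016 = pue(total_facility_kwh=1_800_000, it_equipment_kwh=1_000_000)  # -> 1.8

# At a PUE of 1.8, 0.8 kWh of cooling, power conversion and other overhead is
# spent for every 1 kWh of useful IT work; at 1.1 that overhead drops to 0.1 kWh.
print(f"hyperscale PUE: {hyperscale:.1f}, typical 2016 PUE: {typical_2016:.1f}")
```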

Enterprise data centers can’t really compete with the economies of scale possible at more massive facilities, but they can improve their own efficiency by as much as 80% by adopting similar tactics and power-saving technologies, such as low-power chips and SSDs rather than spinning hard drives, according to Jonathan Koomey, a lecturer at Stanford University and co-author of both the 2007 and 2016 LBNL reports.

“With virtualization and movement to the cloud, not only are your data centers more efficient, utilization is high, so you get more bang for the buck for every server,” Sartor said.

With full implementation of all the report’s recommended efficiency measures, total data center energy use could have been lower by a third, the report said. Every workload moved out of an inefficient data center and into the cloud improved the power efficiency of U.S. data centers overall, but there is only so much running room in any set of improvements, and the ones that have kept power use down across the data center market are nearly exhausted.

“You can only consolidate so far and only migrate so many workloads to the cloud, for example,” Sartor said. “So once you’ve reached 100% of what you can consolidate, or are as efficient as you can get in how you run workloads, growth in the number of workloads will have more impact than it has to this point.”

It’s very likely data center power use will begin to climb. For one thing, the industrial and consumer Internet of Things will connect 31 billion devices by the end of this year, according to market researchers at IHS Markit. The research firm expects there to be as many as 73 billion by 2025.

“We did take IoT into account in our figures for 2016, but most of those devices don’t reside in the data center, so they wouldn’t be included in a measurement specifically of data-center consumption,” Sartor said.

Cryptocurrency madness
None of this accounts for GPU-packed rigs mining Bitcoin and other cryptocurrencies, following the increase in Bitcoin’s value from $930 in January 2017 to more than $19,000 that December. The boom started a run on GPUs, whose prices rose because they were in such short supply, and created a migratory global population of cryptocurrency miners. That spawned massive, anomalous demand for power in Iceland, Venezuela and several towns in the American Midwest as miners search for the lowest power costs and highest margins for their operations.

Cryptocurrency mining sucks up so much power, however, that some municipalities and a few small countries are running into trouble keeping up with demand. Households in Iceland, where power is very cheap because it comes from geothermal and other renewable sources, use about 700 gigawatt-hours of electricity annually. But crypto miners now pull roughly 140 gigawatt-hours per year more than that, making Iceland the first country to spend more energy on cryptomining data centers than on its households, according to ambCrypto.
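
A quick back-of-the-envelope check of that comparison follows. The mining total is inferred from the figures above, not taken from a separate measurement.

```python
# Household figure comes from the article; the mining total is implied by the
# "140 gigawatt-hours more" claim and is an approximation, not a measured value.

households_gwh = 700                 # annual household electricity use in Iceland
mining_gwh = households_gwh + 140    # cryptomining reportedly exceeds that by ~140 GWh

print(f"Estimated cryptomining demand: {mining_gwh} GWh/year")
print(f"Mining exceeds household use by {mining_gwh / households_gwh - 1:.0%}")
# -> roughly 20% more energy for mining than for all households combined
```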

Collectively, all cryptomining operations worldwide could use more electricity this year than all the electric vehicles on the road. Cryptomining will pull 140 terawatt-hours of electricity in 2018, which is about 0.6 percent of global power use, according to a January Morgan Stanley report.

One Bitcoin transaction, on average, requires 1,037 kilowatt-hours, and worldwide Bitcoin mining accounts for 71.12 terawatt-hours of annual demand. Cryptocurrency mining is not a traditional data-center activity, however, and not one that can be inexpensively migrated to the cloud. Instead, it runs on dedicated rigs competing to complete blockchain proof-of-work calculations.
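
Those figures can be sanity-checked against each other. The cross-check below is an editorial calculation from the numbers cited above, not a figure from the Morgan Stanley report or any other source.

```python
# Cross-check the cryptomining figures quoted in this article against each other.

global_mining_twh = 140                 # Morgan Stanley estimate for all cryptomining, 2018
global_electricity_twh = 140 / 0.006    # ~0.6% of global use implies ~23,000 TWh worldwide

bitcoin_twh = 71.12                     # worldwide Bitcoin demand cited above
kwh_per_transaction = 1_037             # average energy per Bitcoin transaction
transactions_per_year = bitcoin_twh * 1e9 / kwh_per_transaction

print(f"Implied global electricity use: ~{global_electricity_twh:,.0f} TWh/year")
print(f"Implied Bitcoin transactions: ~{transactions_per_year / 1e6:.0f} million/year")
```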

It’s reasonable to expect most automation and IoT projects will connect through the data center, and that the data center will actually be a cloud owned by Google, Microsoft, Amazon or one of the other big hyperscale players, according to Geoff Tate, CEO of Flex Logix, who expects that rapid advances in capability and custom-designed efficiency eventually will draw 90% of all computing to the cloud.

“The hyperscale companies have reached a scale where they can just say they’re going to do something themselves, even designing their own chips optimized for a particular workload, which is a huge breakthrough,” said Tate. He expects to see cloud providers continue to make steady improvements in speed, flexibility and the efficient use of power and hardware, at a scale other data-center providers couldn’t consider in the past.

One experiment Microsoft is pouring resources into is a tubular container with a compact data center inside, which will be connected to land by a power cable delivering half a megawatt to the underwater facility once it’s up and running. The undersea data center is just one in a continuing series of updates and changes hyperscale data-center providers keep making to their offerings, Tate said.

Changes in data center architecture, power, cooling, location and operations are happening quickly enough to offset the diminishing returns on LBNL’s recommended improvements.

There is certainly enough investment to consider those improvements works in progress, and very little indication any of the seven largest cloud providers want to do anything but consolidate most computing into the cloud. To make sure they have the capacity, their capex investments in new facilities could reach $100 billion during 2018, according to Synergy, compared to $74 billion in 2017.

The edge effect
There is a counter-argument to the centralization of resources, which says it makes more sense to put limited computing facilities near the devices themselves to run latency-sensitive analytics locally and in a timely way. This also shrinks data feeds from thousands of devices into short reports, which can then be merged automatically into databases in the cloud.

What the edge will look like, and which functions will be most common in its list of data-processing tasks, is still up in the air, however. Every project manager will come up with a slightly different answer to questions about what data to analyze or store locally and what to send to the cloud, because those decisions depend on latency sensitivity, the cost and security of transmission, and a dozen other factors specific to an individual project, according to Steven Woo, distinguished inventor and vice president of enterprise solutions technology at Rambus.

“We will see more of a buildout of these sensors and gateways and high volumes of data, and a slow evolution of the use cases on the energy and cost involved,” Woo said.

Some devices — especially smartphones, self-driving vehicles, and other latency-sensitive, high-functioning devices designed to act alone rather than as elements in a smart factory — will connect to the cloud directly using LTE or wide-area networks. Most of the others will sit behind a middle layer of hardware that is still largely undefined. It could range from something as small as a single gateway that consolidates device data and periodically sends collective reports to the cloud, up to mini data centers that store data and run time-sensitive analytics in facilities close to the devices.
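
What such a middle layer might do is easiest to see in a sketch. The example below is a hypothetical gateway-style aggregator — the field names, temperature threshold and report format are invented for illustration — that collapses raw device readings into the kind of short report that would be forwarded to the cloud.

```python
# Hypothetical edge-gateway aggregation: many raw readings in, one short report out.
from statistics import mean

def summarize(readings):
    """Collapse one reporting interval of device readings into a compact report."""
    temps = [r["temperature_c"] for r in readings]
    alerts = [r["device_id"] for r in readings if r["temperature_c"] > 85.0]
    return {
        "device_count": len({r["device_id"] for r in readings}),
        "avg_temperature_c": round(mean(temps), 1),
        "max_temperature_c": max(temps),
        "over_threshold": alerts,   # handled locally, flagged upstream
    }

# The gateway accumulates readings locally and ships only the summary
# (a few hundred bytes) to the cloud each interval, instead of every sample.
sample = [
    {"device_id": "press-01", "temperature_c": 72.4},
    {"device_id": "press-02", "temperature_c": 88.1},
    {"device_id": "press-01", "temperature_c": 73.0},
]
print(summarize(sample))
```

Shipping only the summary each interval keeps upstream bandwidth and cloud storage roughly proportional to the number of gateways rather than the number of sensors, which is the power and cost argument for the edge in the first place.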

The edge is an inflection point in the evolution of IT architecture. It will affect the architecture of the infrastructure connecting the IoT and the data center, the decision about where data should be stored or analyzed, and, indirectly, how much power is used. That decision will be based on what makes the most sense for the applications, the data flow and the cost structure of the company making the decisions about an industrial IoT rollout, according to Anush Mohandass, vice president of marketing and business development at NetSpeed Systems.

“Should a machine learning application run on devices at the edge or at the core? Those two options look very different on an SoC,” Mohandass said. “The edge is more focused on latency and visual processing. The core is more of a processing analysis engine. There is not just one option, so we are likely to see significant effort in both places.”

The core value of the IoT, especially the industrial IoT, is its ability to make every process more efficient by tracking all the details from each machine and identifying future problems or current inefficiencies from a device-eye view of a complex operation.


Fig. 2: Primary energy use of present-day and cloud-based business software systems by: (a) application and (b) system component. Source: Lawrence Berkeley National Laboratory



