Hyperscaling The Data Center

The enterprise is no longer at the center of the IT universe. An extreme economic shift has chipmakers focused on hyperscale clouds.


Enterprise data centers will increasingly look and behave like slimmed-down versions of hyperscale data centers as chipmakers and other suppliers adapt systems developed for their biggest customers to in-house IT facilities.

The new chips and infrastructure that will serve as building blocks in these facilities will be more power-efficient, make better use of space and generate less heat. They also will be capable of managing enormous and unpredictable data streams, and they will integrate more easily with cloud platforms that have become part of nearly every organization’s extended IT infrastructure.

Unlike in the past, however, these components increasingly are not being designed with enterprise IT managers in mind. Enterprise data center budgets once gave IT managers the spending power to remain the top consumers of servers and storage, but over the past five years their spending has been dwarfed by that of companies like Amazon, Microsoft, IBM Cloud and Google. In fact, the volume of capital spending by these companies is so high that it is beginning to tilt the whole industry toward the largest cloud service providers (CSPs).

Google announced last March, for example, that it had spent $30 billion during the previous three years to expand its network of data centers beyond the 15 it had already built to support its consumer-services business. Microsoft and Amazon each spend more than $10 billion per year on data center infrastructure as well.

That’s not all bad, however. Cloud infrastructure, whether private or public, is viewed as the best approach for dealing with huge and growing amounts of data, as well as fluctuations and inconsistencies in that volume.

“I don’t think they’ve quite reached the point where Google is changing the way the whole market works, but the level of computing power is growing much faster than in the traditional sector – exponentially, really,” said Jeroen Dorgelo, director of strategy for Marvell’s Storage Group. “With other CSPs, they have the scale to really change the rules of the game.”


Fig. 1: Worldwide market share for cloud providers. Source: Synergy Research Group

The general consensus is that this is already beginning to happen.

“The increasing dominance of hyperscale players continues to play out, with all four leading companies having cause to celebrate,” Synergy analyst John Dinsdale wrote in a July report. That report detailed dramatic ongoing growth and an increasing concentration of market share among the cloud leaders.

Looked at from another angle, spending on traditional data center infrastructure dropped 18% between 2015 and 2017, while spending on infrastructure products for the public cloud rose 35%, according to a September report from Synergy Research Group. Much of that change reflects an ongoing migration of enterprises to the cloud, according to a September report from 451 Research, which predicts the percentage of workloads running on public-cloud platforms will rise to 60% by 2019, compared with 45% today.

For the tech industry, this is a potential bonanza. Even existing workloads that migrate to the cloud need CPUs, memory, storage and all the other resources they’d use in an on-premises data center. The data centers to which they’re moving are still being built at a furious pace, which is filling the coffers of companies in real estate, construction, HVAC and utilities, as well as suppliers of a variety of chips, from memory to processors and accelerators.


Fig. 2: Data center infrastructure. Source: Synergy Research Group

During 2016, Amazon, Microsoft and Google parent company Alphabet collectively spent $31.54 billion in capital costs and leases, up 22% from the year before, mostly to expand and equip their networks of data centers, according to an April 2017 story in the Wall Street Journal. Each of the three spent between $10 billion and $12 billion. But that’s still only a piece of the market. Los Angeles-based CBRE identified another $45 billion in investment funds from a range of companies flowing into the data center market – more than 50% of it showing up since the start of 2016.
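As a rough sanity check, those figures hang together. Here is a back-of-the-envelope sketch in Python, using only the numbers quoted above (the 2015 total is derived here, not taken from the Journal's reporting):

```python
# Quick consistency check on the capital-spending figures quoted above.
total_2016 = 31.54e9                # combined Amazon/Microsoft/Alphabet spend, 2016
total_2015 = total_2016 / 1.22      # undo the reported 22% year-over-year increase

print(f"Implied 2015 total: ${total_2015 / 1e9:.1f}B")   # ~ $25.9B
# $10B-$12B each from three companies is consistent with the $31.54B total.
print(10e9 * 3 <= total_2016 <= 12e9 * 3)                # True
```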

The total count of hyperscale data centers worldwide hit 300 in December, according to Synergy, which predicts the count will hit 400 by the end of 2018.


Fig. 3: Growth in hyperscale spending. Source: Synergy Research Group.

That growth disrupts existing markets for just about everything involved in data centers, said Synergy’s Dinsdale. Hyperscale data centers have such an appetite for servers, for example, that most design and build their own using contract factories in Asia. That has given original design manufacturers the largest slice of the data center server market of any vendor category, at 22.6% and $3.3 billion, compared with 21.3% for HPE and 17.7% for Dell, according to IDC.

Amazon alone supplied itself with enough servers to net a 10% share of all servers sold during the quarter.
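Taken together, the IDC figures imply a rough size for the overall quarterly server market. Here is a back-of-the-envelope sketch using only the shares and the ODM revenue quoted above (the derived totals are estimates, not IDC-reported numbers):

```python
# Rough sizing of the quarterly server market implied by the IDC share figures above.
odm_revenue = 3.3e9   # ODMs: $3.3B in the quarter
odm_share = 0.226     # at a 22.6% share

total_market = odm_revenue / odm_share     # implied total quarterly server revenue
hpe_revenue = total_market * 0.213         # HPE at 21.3%
dell_revenue = total_market * 0.177        # Dell at 17.7%
amazon_internal = total_market * 0.10      # Amazon's self-built 10% share

print(f"Implied quarterly server market: ${total_market / 1e9:.1f}B")   # ~ $14.6B
print(f"HPE: ${hpe_revenue / 1e9:.2f}B, Dell: ${dell_revenue / 1e9:.2f}B, "
      f"Amazon (internal): ${amazon_internal / 1e9:.2f}B")
```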

This also provides an opening in a market that was dominated for nearly 80 years by two big players: IBM for the first 40 years, and then Intel for nearly an equal period after that.

“After decades of limited choice, cloud technologies have offered enterprise IT more choices in how they deliver internal and external services, resulting in the delivery of IT services undergoing almost continuous change,” said Jeff Chu, Arm’s director of enterprise solutions. “A greater ‘investment’ has been made by those deploying their own cloud. Witness the growth of OpenStack, with significant resource investments by the likes of Walmart and AT&T. Along with large-scale deployments comes the opportunity to deliver racks of hardware for delivering different functions. The natural progression from there is more specialized hardware to drive efficiencies based on cost, performance, power, or really all of them. We are starting to see this with some of the networking functions and with the recent trends with FPGAs. Broader adoption of other accelerators should follow.”

Arm has been trying to gain a foothold in this market for the past half decade, but it was the push by the big cloud companies that really opened the door.

“It was a kind of kick in the pants when Google and other hyperscalers said, ‘You’re not giving us what we need, we’re going to build our own servers,’” said IDC analyst Shane Rau. “When you go for that level of scale and density with the low latency Google needs, you have to be able to design your servers for space and energy use and ask for modifications on Intel chips. But you also have them taking off to do their own thing, like Google’s TPU, so they can build the most powerful service they can to compete for business, which is an interesting extra twist.”

IDC estimates that spending on public cloud services will rise from just under $70 billion in 2015 to $128 billion in 2017, increasing to $266 billion in 2021.
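Those forecast points imply roughly how fast public-cloud spending is compounding. Here is a quick calculation using only the figures quoted above (the growth rates are derived here, not stated by IDC):

```python
# Implied compound annual growth rates from the IDC public-cloud forecast above.
spend_2015, spend_2017, spend_2021 = 70e9, 128e9, 266e9

cagr_2015_2017 = (spend_2017 / spend_2015) ** (1 / 2) - 1   # two-year span
cagr_2017_2021 = (spend_2021 / spend_2017) ** (1 / 4) - 1   # four-year span

print(f"2015-2017 CAGR: {cagr_2015_2017:.1%}")   # ~35% per year
print(f"2017-2021 CAGR: {cagr_2017_2021:.1%}")   # ~20% per year
```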

The server shuffle
It’s important to remember that only about a third of new servers go into the public cloud, because hyperscale hasn’t taken over yet, according to Linley Gwennap, president and principal analyst at The Linley Group.

“Everyone talks about cloud, but two-thirds of servers go into either private cloud or traditional data centers,” Gwennap said. “Even the Super 7, the hyperscalers, they’re big, but that’s not the whole cloud market by any stretch.”

More than half of IT infrastructure spending (52.4%) still comes from traditional data centers, according to an Oct. 5 IDC report, which still makes them a reasonable source of business for both chipmakers and server OEMs. But that slim majority represents a drop of 6.8% in total spending for the second quarter of this year compared with the same quarter last year. During the same period, spending on public and private cloud infrastructure rose 25.8%.
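Those growth rates also imply how the split looked a year earlier. Here is a back-of-the-envelope sketch, under the assumption that the non-traditional remainder of spending is all cloud (public plus private):

```python
# What do a 6.8% drop in traditional spending and a 25.8% rise in cloud spending
# imply about how the split shifted over the year?
trad_now = 0.524                 # traditional share of IT infrastructure spending today
cloud_now = 1 - trad_now         # assumption: the remainder is all cloud

trad_prev = trad_now / (1 - 0.068)     # undo the 6.8% year-over-year drop
cloud_prev = cloud_now / (1 + 0.258)   # undo the 25.8% year-over-year rise

trad_share_prev = trad_prev / (trad_prev + cloud_prev)
print(f"Traditional share a year earlier: {trad_share_prev:.1%}")   # ~60%
```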

Deloitte predicts that spending on IT-as-a-service will make up more than half of all IT spending by 2022, and will top $547 billion by the end of 2018. And a Cisco report estimated that hyperscale data centers accounted for 34% of Internet traffic in 2015, and that their volume would quintuple by 2020, when they would account for 53% of Internet traffic, house 47% of all installed data center servers, hold 57% of all data center data, and provide 68% of all data center processing power.
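The Cisco figures also imply how fast total data center traffic is growing. Here is a quick derivation from the numbers quoted above, assuming “quintuple” refers to hyperscale traffic volume (an assumption, not something the report states):

```python
# Implied growth in total traffic from the Cisco figures quoted above.
hyperscale_growth = 5.0              # hyperscale traffic quintuples, 2015 -> 2020
share_2015, share_2020 = 0.34, 0.53  # hyperscale share of traffic in each year

total_traffic_growth = hyperscale_growth * share_2015 / share_2020
print(f"Implied total traffic growth, 2015-2020: {total_traffic_growth:.1f}x")   # ~3.2x
```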


Fig. 4: IT vs. cloud deployments. Source: IDC

But hyperscale data centers, however resource-rich, won’t be a huge benefit to everyone. A small number of companies will be able to take advantage of the scale-out and scale-up capabilities of a Google, or the build-your-own virtual data center of an Amazon, and make good use of both the high bandwidth and local reach of any of the top four. A significant number of companies will take small benefits from the variety of services available, but do little more than migrate apps they’re already using.

A large middle tier of organizations will take lessons learned from hyperscale architecture and apply them to traditional data centers. Not many will make significant efforts to emulate Google or Amazon, which are expanding their platforms and developing new services as fast as possible. The goal of the big cloud providers is primarily to be more competitive with each other, not to attract masses of customers with AI or deep-neural-network services those customers are not likely to be prepared for, Gwennap said.

Hyperscale attracts talent as well as money
“The interesting thing is that a lot of the really top computer architects in the world, who used to always be at Intel or IBM or HP, are now at these hyperscale companies,” said Craig Hampel, chief scientist at Rambus. “And a lot of the exciting emerging applications – real time voice translation – will happen in those data centers, too.”

The new approaches being developed at these heavily funded, oversized cloud/data center companies already have changed the standard for convenience, security, capacity and other measures, proving that you don’t have to be stodgy and reactionary to build levels of performance that can match the five-nines of traditional data centers, Hampel said.

“We’re seeing a lot of benefits of scale and economics and scalability, but the benefits are very specific to the situation. At some point you have to think the bigger risk might be for an IT person to maintain their own data center,” Hampel said.

Most of the hyperscale data center companies would be interested in a processor option that lets them “get away from whatever Intel is charging them for the next generation,” Gwennap said, but Intel seems to have been on top of the potential of hyperscale almost before the companies building those data centers were.

Intel embraced the idea of DIY data center server manufacturing early on by agreeing to customize even some of its newest, most expensive chips to improve performance for the hyperscale data center companies, whom its data center group chief Diane Bryant called the “Super 7” (Facebook, Google, Microsoft, Amazon, Baidu, Alibaba, and Tencent).

HPE CEO Meg Whitman, on the other hand, complained publicly about seeing “significantly lower demand” in March, when Microsoft started rolling its own servers. Microsoft’s revenue from Azure rose 93% during the same quarter it told Whitman about its new hardware plans. HPE announced in October that it would quit that part of its server business.

Most cloud providers will buy from the same chip vendors as everyone else, and will have to be concerned about server density and power. But they might save money by buying chips from the middle of the quality stack rather than the top, or by experimenting with new accelerators, new packaging or new memory configurations, and other things for which the hyperscale operators have no time.

The one place where hyperscale is an unquestioned advantage is in the development and training of artificial intelligence and machine-learning applications, which require huge streams of data and hundreds of thousands of servers.

“It would probably not be cheap to pay for that many servers in the cloud, and you might save money if you had the ability to build out a training infrastructure in your own data center,” Gwennap said. “But the technology is so new that providers are trying to get themselves up to speed. The number of customers in data centers that have enough knowledge of how to do it at scale isn’t large enough to be practical.”

It is sometimes difficult to know how to approach a customer that is sometimes also a manufacturer, and the requirements of hyperscale companies are sometimes so different from those of other customers that it’s hard to adapt a piece that’s a little too customized, according to Dorgelo.

“The issue is that they want to commoditize it all – to buy resources, like NAND, and then build intelligence into the network and into the software themselves, rather than getting that value from the provider,” Dorgelo said.

And it’s not always clear which architecture is optimal for a particular use case. “It’s not just the workloads. You have to look at where people are doing their computing,” said William Dally, chief scientist at Nvidia. “If you can move a workload to the cloud, that’s best in terms of sharing resources. If you have to move it out into an embedded device, which is your other choice, you have to know more about the device and how it will be used. That basically changes the target for designers. Semiconductor companies make core building blocks and systems providers are doing the integration, but you’re doing it on a different target than if you were building a traditional desktop machine or one for a traditional enterprise data center. You’re not building things that go into deskside machines or a small server room. You’re building things that go into big data centers and have to behave themselves.”

Related Stories
Rethinking SSDs In Data Centers
A frenzy of activity aims to make solid-state drives faster and more efficient.
Sorting Out Next-Gen Memory
A long list of new memory types is hitting the market, but which ones will be successful isn’t clear yet.
Cloud Computing Chips Changing
As cloud services adoption soars, datacenter chip requirements are evolving.
Chip Advances Play Big Role In Cloud
Semiconductor improvements add up to big savings in power and performance.


