What we really know about the cloud and how we learned about it.
The COVID-19 High Performance Computing Consortium has set records for global cooperation among giant companies, universities, federal agencies and national laboratories. But it also may have cracked open a door to much more than that.
Until now, there has been a massive race for dominance in the data center. Big companies have gotten rich on data, building infrastructure at a colossal rate. So far, it appears they haven’t overbuilt. But at some point, when more data is processed on premises rather than in the cloud, that could change. In the 1990s, at the start of the dot-com era, no one thought there could be too much fiber, but most of it sat dark after the 2001 downturn.
It’s not that the rate of data growth will shrink. By all indications, data will continue to balloon for decades. But that data will become more structured, easier to process and store, and it will be utilized as needed much closer to the source. And that opens the door to what could be the first big shift in the cloud model: third parties virtually cobbling together resources based on usage trends in the cloud.
This is like time-sharing on steroids. Or to put an uglier spin on it, this is like buying and selling data-processing futures using sophisticated modeling algorithms. Until now, though, no one really had any idea how much compute capacity was available from the largest cloud providers and government agencies. There were projections about overall Internet traffic, but it was difficult to ascertain just how accurate those projections were.
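As a thought experiment only, here is a deliberately naive sketch of the kind of usage-trend modeling such a market might begin with: forecast a provider’s spare capacity from recent utilization and decide how much to offer out. The capacity figure, utilization history and reserve margin are all hypothetical, and a real market would use far more sophisticated models.

```python
# A deliberately naive forecast of spare capacity from recent utilization.
# All figures here are hypothetical.

TOTAL_CAPACITY_PFLOPS = 50.0     # one provider's pool (made up)
RESERVE_MARGIN = 0.10            # hold back 10% for demand spikes

# Recent daily utilization as a fraction of total capacity (made up).
utilization = [0.62, 0.58, 0.65, 0.60, 0.57, 0.55, 0.54]

def forecast_spare(history, window=3):
    """Moving-average forecast of tomorrow's spare capacity in petaflops."""
    expected_use = sum(history[-window:]) / window
    spare_fraction = max(0.0, 1.0 - expected_use - RESERVE_MARGIN)
    return spare_fraction * TOTAL_CAPACITY_PFLOPS

print(f"Capacity to offer tomorrow: ~{forecast_spare(utilization):.1f} petaflops")
```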
We now have some hard numbers. The COVID-19 HPC Consortium said it has roughly 402 petaflops of capacity spread across 105,000 nodes and 3.5 million CPUs. And that doesn’t include all of the other giant data centers around the globe. The numbers are still growing as more companies and organizations join in, and anyone keeping track can watch the running tally climb as capacity is added. That’s a gigantic amount of compute power, and it raises some interesting questions about the sustainability of multiple companies in that business, particularly as data becomes more uniform and as quantum computing begins entering the picture.
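For a sense of scale, the consortium’s own totals imply some rough per-node averages. The quick arithmetic below treats the pool as uniform, which it certainly isn’t (it mixes GPU-heavy supercomputers with commodity cloud nodes), so the figures are averages only.

```python
# Back-of-envelope averages from the consortium's published totals.
# The pool is highly heterogeneous, so these are rough averages only.

TOTAL_FLOPS = 402e15        # 402 petaflops of aggregate capacity
TOTAL_NODES = 105_000       # nodes contributed
TOTAL_CPUS = 3_500_000      # CPUs contributed

flops_per_node = TOTAL_FLOPS / TOTAL_NODES
cpus_per_node = TOTAL_CPUS / TOTAL_NODES

print(f"~{flops_per_node / 1e12:.1f} teraflops per node on average")
print(f"~{cpus_per_node:.0f} CPUs per node on average")
```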
One of the huge inefficiencies in cloud computing today is that not all data is structured the same way. There is work underway across the industry to change that, so that data is collected in, or at least translated into, a more consistent format. That has multiple implications. First, clean and consistent data is faster and less power-hungry to process. Second, structured data requires fewer compute resources, which frees up a lot of CPUs, GPUs, FPGAs and just about every other processor type, and potentially a lot of storage space.
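As a simple illustration of what “translated into a more consistent format” can mean in practice, the sketch below maps records from two hypothetical sources onto one schema. The field names and formats are invented for the example; the point is that downstream code only ever has to handle one shape of data.

```python
# Minimal sketch: heterogeneous records translated into one consistent
# schema before processing. Field names and formats are hypothetical.

from datetime import datetime, timezone

def normalize(record: dict) -> dict:
    """Map records from different sources onto a single schema."""
    # Different sources label the same field differently.
    device = record.get("device_id") or record.get("sensor") or "unknown"
    value = float(record.get("reading", record.get("value", 0.0)))
    # Timestamps arrive as epoch seconds or ISO-8601 strings.
    raw_ts = record.get("ts") or record.get("timestamp")
    if isinstance(raw_ts, (int, float)):
        ts = datetime.fromtimestamp(raw_ts, tz=timezone.utc)
    else:
        ts = datetime.fromisoformat(raw_ts)
    return {"device": device, "value": value, "timestamp": ts.isoformat()}

raw = [
    {"sensor": "edge-7", "value": "21.4", "timestamp": "2020-04-01T12:00:00+00:00"},
    {"device_id": "edge-9", "reading": 19.8, "ts": 1585742400},
]
print([normalize(r) for r in raw])
```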
Quantum computing adds yet another wrinkle to this. While quantum computing isn’t going to show up on your desktop anytime soon, it will be available for hire by those companies that can afford to develop it. And if you want to get something done incredibly fast, thousands of qubits that last just fractions of a second can do more than months of processing, even with 402 petaflops of performance.
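To see why that claim is even plausible, a back-of-envelope comparison helps: a month of classical computing at 402 petaflops performs on the order of 10^24 operations, while the state space of an ideal qubit register grows as 2^n. The sketch below is purely a scaling illustration, not a statement about any real quantum machine, which would face error correction and coherence limits the arithmetic ignores.

```python
# Scaling illustration only: classical operations in a month at 402 PFLOPS
# versus the 2**n state space of an ideal (error-free) qubit register.

import math

FLOPS = 402e15                        # consortium aggregate capacity
SECONDS_PER_MONTH = 30 * 24 * 3600

classical_ops = FLOPS * SECONDS_PER_MONTH      # ~1e24 operations
equivalent_qubits = math.log2(classical_ops)   # n such that 2**n is comparable

print(f"One month at 402 petaflops: ~{classical_ops:.1e} operations")
print(f"An ideal register of ~{equivalent_qubits:.0f} qubits spans a state "
      f"space of comparable size")
```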
At some point the business model for the cloud will open up to third parties as demand begins to flicker, and those third parties will be able to draw preferential contracts for processing from wherever it becomes available. The new consortium is a powerhouse of collective compute power, and for the first time we have a glimpse into just how vast that powerhouse really is.