The cloud continues to shake up the nature of the datacenter, and low-power processors are finding a role in it.
By Ann Steffora Mutschler
Without a doubt, the cloud has changed, and continues to change, the nature of the datacenter, particularly the requirements the infrastructure has to deliver.
Diane Bryant, senior vice president and general manager of the Datacenter and Connected Systems Group at Intel, noted during a webcast last week, “The infrastructure must change in support of cloud-based services.” Instead of the traditional high-power, high-performance CPUs that have long run datacenters, Intel has identified some areas that can utilize low-power processors, such as cold storage, microservers and entry-level networking devices. To support this, the company announced a new Atom chipset family targeting these application areas.
While Intel arguably is a newer entrant to the low-power datacenter processor playing field, ARM has been plugging away here since about 2010.
Lakshmi Mandyam, director of server systems and ecosystems at ARM, explained that with datacenter power consumption growing by 63% last year alone, and datacenter footprints growing by 19%, “you think about the resource-constrained environment that this world is going through right now. We think the energy consumption problem is only going to get worse, not better.”
In this vein, she said, ARM has been working with partners such as Marvell, Applied Micro, Calxeda, AMD and others to gain traction in cloud and hyper-scale deployments, storage and networking to support a new way of thinking about servers and infrastructure. “If you look at workload migration, the feeling is that the majority of the workloads will be deployed in the cloud over the next two to three years, and when you get to cloud deployment a couple of things happen. Number one, the underlying hardware gets abstracted so people are deploying things using virtualization technologies, using infrastructure like PHP and other things where the underlying process is abstracted. And what you are seeing from that is the workloads are all so different. Clearly one size does not fit all.”
Also, Mandyam said, different workloads stress different resources: networking-related applications, for example, are more I/O-intensive, while block-storage applications are I/O-intensive but not necessarily compute-intensive. “From a hyperscale or a large-scale deployment perspective you actually want more balanced performance and you want to utilize every single aspect that you have in the datacenter.”
When it comes to developing IP for low-power processors aimed at the datacenter, users are looking for a number of features to support microserver development, said Ron DiGiuseppe, strategic marketing manager in the DesignWare solutions group at Synopsys. Specifically, users are looking for low latency, high performance, low power, as well as 24/7/365 operation. So reliability, availability and serviceability (RAS) are extremely important, along with support for advanced protocols like DDR4. In addition, there is now demand for advanced manufacturing technology support, i.e., 20nm and below.
Security, of course, is an enormous concern as well. As Bernard Murphy, chief technical officer at Atrenta, pointed out, low power plus security in the context of cloud computing introduces some new challenges and some counterintuitive outcomes. “Thinking first about security, there are two ways to attack a process in the context of the cloud. One is looking at the code or the execution, and one is looking at memory activity. If I look at the code or the execution, in fact in both cases, there is a lot of discussion now about something called side-channel attacks. Side-channel attacks look to infer what’s going on without looking at the data itself. They look at some manifestation of the data in another form. So you look at the power consumed by the device or you look at EM radiation or you look at process timing.”
It would seem that there is so much noise in the information that it’s impossible to pull any real information out of it. That turns out not to be the case, however.
“You can do fairly sophisticated statistical analysis over enough samples and you can do things like extract encryption keys,” Murphy said. “In the case of code or execution, what you are doing is looking at the power, so you put a little current monitor around the VDD pins or ground pins of a chip to monitor the power. What you’re going to see mostly is that the baseline is going to be fairly stable but you see variation on that signal. You do statistical analysis on that variation down to the cycle level or even the sub-cycle level and from that you can extract keys.”
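The statistical analysis Murphy describes is essentially correlation power analysis. As an illustration only, the sketch below simulates noisy power samples that leak the Hamming weight of `plaintext XOR key` (the key byte, baseline, and noise level are all invented for the example) and recovers the key byte by correlating each of the 256 possible guesses against the measurements:

```python
import random

def hamming_weight(x):
    return bin(x).count("1")

SECRET_KEY = 0x5A  # hypothetical key byte the attacker wants to recover
random.seed(1)

# Each simulated power sample is a stable baseline plus a small
# data-dependent term (the Hamming weight of plaintext XOR key) plus
# noise -- the "variation on that signal" Murphy describes.
plaintexts = [random.randrange(256) for _ in range(2000)]
traces = [10.0 + hamming_weight(p ^ SECRET_KEY) + random.gauss(0, 2.0)
          for p in plaintexts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def recover_key(plaintexts, traces):
    # The correct key guess predicts the data-dependent variation best,
    # so it shows the strongest correlation with the measured traces.
    best_guess, best_corr = 0, -1.0
    for guess in range(256):
        model = [hamming_weight(p ^ guess) for p in plaintexts]
        c = abs(pearson(model, traces))
        if c > best_corr:
            best_guess, best_corr = guess, c
    return best_guess
```

With a couple of thousand simulated traces the correct guess stands out clearly, which is why the attack works, as Murphy notes, “over enough samples.”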
There’s more, too. On the memory side, he continued, “especially if you are looking at cache memory, if you look at a piece of information in cache that is already resident in cache, it takes a certain amount of time or a certain amount of power to access that. If it’s not resident in cache and you have to go out to main memory, the time or the power is significantly different to access that. Again, that signal, that trace of timing can be used to essentially crack the key.”
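The cache hit-versus-miss difference Murphy describes underpins prime-and-probe attacks. This toy model (the cache geometry, latencies, and secret are all invented) treats the cache as a set of primed lines; the victim's key-dependent lookup evicts one of them, and the one slow probe access reveals which index, and hence the secret, was used:

```python
HIT, MISS = 1, 100   # simulated access latencies in cycles
N_SETS = 16          # toy direct-mapped cache, one line per set
SECRET_NIBBLE = 0xB  # hypothetical secret the victim's lookup depends on

def victim_lookup(primed_sets, plaintext_nibble):
    # The victim's table index depends on the secret; loading that line
    # evicts the attacker's primed line from the same cache set.
    idx = plaintext_nibble ^ SECRET_NIBBLE
    primed_sets.discard(idx)

def prime_and_probe(plaintext_nibble):
    primed = set(range(N_SETS))              # prime: fill every set
    victim_lookup(primed, plaintext_nibble)  # let the victim run once
    # Probe: re-access every set and time it. The one set that now
    # misses (MISS cycles instead of HIT) is the index the victim used.
    probe_times = {s: (HIT if s in primed else MISS)
                   for s in range(N_SETS)}
    return max(probe_times, key=probe_times.get)

def recover_secret(plaintext_nibble=0x3):
    observed_idx = prime_and_probe(plaintext_nibble)
    # idx = plaintext XOR secret, so secret = idx XOR plaintext
    return observed_idx ^ plaintext_nibble
```

In this model the timing trace alone, with no access to the data, identifies the victim's memory access pattern, which is exactly the leak Murphy says “can be used to essentially crack the key.”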
So what’s the implication on low power? It turns out that all the clever techniques for reducing power—especially clock gating—actually amplify the signal, which makes it even easier to hack the trace, Murphy pointed out. “The challenge then is with an encryption core, it’s actually a very bad idea to do clock gating on that because you’ll make it easier to hack. The same is true if you are running software just on a CPU, so if your CPU has clock gating you’re going to make it easier to hack. The implication of that, for power monitoring from a code or execution point of view, is that security drives power the other way—that drives it back up.”
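Murphy's point about clock gating can be seen in a simple signal-to-noise model (all numbers here are invented for illustration): when idle logic is gated off, it no longer contributes background switching activity, so the same key-dependent leak sits on a much quieter baseline and correlates far more strongly with an attacker's model:

```python
import random

def hw(x):
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(7)
KEY = 0x3C  # hypothetical key byte
plaintexts = [random.randrange(256) for _ in range(3000)]
leak_model = [hw(p ^ KEY) for p in plaintexts]  # attacker's prediction

# Ungated: always-clocked unrelated logic adds heavy background
# switching, burying the leak in a noisy, high baseline.
ungated = [l + random.gauss(40.0, 8.0) for l in leak_model]
# Gated: idle logic is silent, so the leak rides on a quiet baseline.
gated = [l + random.gauss(5.0, 1.0) for l in leak_model]

corr_ungated = pearson(leak_model, ungated)
corr_gated = pearson(leak_model, gated)
```

The gated traces correlate much more strongly with the attacker's model, so fewer samples are needed to extract the key—which is why, as Murphy says, clock gating an encryption core makes it easier to hack.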
In terms of low-power processors in the datacenter over time, ARM’s Mandyam asserted, “Their importance is only going to continue to grow, and today’s high power becomes tomorrow’s low power with improvements in technology and performance requirements. The way people think about datacenter processing elements is going to evolve over the next three to five years in the sense that they are not going to think about it the way they do today in terms of saying, ‘Oh it’s a 1P or 2P processor and there’s this whole microserver category.’”
She believes the ‘microserver’ moniker many are trying to wrap around low-power processing is limiting. If workload-optimized compute elements are applicable to 15% to 20% of workloads today, then over time, especially as 64-bit ARM and other vendors’ solutions hit the market in volume, that applicability will grow to 30% to 40%.
“People are rethinking and redesigning the way they think of architecture,” Mandyam concluded.