Cloud Computing Chips Changing

As cloud services adoption soars, datacenter chip requirements are evolving.


An explosion in cloud services is making chip design for the server market more challenging, more diverse, and much more competitive.

Unlike datacenter number crunching of the past, the cloud addresses a broad range of applications and data types. So while a server chip architecture may work well for one application, it may not be the optimal choice for another. And the more those tasks become segmented within a cloud operation, the greater that distinction becomes.

This has set off a scramble among chipmakers to position themselves to handle more applications using more configurations. Intel still rules the datacenter—a banner it wrested away from IBM with the introduction of commodity servers back in the 1990s—but increasingly the x86 architecture is being viewed as just one more option outside of its core number-crunching base. Cloud providers such as Amazon and Google already have started developing their own chip architectures. And ARM has been pushing for a slice of the server market based upon power-efficient architectures.

ARM’s push, in particular, is noteworthy because it is starting to gain traction in a number of vendors’ server plans. Microsoft said last month it would use ARM server chips in its Azure cloud business to cut costs. “This seemed like a dream just a couple of years ago, but a lot of people are putting money into it big time right now,” said Kam Kittrell, product management group director in the Digital & Signoff Group at Cadence. “As time goes on, what we’ll see is that instead of just having a general-purpose server farm that runs at different frequencies but basically has a different chip in it (depending on whether it’s high-performance or not), you’re going to see a lot of different types of compute farms for the cloud.”

Fig. 1: ARM-based server rack. Source: ARM

Whether ARM-based servers will succeed just because they use less power than an x86 chip for specific workloads isn’t entirely clear. Unlike consumer devices, which typically run in cycles of a couple of years, battles among server vendors tend to move in slow motion—sometimes over a decade or more. But what is certain is that inside large datacenters, power expended for a given workload is a competitive metric. Powering and cooling thousands of server racks is expensive, and the ability to dial power up and down quickly and dynamically can save millions of dollars per year. Already, Google and Nvidia have publicly stated that a different architecture is required for machine learning and neural networking.
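To see why savings of that magnitude are plausible, consider a back-of-the-envelope estimate. The sketch below is illustrative only; the rack count, per-rack load, PUE, and electricity price are all assumed figures, not numbers from the article.

```python
# Back-of-the-envelope estimate of what dialing power down is worth.
# All numbers below are illustrative assumptions, not vendor figures.

RACKS = 5_000             # racks in a large datacenter (assumed)
KW_PER_RACK = 10.0        # average IT load per rack in kW (assumed)
PUE = 1.5                 # power usage effectiveness: total/IT power (assumed)
USD_PER_KWH = 0.08        # blended electricity price (assumed)
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(it_load_kw: float) -> float:
    """Total facility energy cost per year, including cooling overhead (PUE)."""
    return it_load_kw * PUE * USD_PER_KWH * HOURS_PER_YEAR

baseline = annual_power_cost(RACKS * KW_PER_RACK)
# Suppose power-aware chips and dynamic scaling shave 15% off the average load.
improved = annual_power_cost(RACKS * KW_PER_RACK * 0.85)

print(f"baseline: ${baseline:,.0f}/year")
print(f"improved: ${improved:,.0f}/year")
print(f"savings:  ${baseline - improved:,.0f}/year")
```

With these assumed inputs, a 15% reduction in average load is worth roughly $8 million a year, which is consistent with the scale of savings cloud operators describe.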

In looking at the power/performance tradeoffs, and how to target these designs properly, the cloud has accelerated two distinct trends in the multicore and networking space. “What is common between these chips is that they are pushing whatever the bleeding edge of technology is, such as 7nm,” Kittrell said. “You’ve got to meet the performance, there’s no question. But you’ve also got to take into account the dynamic power in the design. From 65nm all the way through 28nm, we got used to power being dictated by leakage. At 28nm, which was the end of planar transistors, dynamic power became more dominant. So now you’re having to study the workloads on these chips in order to understand the power.

“Datacenters already use 2% of the power in the United States, so they are a humongous consumer. And when it comes to power, it’s not just how much power the chip uses. It’s also the HVAC needed to keep the datacenter cool. In essence, you’ve got to keep the dynamic power under target workloads under control, and the area has to be absolutely as small as possible. Once you start replicating these things, it can make a tremendous difference in the cost of the chip overall. The more switching nodes you put in there, the more power it consumes overall.”
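Kittrell’s point about studying workloads follows from the standard switching-power relation, P_dyn ≈ α·C·V²·f, where the activity factor α is set by the workload. A minimal sketch, with every parameter value assumed purely for illustration:

```python
# Dynamic (switching) power: P_dyn ~ alpha * C * V^2 * f, summed over nodes.
# alpha (the activity factor) is workload-dependent, which is why the same
# chip can draw very different power under different cloud workloads.
# All values below are assumed, illustrative numbers.

def dynamic_power_watts(alpha: float, cap_farads: float,
                        vdd_volts: float, freq_hz: float,
                        num_nodes: int) -> float:
    """Aggregate switching power for num_nodes identical switching nodes."""
    return alpha * cap_farads * vdd_volts**2 * freq_hz * num_nodes

# Same silicon, two workloads: only the activity factor differs.
chip = dict(cap_farads=1e-15, vdd_volts=0.8, freq_hz=2e9,
            num_nodes=500_000_000)
light = dynamic_power_watts(alpha=0.05, **chip)   # light web traffic
heavy = dynamic_power_watts(alpha=0.25, **chip)   # heavy analytics

print(f"light workload: {light:.0f} W")   # ~32 W
print(f"heavy workload: {heavy:.0f} W")   # ~160 W
```

The same die lands at very different power points depending on how much of it is switching, which is why workload-driven power analysis, not just peak frequency, now drives these designs.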

Slicing up the datacenter
Changes have been quietly infiltrating datacenters for some time. While racks of servers are still humming along in most major datacenters, a closer look reveals that not all of them are the same. There are rack servers, networking servers, and storage servers, and even within those categories the choices are becoming more granular.

“While there is still a need for enterprise data centers to have a general, traditional server, primarily based on Intel Xeon processors with a separate NIC card connecting to external networking where the switching and routing occur, we see that these large-scale cloud datacenters have a number of specific applications they feel can be optimized within the cloud, within that data center,” said Ron DiGiuseppe, senior strategic marketing manager in the Solutions Group at Synopsys.

As an example, DiGiuseppe pointed to Microsoft’s Project Olympus initiative under its Azure business, which defines a server targeting different applications such as web services. “Microsoft is large scale. They estimate that 50% of their data center capacity is allocated to web server applications. And obviously, every cloud data center would be different. But they wanted to have servers that can be optimized for the web applications. They announced last month that they have five different configurations of servers targeting segment-optimized applications.”

Another example would be database services, he said. “These require very fast, and therefore low-latency, search and indexing for databases, such as for financial applications. With that in mind, the system architectures are being optimized for those applications, and the semiconductor suppliers are architecting their chips to have acceleration capabilities tied to the end applications. Therefore, you can optimize the semiconductor by adding features to support those different segmented applications.”

That could include a 64-bit ARM-based or Intel Xeon-based server chip in a database services application, where database access is accelerated by adding non-volatile storage, such as NAND flash SSDs, very close to the processor. The SSDs connect directly to the processor over PCI Express using the NVMe protocol. The goal is to minimize the latency of storage and access commands.
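As a rough illustration of why that placement matters, the sketch below times random reads against a file on NVMe-backed storage. The file path, block size, and read count are assumptions, and details such as O_DIRECT alignment and NVMe queueing are omitted for brevity.

```python
# Minimal sketch: measure random-read latency on NVMe-backed storage.
# The path, block size, and I/O count are assumptions for illustration;
# point PATH at a reasonably large file on an NVMe SSD to try it.
import os
import random
import time

PATH = "/data/nvme/db_segment.bin"   # hypothetical NVMe-backed file
BLOCK = 4096                         # typical page-sized read
READS = 1000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
# Random block-aligned offsets across the file.
offsets = [random.randrange(0, size - BLOCK) // BLOCK * BLOCK
           for _ in range(READS)]

start = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)         # positioned read, no separate seek
elapsed = time.perf_counter() - start
os.close(fd)

print(f"avg random-read latency: {elapsed / READS * 1e6:.1f} µs")
```

Runs like this are how architects quantify the gap between storage hanging off the network and storage sitting on the processor’s own PCI Express lanes.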

Seeing through the fog
While equipping the datacenters is one trajectory, a second one is reducing the amount of data that floods into a datacenter. There is increasing interest in using the network fabric to do at least some of the signal processing and data processing needed to extract patterns and information from the data. Rather than pushing all of this data up through the pipe into the cloud, the better option is to refine that data so only a portion needs to be processed in the cloud servers.
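A minimal sketch of that refinement step, assuming a gateway that sees a stream of sensor samples and forwards only periodic aggregates plus anomalies; the window size and threshold are invented for illustration:

```python
# Minimal fog-node sketch: reduce a raw sensor stream locally and forward
# only summaries and anomalies to the cloud. Window size and threshold
# are assumed, illustrative values.
import random
from statistics import mean

WINDOW = 60            # samples per aggregate (e.g., 1/sec -> one per minute)
ANOMALY_LIMIT = 90.0   # domain-specific alert threshold (assumed)

def process_stream(samples):
    """Yield upstream messages: one aggregate per window, plus any anomalies."""
    window = []
    for s in samples:
        if s > ANOMALY_LIMIT:
            yield {"type": "anomaly", "value": s}    # forward immediately
        window.append(s)
        if len(window) == WINDOW:
            yield {"type": "aggregate",
                   "mean": mean(window), "max": max(window)}
            window.clear()    # raw samples never leave the gateway

# 3,600 raw readings collapse to roughly 60 upstream messages.
readings = [random.gauss(70.0, 5.0) for _ in range(3600)]
upstream = list(process_stream(readings))
print(f"{len(readings)} raw samples -> {len(upstream)} upstream messages")
```

The design choice is simply to spend cycles at the edge so the pipe carries two orders of magnitude fewer messages, which is the tradeoff the rest of this section describes.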

This requires looking at the compute equation from a local perspective, and it opens up even more opportunities for chipmakers. Warren Kurisu, director of product management in the embedded division at Mentor, a Siemens Business, said current engagements are focused on working with companies that build solutions for local processing, local data intelligence, and local analytics so that the cloud datacenters are not flooded with reams of data that clog up the pipes.

One of the key areas of focus here involves intelligent gateways for everything from car manufacturing to breakfast cereal and pool chemicals. “It requires multicore processors in the gateway that can enable a lot of the fog processing, a lot of data processing in the gateway,” he said. And that adds yet another element, which is security.

“Security is the number one question, so we have put a huge focus on being able to create a gateway that leverages hardware security built into the chip and the board, and establishes a complete software chain of trust so that anything that gets loaded and run on that gateway—any piece of software—is authenticated and validated through cryptography, through certificates and other things,” Kurisu said. “But you need some processing power to do just that sort of thing. There needs to be some sort of hardware security available. One of our key demonstration platforms is the NXP i.MX6 processor, which has a high-assurance boot feature in it. High-assurance boot basically has a key burned into the silicon, and we can leverage that key to establish that chain of trust. If that hardware mechanism isn’t enabled in the system, then we can leverage things like secure elements on the board that would do the same thing. There would be some sort of crypto element there, or a key used to establish the whole chain of trust.”
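The verify-before-run step Kurisu describes can be sketched in a few lines. This is a simplified illustration, not Mentor’s implementation: it assumes the third-party Python `cryptography` package, and a freshly generated Ed25519 key stands in for the key fused into silicon at manufacture time.

```python
# Minimal chain-of-trust sketch: refuse to run any image whose signature
# does not verify against a trusted public key. Requires the third-party
# 'cryptography' package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Stand-ins for the manufacturer's signing key and the key fused into silicon.
signing_key = ed25519.Ed25519PrivateKey.generate()
TRUSTED_PUBKEY = signing_key.public_key()

def verify_image(image: bytes, signature: bytes) -> bool:
    """Return True only if the image was signed by the trusted key."""
    try:
        TRUSTED_PUBKEY.verify(signature, image)
        return True
    except InvalidSignature:
        return False

def load_software(image: bytes, signature: bytes) -> None:
    """Authenticate before executing -- the core of the chain of trust."""
    if not verify_image(image, signature):
        raise RuntimeError("image rejected: signature check failed")
    # ...only now hand control to the verified image...

# A properly signed image loads; a tampered one is rejected.
firmware = b"gateway application v1.2"
sig = signing_key.sign(firmware)
load_software(firmware, sig)                    # accepted
assert not verify_image(firmware + b"x", sig)   # tampered -> rejected
```

In real hardware the public key material is immutable, so an attacker who modifies the image cannot also swap in a key that would make the bad image verify.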

A change in thinking

The key to success comes down to thinking about these chip designs very holistically, Kurisu added, “because when it comes to the cloud and the datacenter, if you think about Microsoft Azure or Amazon Web Services or any of the others, the capabilities that are available from the cloud datacenter down to the actual embedded device need to work in tandem. If you have a robot controller and you need to do a firmware update, and you want to initiate that from the cloud, how that gets enabled on the end device is tied very explicitly to how the operation is invoked from the cloud side. What is your cloud solution? That’s going to drive what the embedded solution is. You’ve got to think of it as a system. In that way, the stuff that happens in the datacenter is very closely related to things that might seem very disconnected on the edge. How the IoT strategy is implemented ties them together, so it all has to be considered together.”
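On the device side, the cloud-initiated firmware update Kurisu mentions might look roughly like the sketch below. The endpoint URL, manifest format, and A/B slot scheme are all hypothetical, and a real deployment would gate the write behind a signature check like the one above.

```python
# Minimal sketch of a cloud-initiated firmware update, device side.
# The URL, manifest format, and slot scheme are hypothetical.
import json
import urllib.request

UPDATE_URL = "https://updates.example.com/device/1234/manifest"  # hypothetical

def check_for_update(current_version: str):
    """Poll the cloud; return (version, image_url) if an update is pending."""
    with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
        manifest = json.load(resp)
    if manifest["version"] != current_version:
        return manifest["version"], manifest["image_url"]
    return None

def apply_update(image_url: str, inactive_slot: str) -> None:
    """Stage the new image into the inactive slot, then mark it bootable."""
    with urllib.request.urlopen(image_url, timeout=60) as resp:
        image = resp.read()
    # verify_image(image, signature) would gate this step in practice.
    with open(inactive_slot, "wb") as f:
        f.write(image)
    # ...mark inactive_slot as the boot target and reboot...
```

The point of the system view is visible even in this toy: the manifest the cloud publishes and the slot logic the device runs have to be designed as one protocol, not two independent pieces.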

It also could have an impact on chip designs within this market, and open doors for some new players that have never even considered tapping into this market in the past.
