Cloud 2.0

Changes are under development to radically improve efficiency in the data center, with semiconductors and software at the center of this shift.

Corporate data centers are reluctant adopters of new technology. There is too much at stake to make quick changes, which helps explain why a number of semiconductor startups with better ideas for more efficient processors have failed over the past decade, not to mention the rapid consolidation in other parts of the industry. But as the amount of data grows, and the cost of processing it falls more slowly than the volume rises, the whole market has begun searching for new approaches.

This is a new wrinkle in a long-running story. The last big shift in the data center was the widespread adoption of virtualization in the early part of the millennium, which allowed companies to raise server utilization rates and thereby save money on powering and cooling server racks. Data centers could turn off entire racks when they weren't needed, essentially applying the dark silicon concept at a macro level.

The next change involves adding far more granularity into the data center architecture. That means far more intelligent scheduling—both from a time and distance perspective—and partitioning jobs in software to improve efficiency. And it means new architectures everywhere, from the chip to the software stack to the servers themselves, as well as entirely new concepts for what constitutes a data center.
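To make the scheduling idea concrete, the sketch below shows one way a placement decision could weigh data locality against network distance. It is a toy heuristic with made-up server names and cost weights, not a description of any production scheduler:

# Hypothetical sketch: place a job on the server that minimizes an
# estimated data-movement cost, preferring servers that already hold the
# data and are close (in network hops) to the requester.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    free_cores: int
    datasets: set          # dataset IDs already resident on this server
    hops_from_client: int  # rough "distance" in network hops

# Assumed, illustrative cost weights -- a real scheduler would measure these.
REMOTE_FETCH_COST = 10.0   # relative cost of pulling a dataset across the fabric
HOP_COST = 1.0             # relative cost per network hop

def placement_cost(server: Server, job_dataset: str) -> float:
    fetch = 0.0 if job_dataset in server.datasets else REMOTE_FETCH_COST
    return fetch + HOP_COST * server.hops_from_client

def schedule(job_dataset: str, servers: list) -> Server:
    candidates = [s for s in servers if s.free_cores > 0]
    return min(candidates, key=lambda s: placement_cost(s, job_dataset))

servers = [
    Server("rack1-a", free_cores=4, datasets={"logs-2024"}, hops_from_client=2),
    Server("rack7-c", free_cores=16, datasets=set(), hops_from_client=6),
]
print(schedule("logs-2024", servers).name)   # prefers the data-local server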

“What’s motivating these changes is the cost of power in the cloud,” said Steven Woo, vice president of enterprise solutions technology at Rambus. “It’s now painful to move data back and forth. Disk and networking performance are not as high as the flops and number of instructions per second of the CPUs. So do you just network machines together? Or do you think about newer architectures?”

Woo noted that sending a virtual machine off to a server on the other side of a data center doesn't always have predictable power/performance tradeoffs. It often depends on what else is running on that server and what's moving through the network, and sometimes it simply takes more time to move data back and forth between servers.

This has been one of the concerns about Hadoop, an open-source framework for storing and processing data across clusters of machines. More recent approaches include Docker, which packages applications into isolated containers that can be spun up quickly, and Spark, which keeps data in memory and close to the compute nodes instead of shuttling it back and forth to any available server. This is very similar to the argument for scattering more processors around a die instead of relying on a single central processing unit, and it plays into some of the new architectures being developed.
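The locality argument shows up in how Spark workloads are typically written: load a data set once, pin it in memory on the worker nodes, and run repeated queries against those local partitions. The following is a minimal PySpark sketch, with an illustrative file path and column names:

# Minimal PySpark sketch of the data-locality argument: cache a dataset in
# memory on the worker nodes once, then reuse it for several queries instead
# of re-reading (and re-shipping) it each time. Path and columns are illustrative.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("locality-sketch").getOrCreate()

events = spark.read.parquet("hdfs:///data/events")  # illustrative path
events.cache()                                       # keep partitions local in memory

# Both queries run against the locally cached partitions rather than
# shuttling the raw data back across the network.
errors_per_host = events.filter(events.level == "ERROR").groupBy("host").count()
daily_volume = events.groupBy("date").count()

errors_per_host.show()
daily_volume.show()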

“Saving power in the data center is not just about how you save power for the processor,” said Bernard Murphy, CTO at Atrenta. “It’s also about how you save power across the data center. If your servers are fully loaded, they consume less power. And if you can add heterogeneity, combining, say, a high-performance server with a medium-performance server, then you also can start addressing thermal issues to reduce the cooling costs.”

Software changes
One of the most significant shifts involving data processing is happening on the software side. Software is a hierarchical stack, starting at the OS, RTOS and embedded software level, and progressing upward to include middleware such as virtualization and networking, and finally up to the application layer. It’s hard to make generalizations about the entire stack, because the pieces are so different and the number of possible combinations is almost infinite. But it is easy to measure the battery life of a mobile device under normal use and to figure out what needs to be improved if people are ever going to buy end products.

In the past, an application or utility that ran on a smartphone or a home set-top box had little to do with applications or operating systems running on a server in a data center, but increasingly these worlds are being drawn together with the IoT as the glue. Small devices need to draw upon the processing power and storage of the data center, or in some cases edge-of-the-network servers, while doing other processing locally. Cars need to download updates. And all of the software in these devices needs to work together more efficiently to maximize battery life on a device and minimize power costs inside a data center.

“You can’t just strap on a bigger battery,” said Andrew Caples, senior product manager for the Nucleus product line at Mentor Graphics. “You need to be able to utilize individual cores with additional computing and processing when necessary, and gracefully take them down when they’re not needed.”

Caples pointed to techniques such as running a combination of processes at the same time on the same core, and improving space-domain efficiency through better partitioning and scheduling, as ways of reducing the power needed to process data. “What this does is make it look like you have more resources than you actually do. You can free up memory to run different applications, and then you can load different applications and determine what’s next in line. This is a stretch from what is possible today. It allows you to compartmentalize better.”
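On Linux-based servers, one concrete (if simplified) version of gracefully taking cores down is the kernel's CPU hotplug interface. The sketch below is an assumption-laden illustration, not a production power manager: it consolidates work onto a couple of cores and writes the sysfs online files to park the rest (root privileges required; cpu0 usually cannot be taken offline):

# Linux-centric sketch: consolidate light load onto a few cores and take the
# rest offline via the kernel's CPU hotplug interface, then bring them back
# when load returns. The core count and threshold are illustrative assumptions.

import os

CPU_SYSFS = "/sys/devices/system/cpu"

def set_cpu_online(cpu: int, online: bool) -> None:
    path = f"{CPU_SYSFS}/cpu{cpu}/online"
    with open(path, "w") as f:
        f.write("1" if online else "0")

def consolidate(active_cores_needed: int) -> None:
    total = os.cpu_count()  # core count as seen at startup
    # Keep the first N cores online and offline the rest (cpu0 stays up regardless).
    for cpu in range(1, total):
        set_cpu_online(cpu, cpu < active_cores_needed)

# Example: a light overnight workload might only need two cores.
if __name__ == "__main__":
    consolidate(active_cores_needed=2)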

Space-domain partitioning also is starting to be used for safety-critical applications, where ARM's TrustZone houses safety applications as a way of keeping them isolated, he noted.

Hardware changes
In lock step with the changes in software are changes in hardware. This is particularly true for companies such as Google and Facebook, where the types of data being processed are well defined and can be optimized using different approaches in hardware, including pipelining with specific memory configurations and wider buses. Google has developed what it calls warehouse-scale computers. Facebook, meanwhile, has created what it calls a data center fabric, the goal of which is to turn the entire data center into a high-performance network rather than limiting computing to individual clusters.

But even more general-purpose heterogeneous multicore server chips, based on ARM cores and presumably on Intel's once its acquisition of Altera closes, are aimed at more granular processing approaches. ARM has been pushing heavily into the data center by aligning with companies such as HP, whose Moonshot servers can be configured for different workloads, and that push has won deals at PayPal and Online.net Labs, a French cloud provider.

“In the past, if you looked at physical IP, the goal was to run the ARM core faster, faster and faster,” said Wolfgang Helfricht, senior product marketing manager for the Physical IP Division at ARM. “Now it’s much more about what are the options to lower the frequency.”

Others agree. “These are low-power architectures tuned for power versus performance,” said Arvind Shanmugvel, senior director of applications engineering at Ansys. “It’s the same concept people have been using for mobile processing for a while, where you have high-performance cores and low-performance cores. And every design these days employs dark silicon, where it’s dormant 90% of the time.”
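A quick back-of-the-envelope calculation shows why that duty cycle matters. With assumed (not measured) active and idle power numbers, a block that is dark 90% of the time averages a small fraction of its peak power:

# Illustration of the "dormant 90% of the time" point: average power for a
# block that is active 10% of the time, using assumed power numbers.

active_power_w = 2.0      # assumed power when the block is switching
leakage_power_w = 0.05    # assumed power when power-gated/idle
duty_cycle = 0.10         # active 10% of the time, dark the other 90%

average_power_w = duty_cycle * active_power_w + (1 - duty_cycle) * leakage_power_w
print(f"Average power: {average_power_w:.3f} W")   # 0.245 W vs. 2.0 W if always on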

Design changes
Perhaps the biggest driver of change on the semiconductor side, though, is the convergence of multiple markets that never directly interacted in the past. The consolidation of the IP market into a handful of large players (ARM, Synopsys, Cadence, Mentor Graphics, Imagination Technologies, Kilopass), along with the difficulty of developing process technology at the most advanced nodes, has brought together EDA and IP vendors, foundries and chipmakers (including systems vendors that make their own chips, such as Apple and Samsung).

This is a new level of collaboration, driven largely by complexity and time-to-market demands, and it is redefining how chips are developed from the architecture down to physical implementation, verification, manufacturing and test. There is much riding on this collaboration, whether it’s for mobile chips or those developed for data centers, and one of the outcomes is a rethinking of what really constitutes a data center or cloud.

“If you map the data flow, and run analytics from storage to SAN (storage area network) to DRAM to server tracking the physical path of ones and zeroes, you can find 100X room for improvement,” said Srikanth Jadcherla, low power verification architect at Synopsys. “And new architectures can add an order of magnitude improvement beyond that. This is why some of the new architectures being developed now will survive. But it’s not just the data centers that we have today. There also will be mobile big data in a car or an emergency vehicle, which will be their own IoT command centers. You will be able to run diagnostics from an emergency vehicle. If they’re doing medical intervention in a place where you don’t get a signal now, they can rely on a portable medical system.”
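Mapping that data flow can be approximated with a very rough model: assign an assumed energy cost per byte to each hop from storage to CPU and add them up. The stages and numbers below are order-of-magnitude placeholders for illustration only, not measurements, but they show how the outer legs of the path dwarf the cost of the last hop to the processor:

# Toy model of the "map the data flow" exercise: sum an assumed energy cost
# per byte at each stage a working set touches on its way to the application.
# Numbers are illustrative placeholders, not measurements.

path_energy_per_byte = {   # joules per byte moved, assumed for illustration
    "disk_to_san": 5e-8,
    "san_to_server_nic": 2e-8,
    "nic_to_dram": 5e-9,
    "dram_to_cpu": 5e-10,
}

bytes_moved = 10 * 1024**3   # a 10GB working set

total_joules = sum(cost * bytes_moved for cost in path_energy_per_byte.values())
for stage, cost in path_energy_per_byte.items():
    print(f"{stage:20s} {cost * bytes_moved:8.1f} J")
print(f"{'total':20s} {total_joules:8.1f} J")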

All components inside those servers, portable or not, will have to be rethought, as well. This offers an interesting twist on the design side, because the tools will only identify actionable items. After that, it’s up to the design teams to think about what they’re trying to accomplish.

“You may have gotten the concept, but there are other issues—area, performance, design, creativity—that you may want to apply in a different way,” said Mark Milligan, vice president of marketing at Calypto. The distinction here is microarchitectures rather than architectures, which can be combined in unique ways to meet specific needs. That is exactly what is being pushed in the cloud to save power, and it's catching the eye of an increasing number of data centers because these tuned servers are inexpensive to buy and maintain. They're not going to replace mainframes or powerful Xeon, POWER or SPARC processors, but they likely will supplement them at an increasing rate.

“What’s different now is that everyone is looking for better power analysis up front,” said Krishna Balachandran, director of low power solutions marketing at Cadence. “Everyone wants a more accurate estimate. In the past, they were just designing for performance. Now you’ve got volumes of data, your cooling costs are high, and you’ve got mandates on specs to design within a certain power budget. This is a big change. Five years ago we would tell our customers we had a low-power solution, they would listen, and then nothing would happen. Today we’re getting calls asking about a low-power solution.”
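At its earliest stage, designing within a power budget can be as simple as a spreadsheet-level estimate built on the standard dynamic-power relation P ≈ αCV²f. The sketch below uses assumed block parameters purely for illustration; real flows derive activity and capacitance from simulation or emulation data and library characterization:

# Minimal sketch of an early power estimate checked against a budget, using
# the dynamic-power relation P ~ alpha * C * V^2 * f. All numbers are assumed.

def dynamic_power_w(alpha: float, c_farads: float, v_volts: float, f_hz: float) -> float:
    """Switching power: activity factor * switched capacitance * V^2 * frequency."""
    return alpha * c_farads * v_volts**2 * f_hz

blocks = {
    # name: (activity factor, switched capacitance in F, voltage, frequency)
    "cpu_cluster": (0.15, 2.0e-9, 0.80, 2.0e9),
    "memory_ctrl": (0.10, 0.5e-9, 0.80, 1.6e9),
    "io_fabric":   (0.05, 0.3e-9, 0.90, 1.0e9),
}

budget_w = 1.5
total_w = sum(dynamic_power_w(*params) for params in blocks.values())
print(f"Estimated dynamic power: {total_w:.2f} W (budget {budget_w} W)")
print("Within budget" if total_w <= budget_w else "Over budget -- rearchitect")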


