Power Shift

Cost of development is forcing a big change in where companies initially target their power-saving technology.


By Ed Sperling
For the past decade, most of the real gains in energy efficiency were developed for chips inside mobile electronics because of the demand for longer battery life. Dark silicon now accounts for the majority of the silicon in mobile devices, multiple power islands are commonplace to push many functions into deep sleep, and performance is usually a secondary concern.

While those advances in energy efficiency were being developed in mobile electronics, the real gains in energy efficiency inside of large corporations came from middleware—a virtualization layer that allowed IT departments to boost their server utilization from as little as 5% to as much as 85%, replacing as many as 17 servers with 1 virtualized server and running multiple applications on virtual machines that ride above the operating system. There is even work underway now to run those virtual machines closer to the metal, improving both speed and efficiency.
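
As a rough illustration of that consolidation math (the utilization and wattage figures below are assumptions for illustration, not measurements from any particular deployment), the jump from roughly 5% to 85% utilization is what makes a 17-to-1 consolidation ratio plausible:

```python
# Back-of-the-envelope server consolidation math (illustrative assumptions only).
legacy_utilization = 0.05       # ~5% average utilization on dedicated servers
virtualized_utilization = 0.85  # ~85% target utilization on a virtualized host

# If workloads are comparable and CPU-bound, the consolidation ratio is roughly
# the ratio of achievable utilizations.
consolidation_ratio = virtualized_utilization / legacy_utilization
print(f"~{consolidation_ratio:.0f} legacy servers per virtualized host")  # ~17

# Rough annual energy saved by retiring (ratio - 1) servers, assuming each
# legacy box draws ~300 W on average (hypothetical figure).
watts_per_server = 300
hours_per_year = 24 * 365
kwh_saved = (consolidation_ratio - 1) * watts_per_server * hours_per_year / 1000
print(f"~{kwh_saved:,.0f} kWh saved per consolidated host per year")
```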

The chip technology in most of those server racks has never seen the kinds of efficiency gains that SoCs in the mobile space have witnessed. That’s beginning to change, however, reversing a more than decade-long trend in where IC technology innovation actually begins. Investment in high-performance chips is on the rise again, largely because a few extra dollars in parts has little impact on the price of a powerful server.

HP blade server rack.

This is particularly true for stacked die. While the benefits in terms of performance and power have been well documented—Wide I/O connections to memory use less power because the pipes are wider and the distance signals have to travel is shorter—getting this kind of packaging right will initially be too expensive for many consumer devices.

That explains why the Hybrid Memory Cube (HMC) consortium is targeting the networking and test-and-measurement markets. Case in point: Open-Silicon’s controller IP for HMC is aimed at the high-speed computing market, particularly in the 40nm and 28nm process nodes.

“Cost will be an issue in the beginning,” said Steve Erickson, vice president of IP and platforms at Open-Silicon. “That will come down over time until it’s on par with other technology. But we’re going to see this initially in vertical segments where bandwidth, power and size are an issue and cost is less of a concern. The first area is the server area.”

Hybrid memory cube. Source: Intel.

EDA ramps up
That also helps explain why in the past year all of the Big Three EDA vendors have introduced tools or IP for the data center.

Mentor Graphics has repurposed some of its mechanical cooling tools to focus on optimal cooling design for data centers. Given that keeping server racks within acceptable temperature limits through proper air flow is critical to a data center’s bottom line—to the tune of seven to eight figures per year in large data centers—this kind of technology is getting increasing attention outside of Mentor’s typical markets, which range from the IC to the PCB.
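
To put “seven to eight figures per year” in perspective, here is a back-of-the-envelope sketch of cooling economics using PUE (power usage effectiveness). The load, PUE and electricity price below are hypothetical values chosen for illustration, not figures from Mentor or any specific facility:

```python
# Illustrative data center cooling-cost arithmetic (all inputs are assumptions).
it_load_mw = 10          # IT equipment load of a large data center, in megawatts
pue = 1.8                # power usage effectiveness: total power / IT power
price_per_kwh = 0.10     # electricity tariff, $/kWh
hours_per_year = 24 * 365

total_kw = it_load_mw * 1000 * pue
overhead_kw = total_kw - it_load_mw * 1000   # mostly cooling and power delivery

annual_overhead_cost = overhead_kw * hours_per_year * price_per_kwh
print(f"Annual cooling/overhead cost: ${annual_overhead_cost:,.0f}")
# With these inputs: 8,000 kW * 8,760 h * $0.10 is roughly $7M per year, i.e.
# seven figures, so even modest airflow improvements move the bottom line.
```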

The company also has been investing heavily in embedded software development tools, including image and signal processing libraries targeted at high-performance computing applications. Its VSIPL++ allows even big-iron companies to fine-tune applications that normally consume far more energy than is necessary.

Mentor is in good company. Synopsys’ Hybrid Prototyping Solution also has applications well beyond the ASIC or SoC. Nithya Ruff, director of product marketing for Virtualizer solutions at Synopsys, said the company’s latest release provides visibility into hypervisor software. Inside of the data center, hypervisor software has become critical in a couple of areas—within multicore chips and between chips using a virtualization layer.

“The goal is to shed more visibility on the software, as well as how the workload is switching across cores,” said Ruff. “The final straw in this process is the amount of software and validation that’s required. We’re seeing more and more custom processors and servers being created. We’re also seeing a little interest from the networking as well as the cloud space, and from the automotive and the industrial companies. A lot of this is about device to device communication.”

The last of the Big Three has staked its claim in the corporate enterprise most recently on NVM Express. Cadence introduced the industry’s first NVM Express subsystem last month, which is built on PCI Express. The interesting part of this move is that it leverages one of the most widely used standards in corporate computing, both for high-performance computing and for storage. And while most of the virtualization work so far has been done on the server side—shrinking the number of servers inside data centers by at least an order of magnitude—the next big challenge is on the storage side.

Since the turn of the century, the number of images, videos and other large files has ballooned, causing the same kind of explosion in storage that servers experienced between 1990 and 2005. Achieving the same kind of consolidation in storage is expected to yield savings similar to what server virtualization delivered on the processing side.

Performance plus efficiency
EDA brings another side to this equation, as well. While the focus in portable computers and mobile electronics has been on both performance and power, the focus on the server side has been predominantly on performance. The savings from virtualization and consolidation, as well as from the return to water rather than forced-air cooling in the largest servers, have made a huge impact on energy costs. But there is still a big gain to be made by designing processors differently from the start, said Aveek Sarkar, vice president of product engineering and customer support at Apache Design.

“The next step is designing chips to be low power from the beginning,” Sarkar said. “If you look at the Xeon chip, for example, that was designed with performance in mind. The next step will be to redesign them with power in mind. To do that we will need to accurately predict noise, which also will be essential for chip-to-chip communication. We need to do chip-package co-design from the power point of view. You need to be able to make tradeoffs such as, do you increase the clock speed or increase the throughput?”

Those kinds of considerations are particularly important in stacked die, where Wide I/O can lower power and improve throughput, providing a couple of extra knobs to turn for power and performance. Lowering the clock frequency in a stacked die can still yield a modest increase in performance because the distance between logic and memory is shorter and the channels for signals are wider, but the real gain is in energy efficiency, because it takes less power to drive the signals.
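
A first-order model makes those knobs concrete: throughput scales with bus width times clock rate, while dynamic switching power per line scales roughly with C·V²·f, and the short vertical path in a stack lowers C. The widths, capacitances and voltages below are hypothetical, chosen only to show the direction of the tradeoff:

```python
# First-order comparison of a narrow, fast off-chip bus vs. a wide, slow
# Wide I/O-style interface (all parameters are hypothetical, for illustration).
def interface(width_bits, clock_hz, cap_per_line_f, vdd, activity=0.5):
    throughput_gbps = width_bits * clock_hz / 1e9
    # Dynamic power per line ~ activity * C * V^2 * f; sum over all lines.
    power_w = width_bits * activity * cap_per_line_f * vdd**2 * clock_hz
    return throughput_gbps, power_w

# Narrow DDR-style channel: long board traces -> higher capacitance per line.
t_narrow, p_narrow = interface(width_bits=32, clock_hz=1.6e9,
                               cap_per_line_f=5e-12, vdd=1.5)
# Wide I/O over through-silicon vias: short path -> much lower capacitance.
t_wide, p_wide = interface(width_bits=512, clock_hz=0.2e9,
                           cap_per_line_f=0.5e-12, vdd=1.2)

print(f"narrow: {t_narrow:.0f} Gb/s at {p_narrow:.2f} W")
print(f"wide:   {t_wide:.0f} Gb/s at {p_wide:.3f} W")
# With these numbers the wide, slower interface delivers twice the raw bandwidth
# at a fraction of the I/O power -- the lower-clock, higher-efficiency point above.
```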

That’s particularly important in the corporate world, because some of the largest enterprise-level applications are multithreaded and can be partitioned to run on symmetric multiprocessing servers or across multiple servers simultaneously. The same is not true for most mobile applications. And while throughput is essential for all of them, there is a limit to how many homogeneous cores can be utilized in a mobile device.

Hardware-software co-design will continue to be important in both the corporate and the mobile worlds, and ironically it may be something of a bridge between the two—along with EDA, ESL and some of the most advanced tooling for chipmakers.

“At 20nm, there are more dependencies and more effects from temperature,” said Ghislain Kaiser, CEO of Docea Power. “It’s not just about complexity. It’s also more side effects. You need to do exploration in an acceptable amount of time, but you also need to make changes in an acceptable amount of time. In a floorplan you may need to rotate IP to improve performance and then analyze the effects. We’re seeing the first examples of thermal runaway, which is not easy to control. You need to take into account software and hardware, not just the hardware.”

This is a relatively new concern in the data center, where hardware and software have always been separate worlds. But it’s an important and potentially lucrative one, both for corporate data centers and for the companies selling into that market. And unlike in the past couple of decades, changes made in the data center using new and initially more expensive approaches to design, manufacturing and packaging will have a big impact in all markets—a trend that hasn’t been seen since inexpensive PC servers began filling data centers more than two decades ago.


