Energy efficiency takes center stage in the server processor wars as strategies diverge.
By Barry Pangrle
It’s all about the system. One energy-efficient component doesn’t an energy-efficient system make. The industry’s two x86 designers recently made big announcements: Intel introduced its new Sandy Bridge-based Xeon Processor E5-2600 product family, and AMD announced its planned acquisition of SeaMicro.
Both announcements emphasized the power savings these technologies bring to the marketplace, and it looks as if Intel is continuing to extend its technological lead in the high-end server processor market with the new E5-2600. Still, the SeaMicro deal looks like an interesting play to address overall server energy efficiency by means other than CPU improvements alone. Both announcements also noted the importance of the fast-growing server market for cloud computing.
As we’ve discussed before, choosing the right architecture for a given task is absolutely essential for generating a power- or energy-efficient design. World-renowned computer architect Seymour Cray famously quipped, “If you were plowing a field, which would you rather use: Two strong oxen or 1024 chickens?” Well, if the field changes, 1024 (or, in SeaMicro’s case, maybe 768) chickens look pretty good.
In a talk reported last year, Dileep Bhandarkar, distinguished engineer at Microsoft, called for 16-core SoCs based on Intel Atom or AMD Bobcat cores that would also integrate all of the core logic and I/O functions currently placed in separate chips. He presented a chart showing how components in a system need to be balanced to get the best power, performance, and cost outcome. Bhandarkar stated,
“The conclusion was clear—the best balance of performance, power, and price was found in the lower-power processors.”
In a whitepaper, SeaMicro states that, “while much of the industry discussion centers around the CPU, the CPU consumes only one-third of the power used by a server. The remaining two-thirds is consumed by hundreds of other components. If one seeks to reduce server power consumption by 75%, one needs to focus on the non-CPU power-drawing components first, and only then on the CPU.”
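The arithmetic behind that claim is worth spelling out. A quick sketch, using only the one-third/two-thirds split quoted above (the variable names are mine, for illustration):

```python
# Server power split per SeaMicro's whitepaper: CPU ~1/3, everything else ~2/3.
CPU, OTHER = 1 / 3, 2 / 3

# Eliminating the CPU entirely -- the best case for CPU-only optimization --
# cuts total server power by only a third:
best_cpu_only_savings = CPU / (CPU + OTHER)

# To cut total power by 75%, the budget left is 0.25 of the original.
# Even with a zero-power CPU, the non-CPU components alone must shrink
# from 2/3 of the original power down to 1/4 of it:
required_other_reduction = 1 - 0.25 / OTHER  # a reduction of 62.5%
```

In other words, a 75% system-level reduction is mathematically out of reach without attacking the non-CPU two-thirds first, which is exactly SeaMicro's point.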
SeaMicro also realized that the challenge in the data center had turned into a problem of handling huge volumes of relatively modest computational workloads rather than solving a few complex problems (chickens vs. oxen). These workloads are generated in part by the millions of users wanting to perform searches, view Web pages, check e-mail, and read the news. Put simply, large, complex, high-speed, multi-core, multi-socket CPUs are “overkill,” and the mismatch between the CPU and the primary workload in the data center is a fundamental underlying cause of the data-center power issue.
Pushing data around is expensive from an energy standpoint. Whether you are sending data across a chip, multiple chips or multiple boards, greater parasitic capacitance means greater energy to move that data. Energy-efficient systems are all about the efficient choreography of the movement of data within the system.
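The physics here is the familiar dynamic-switching relation: the energy to drive a signal scales with the capacitance being charged and discharged, roughly E = C·V² per full cycle. A back-of-the-envelope sketch, with illustrative (assumed, not measured) capacitance values:

```python
# Dynamic switching energy: E = C * V^2 per full charge/discharge cycle.
# The capacitance values below are rough illustrative assumptions.
V = 1.0             # supply voltage, volts
C_ON_CHIP = 1e-12   # ~1 pF for a short on-chip wire (assumed)
C_OFF_CHIP = 1e-10  # ~100 pF for a chip-to-chip board trace (assumed)

def switching_energy(c_farads, v_volts=V):
    """Energy in joules to charge and discharge a capacitance once."""
    return c_farads * v_volts ** 2

# Driving the same bit across a board trace costs ~100x the energy of
# keeping it on-chip -- hence the payoff from consolidating components
# and moving traffic onto an efficient fabric.
ratio = switching_energy(C_OFF_CHIP) / switching_energy(C_ON_CHIP)
```

The exact numbers vary with process and packaging, but the orders-of-magnitude gap between on-chip and board-level signaling is the point.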
SeaMicro addressed the energy-efficiency issue by pairing low-power processors with its Freedom interconnect fabric, consolidating the non-CPU components that draw the bulk of a server’s power.
SeaMicro claims that this combination of technologies produces a system that uses one-fourth the power and takes one-sixth the space of the best-in-class competition. The SeaMicro Freedom fabric is also independent of the CPU instruction set architecture (ISA), so the use of other ISAs, like ARM, remains an open possibility. For applications that do need more compute power, SeaMicro also announced new products based on more powerful Intel Xeon processors. Going forward, the obvious expectation is that SeaMicro will transition to parts designed by its acquiring company, which also opens some interesting possibilities for future APU functionality.
–Barry Pangrle is a solutions architect for low-power design and verification at Mentor Graphics.