Different strategies for writing software and utilizing cores could have a big effect on system design.
By Ed Sperling
The decades-old approach of building ever more powerful processors with ever-faster clock speeds is changing. Performance still matters in some settings, but the real concern now is adding more functionality within power budgets.
The most pressing tradeoff is now performance vs. power, which has forced processor architects at AMD, Intel and IBM to take into account everything from application software to the firmware that manages some of the functions on a chip to the middleware that makes it all work together.
“One phenomenon we’re seeing is that a number of customers claim their data centers are full, but when we go out to see them they’re only half full of hardware,” said Margaret Lewis, product marketing director at AMD. “They can’t draw any more power in places like the Northeastern United States, California or Germany.”
Part of that is due to virtualization, which has been pushed on data centers in particular as the way to boost utilization of a server. According to McKinsey & Co., data center server utilization is as low as 5%, which has made virtualization a natural way to improve efficiency and cut costs. And with many software applications unable to utilize more than a couple of cores of a server, it’s sometimes the only way to boost utilization of multicore servers.
That is about to change, however. “Most of the software hasn’t made it over to multithreading,” Lewis said. “So instead of just using cores for applications, there are other switches we can turn on processors to do things like balance memory or have better I/O.”
The software also can be tweaked to boost optimization lower down on the stack. Instead of tuning each Java virtual machine running on a separate core individually, all of them can be optimized together so that every Java applet benefits.
“We are seeing a number of new software models,” Lewis said. “The only thing that keeps everything around is that the legacy software people don’t want to give up what they have. It’s easy to multithread to two to four cores. After that, debugging becomes too difficult. A different approach is multitasking, so you do different tasks on different cores. What’s being done with the CPU and the GPU is the first big example of that.”
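Lewis’ distinction between multithreading and multitasking can be illustrated with a small sketch. The Java example below is hypothetical; the task names and thread-pool setup are illustrative assumptions rather than AMD’s software. It hands each core its own independent job instead of splitting one algorithm across many threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustration of the "multitasking" approach: each core gets its own
// self-contained task rather than a slice of one parallelized algorithm.
public class MultitaskDemo {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // The task names are invented placeholders, not workloads from the article.
        pool.submit(() -> System.out.println("core task: compress log files"));
        pool.submit(() -> System.out.println("core task: handle network I/O"));
        pool.submit(() -> System.out.println("core task: run background indexing"));
        pool.submit(() -> System.out.println("core task: render graphics"));

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Because the tasks share almost no state, there is little locking to reason about, which is why this style tends to be easier to debug than fine-grained multithreading across many cores.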
Intel, meanwhile, has been working with Microsoft to improve the efficiency of its processors.
“Performance was always the focus, but power savings are now part of the methodology,” said George Alfs, program manager at Intel. “For years we have been working with Microsoft to make sure that the operating system isn’t spinning wildly waiting for the next keystroke. We’re now putting the operating system into a sleep state even between keystrokes. There are seven sleep states and a variety of ways to take advantage of power.”
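The spinning-versus-sleeping behavior Alfs describes can be sketched at the application level. The Java example below is a hypothetical illustration, not Intel’s or Microsoft’s actual code: it contrasts a busy-wait loop, which keeps a core running at full power while it waits for input, with a blocking wait, which parks the thread so the scheduler can idle the core and let it drop into a low-power sleep state.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Contrast between spinning for input and blocking until input arrives.
// An application-level analogy for the OS behavior described above.
public class WaitStyles {
    private static final BlockingQueue<Character> keystrokes = new LinkedBlockingQueue<>();

    // Busy-wait: the thread spins at 100% CPU, so the core never idles.
    static char pollForKey() {
        Character key;
        while ((key = keystrokes.poll()) == null) {
            // spin: burns power while waiting for the next keystroke
        }
        return key;
    }

    // Blocking wait: the thread parks until a key arrives, letting the core
    // idle (and potentially enter a sleep state) in the meantime.
    static char waitForKey() throws InterruptedException {
        return keystrokes.take();
    }

    public static void main(String[] args) throws InterruptedException {
        keystrokes.put('a');
        System.out.println("blocked wait returned: " + waitForKey());
    }
}
```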
Part of Intel’s road map also calls for more threading. Windows 7 is expected to offer better scheduling than Vista, allowing more than one application to run at the same time on different cores. It also calls for power flexibility to provide more thermal headroom for either boosting performance or lowering power at 32nm.
Intel also is building basic graphics processing into the processor, which will further utilize some of the cores. How many cores depends on the graphics requirements. The first Larrabee chip, due out next year, will be a discrete graphics card capable of ray tracing, but there is certainly a possibility that Intel could integrate some of those graphics capabilities into its processors.
Intel also will be using a combination of homogeneous and heterogeneous cores, Alfs said, which is a different direction than the company said it would take several years ago. Some of those cores could be for I/O and graphics, Alfs said, similar to the approach taken by AMD. Intel also plans to use some cores for encryption/decryption, which has been a drag on system performance in the past.