Knowledge Center

Processor Utilization

A measurement of the amount of time processor core(s) are actively in use.

Description

Processor utilization is a measurement of the amount of time a processor, or a set of processor cores, is in use. The term took on new significance in data centers early in this century, when the cost of the energy needed to power and cool densely packed racks of servers began creeping up, in some cases reaching millions of dollars a year.
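
Utilization is typically reported as the fraction of a sampling interval during which the processor was doing work rather than sitting idle. The sketch below shows one common way to estimate this on Linux by sampling the aggregate counters in /proc/stat twice; the field layout and the treatment of idle and iowait time are general Linux conventions assumed for this example, not something specified in this entry.

import time

def read_cpu_times():
    # Read the aggregate "cpu" line of /proc/stat (Linux convention, assumed here).
    # Fields are cumulative jiffies: user, nice, system, idle, iowait, irq, ...
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]   # idle + iowait counted as "not busy"
    total = sum(fields)
    return idle, total

def cpu_utilization(interval=1.0):
    # Fraction of the sampling interval during which the cores were busy.
    idle1, total1 = read_cpu_times()
    time.sleep(interval)
    idle2, total2 = read_cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return busy / (total2 - total1)

if __name__ == "__main__":
    print(f"Utilization: {cpu_utilization():.1%}")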

This was due to two factors. First, by keeping pace with Moore’s Law, server processor makers such as IBM, Intel and AMD had packed so many transistors onto a piece of silicon that the amount of heat generated was enormous. Servers have maximum operating temperatures, so they have to be cooled to stay below those limits. The more transistors, the more energy required to keep the servers running and the greater the heat, which in turn requires more cool air to be blown into server racks.

The second cause was the proliferation of inexpensive blade servers in the 1990s. Rather than trusting individual servers to schedule multiple tasks and run multiple applications, which increased the risk of downtime across multiple processes in a corporation, a common approach was simply to add more servers. But those servers were almost always on (unlike the dark silicon inside today’s mobile processors), and utilization rates of between 5% and 15% were the norm.

In the first decade of the century, a common solution was virtualization, which uses hypervisor technology to schedule multiple virtual machines, regardless of their operating systems, onto the same physical processor, raising utilization levels as high as 85%. More recently, server designs have followed the path of mobile device chips, where individual cores can be shut down when they are not in use and powered back up when they are needed.
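
The arithmetic behind consolidation is straightforward. The sketch below uses hypothetical figures drawn from the ranges quoted in this entry (ten servers at 8% average utilization, an 80% target on the virtualized host) to show why one host can absorb the work of many lightly loaded machines.

# Hypothetical consolidation arithmetic; the numbers are illustrative, not measurements.
standalone_servers = 10
avg_utilization = 0.08            # within the 5%-15% range cited above

total_work = standalone_servers * avg_utilization    # work measured in "fully busy servers"
target_utilization = 0.80                            # kept under the ~85% ceiling mentioned above

hosts_needed = total_work / target_utilization
print(f"Total work: {total_work:.1f} fully busy servers' worth")
print(f"Virtualized hosts needed at {target_utilization:.0%} load: about {hosts_needed:.1f}")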


Multimedia

M2M’s Network Impact