This is a problem for power efficiency even if the new chips consume no more power than the old ones, Bernard said. Swapping out old processors for new ones may make an application run faster, but the application then occupies a correspondingly smaller share of the more powerful CPU's resources. Meanwhile, the unused cores sit idle, still drawing a substantial amount of power. The result is more wasted capacity, unless more applications are consolidated onto fewer servers.
"As soon as you replace your hardware with something more efficient, your CPU usage, by definition, will go down," Bernard said.
Speakers at the conference estimated that average CPU utilization (the percentage of processor cycles actually tasked with doing something) hovers somewhere between 5 percent and 25 percent. Despite virtualization efforts, that figure appears to be falling over time.
Organizations are not thinking enough about how to consolidate workloads, Bernard charged. Each new application an organization adds tends to get its own silo, and little effort goes into sharing resources.
Bernard used Microsoft as an example. While Microsoft online services such as Hotmail and Bing run at very high CPU utilization rates, he noted, the company also has many other projects, both internal and external, that use only a small portion of the capacity of the servers devoted to them. For each new project, a manager may provision more servers than the task requires. And when the hardware is upgraded, the CPU utilization rate drops even further.
Bernard said Microsoft, like many large organizations, has "hundreds and hundreds of small applications that aren't mission-critical, but they need to be serviced, and they all overprovision and have massive headroom."
Server makers and other component manufacturers have gone a long way toward building power-saving features into their equipment. But because of low CPU utilization and ingrained organizational habits, the actual savings have proved minimal.
John Stanley, an analyst at the research firm The 451 Group, which purchased the Uptime Institute last year, surveyed power usage across industry members of Uptime. In a panel discussion, he previewed some of his early findings.
He found that fluctuations in server traffic do not correspond to fluctuations in the amount of power that servers, as a group, draw from the power supply. "Even though you may have big variations with [different] boxes, overall, the variation in the average is very small," he said. Stanley plans to publish his findings in a research note later this month.
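Stanley's observation is essentially an averaging effect. As a rough illustration (the per-server wattage range and fleet size below are assumptions, not his data), the mean draw of a large fleet barely moves even when each individual box swings widely:

    # Rough simulation: individual servers fluctuate widely, but the
    # fleet-wide average barely moves. All figures are assumptions.
    import random

    random.seed(0)
    N_SERVERS = 1000
    averages = []
    for _ in range(100):  # 100 sampling intervals
        # each server's draw swings anywhere from 200 W to 400 W
        total = sum(random.uniform(200, 400) for _ in range(N_SERVERS))
        averages.append(total / N_SERVERS)

    spread = max(averages) - min(averages)
    print(f"Each box varies over a 200 W range; the fleet average "
          f"varies by only {spread:.1f} W.")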
Servers may have power-saving features, but given how workloads are spread across them, those features do little to reduce overall energy consumption.
Even when idle, a server can draw hundreds of watts, yet few users want to turn servers off, given the time it would take to bring them back up, Andrew Fanara said in the same panel discussion. Fanara, the former Energy Star manager for data center specifications, is now with infrastructure-management software provider OSISoft.
What is needed is a more dynamic way for the data center to scale its power usage with the amount of work to be done, speakers said. "As an industry, what we'd like to see is truly linear scaling, where you'd go from zero watts when doing zero work to drawing a lot of power [only] when you are doing more work," Stanley said.
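To see the gap between today's servers and the linear scaling Stanley describes, here is a simple sketch comparing a conventional power curve, with its high idle floor, against an ideal energy-proportional one. The 200 W idle and 400 W peak figures are assumptions chosen for illustration, not measurements from the panel.

    # Sketch: a conventional server's power curve versus the ideal
    # "zero watts at zero work" linear scaling. Wattages are assumed.
    IDLE_WATTS = 200.0  # draw at zero work (assumption)
    PEAK_WATTS = 400.0  # draw at full utilization (assumption)

    def typical_watts(utilization: float) -> float:
        """Affine model: a large fixed idle cost plus a load term."""
        return IDLE_WATTS + (PEAK_WATTS - IDLE_WATTS) * utilization

    def proportional_watts(utilization: float) -> float:
        """The ideal: power scales linearly from zero to peak."""
        return PEAK_WATTS * utilization

    for u in (0.0, 0.10, 0.25, 1.0):
        print(f"{u:>4.0%} load: typical {typical_watts(u):5.1f} W, "
              f"ideal {proportional_watts(u):5.1f} W")

At the 5 to 25 percent utilization levels cited earlier, the conventional curve in this sketch spends most of its energy simply being on, which is the waste the panelists were describing.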
This idea was echoed by eBay's data center chief, Dean Nelson, during his talk.