Overprovisioning and overallocation often lead to overspending in the datacenter. There's certainly something to be said (such as, "I don't want to lose my job") for ensuring that your facility has sufficient power, computing hardware, and backup equipment to maintain precious uptime. However, the trade-off can be thousands -- if not millions -- of dollars wasted on excess gear that eats up precious white space and costly watts of electricity.
Datacenter operators are tackling the problem in numerous ways, such as turning down or eliminating CRAC units, hunting down zombie servers, and employing virtualization to reduce machine count. Some are taking their efforts a step further, employing an emerging technology called power capping that boosts server density and saves on space and power.
As the name implies, power capping refers to the practice of limiting how much electricity a server can consume. Typically, the power allocated to a server is steady and fixed, based on a worst-case scenario: how much power the server needs when running at maximum utilization. In reality, most servers in the datacenter don't come close to reaching maximum utilization. That means that most datacenter operators are setting artificially low limits on how many servers they can deploy.
Stuffing the power envelope
Let's say you have a max power envelope of 1MW. For the sake of argument, let's say 400,000 watts of that megawatt goes to power distribution, cooling, storage, and networking equipment, which leaves 600,000 watts to allocate to your servers. You decide to stick to the power allocation printed on the nameplates of your machines, which is 400W. That means that your budget allows 1,500 1U servers in your datacenter.
But what if, in reality, your servers never need more than an average 300 watts of power to maintain their required performance level? If there was a way to ensure you didn't exceed your 1MW power limit, you could pack 2,000 1U servers into the same amount of space -- with little to no need to add power and cooling infrastructure.
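The arithmetic above can be sketched in a few lines, using the article's hypothetical figures:

```python
# Worked example of the power-budget math above.
# All numbers are the article's hypothetical figures, not real-world data.

TOTAL_BUDGET_W = 1_000_000   # 1MW facility power envelope
OVERHEAD_W = 400_000         # power distribution, cooling, storage, networking
SERVER_BUDGET_W = TOTAL_BUDGET_W - OVERHEAD_W  # 600,000 W left for servers

NAMEPLATE_W = 400            # per-server nameplate rating
CAPPED_W = 300               # enforced per-server power cap

servers_at_nameplate = SERVER_BUDGET_W // NAMEPLATE_W  # 1,500 servers
servers_at_cap = SERVER_BUDGET_W // CAPPED_W           # 2,000 servers

print(servers_at_nameplate, servers_at_cap)  # 1500 2000
```

Capping each server at 300W instead of budgeting for its 400W nameplate buys you 500 extra servers under the same 1MW envelope.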
That's where power capping comes in. With power capping and complementary management software, you could ensure that no server draws more than 300 watts at once. Some companies, such as Intel, have developed power capping technology that can be applied at the rack level.
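To make the mechanism concrete, here is a minimal sketch of the kind of feedback loop capping software runs: read the server's power draw, and if it exceeds the cap, dial back a performance limit until it complies. The `read_power_w` and `set_cpu_limit_pct` functions are hypothetical stand-ins for what real platform firmware (such as a BMC or processor power-limiting interface) would provide.

```python
def enforce_cap(read_power_w, set_cpu_limit_pct, cap_w=300, step_pct=5):
    """Throttle CPU performance stepwise until power draw falls under cap_w.

    read_power_w and set_cpu_limit_pct are hypothetical hooks into the
    server's management hardware; real systems expose equivalents.
    Returns the CPU limit (percent) at which the server complies.
    """
    limit = 100  # start at full performance
    while read_power_w() > cap_w and limit > step_pct:
        limit -= step_pct          # throttle a little more
        set_cpu_limit_pct(limit)   # apply the new performance limit
    return limit

# Toy demonstration with a simulated server whose draw scales with its limit
# (4 W per percent, i.e. 400 W at full tilt -- the article's nameplate figure).
state = {"limit": 100}
read = lambda: 4 * state["limit"]
apply_limit = lambda pct: state.update(limit=pct)

print(enforce_cap(read, apply_limit))  # 75 -> draw is now 300 W
```

In practice the loop runs continuously, and rack-level controllers coordinate caps across many servers so the aggregate stays within the facility budget; this sketch shows only the single-server idea.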