One of the eternal concerns for any data center is cooling the mass of metal that makes a room a data center. Over the years we've seen a significant decrease in the power consumption and heat generation of a general-purpose server, and an order-of-magnitude decrease when measured per server instance. Where once stood 2U servers with several 3.5-inch disks, each running a single server instance, you'll now find two 1U servers with 2.5-inch drives or no disks at all, running perhaps 30 server instances.
But the fact remains that we're also seeing a proliferation of logical server instances. Compare a data center from five years ago with the same infrastructure today, and you should see fewer physical servers -- but that's no guarantee.
In the meantime, costs for power and cooling have not been stagnant. Power consumption is still, and likely always will be, a source of pain in the data center budget. I can recall a time in the engineering labs at Compaq when the monthly power bills for the data center-sized labs would run into the hundreds of thousands of dollars, and that was many moons ago.
Data center power draw comes from two main sources: the hardware (servers, storage, and networking) and the cooling systems. The larger and hotter the metal, the more power needed to cool it, to exhaust the hot air, and to maintain suitable humidity. There are many ways to combat the laws of physics and maintain reasonable intake temperatures. There are water-cooled racks and in-row cooling units that serve to bring the cold air where it's most necessary -- at the server inlet.
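To get a feel for how cooling and overhead multiply the hardware's power bill, here's a minimal back-of-envelope sketch using power usage effectiveness (PUE), the common ratio of total facility power to IT power. All of the numbers here -- the 500 kW IT load, the 1.6 PUE, the $0.10/kWh rate -- are illustrative assumptions, not figures from this article.

```python
def annual_power_cost(it_load_kw, pue=1.6, rate_per_kwh=0.10):
    """Rough estimate of annual electricity cost for a data center.

    it_load_kw    -- power drawn by servers, storage, and networking (kW)
    pue           -- power usage effectiveness: total facility power / IT power
    rate_per_kwh  -- electricity price in dollars per kWh (assumed flat)
    """
    total_kw = it_load_kw * pue        # IT load plus cooling and other overhead
    hours_per_year = 24 * 365
    return total_kw * hours_per_year * rate_per_kwh

# A hypothetical 500 kW IT load at PUE 1.6 and $0.10/kWh:
cost = annual_power_cost(500)
print(f"${cost:,.0f} per year")  # roughly $700,800 per year
```

The point of the exercise: at a PUE of 1.6, every watt of server load drags another 0.6 watts of cooling and overhead along with it, which is why the in-row and water-cooled approaches that lower that ratio can pay for themselves.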
These methods aren't for general use, however. They generally require careful hot- and cold-aisle designs, and while they can reduce overall power and cooling bills, they can cost more up front. Perhaps surprisingly, these designs are also very effective in smaller builds, where as few as two in-row units can cool eight racks.