U.S. data centers are using more electricity than they need. It takes 34 power plants, each capable of generating 500 megawatts of electricity, to power all of the data centers in operation today. By 2020, the nation will need another 17 similarly sized power plants to meet projected data center energy demands as economic activity becomes increasingly digital.
Any increase in the use of fossil fuels to generate electricity will result in an increase in carbon emissions. But added pollution isn't an inevitability, according to a new report on data center energy efficiency from the Natural Resources Defense Council (NRDC), an environmental action organization.
Nationwide, data centers used a total of 91 billion kilowatt-hours of electricity in 2013, and they are projected to use 139 billion kilowatt-hours by 2020 -- a 53 percent increase.
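The report's growth figure follows directly from the two usage numbers. A quick sketch of the arithmetic, using only the figures quoted above:

```python
# Illustrative arithmetic behind the NRDC projection (figures from the report).
usage_2013_bkwh = 91   # billion kWh used by U.S. data centers in 2013
usage_2020_bkwh = 139  # billion kWh projected for 2020

growth = (usage_2020_bkwh - usage_2013_bkwh) / usage_2013_bkwh
print(f"Projected growth: {growth:.0%}")  # prints "Projected growth: 53%"
```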
This chart shows the estimated power usage (in billions of kilowatt-hours), and the cost of power used, by U.S. data centers in 2013 and 2020, and the number of power plants needed to support the demand. The last column shows carbon dioxide (CO2) emissions in millions of metric tons. (Source: NRDC)
The report argues that an improvement in energy efficiency practices by data centers could cut energy waste by at least 40 percent. The problems hindering efficiency include "comatose" servers, also known as ghost servers, which use power but don't run any workloads; overprovisioned IT resources; lack of virtualization; and procurement models that don't address energy efficiency. The typical computer server operates at no more than 12 percent to 18 percent of capacity, and as many as 30 percent of servers are comatose, the report states.
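To put the report's "at least 40 percent" figure in energy terms, it can be applied to the 2013 usage total quoted above. Treating the 40 percent as directly recoverable is a simplification for illustration, not a claim from the report:

```python
# Rough translation of the report's efficiency claim into annual energy savings.
# Both inputs come from the report; the multiplication is a simplification.
total_2013_bkwh = 91    # billion kWh used by U.S. data centers in 2013
waste_fraction = 0.40   # "at least 40 percent" of energy wasted

recoverable_bkwh = total_2013_bkwh * waste_fraction
print(f"Potentially recoverable: {recoverable_bkwh:.1f} billion kWh/year")
# prints "Potentially recoverable: 36.4 billion kWh/year"
```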
The paper tallies up the consequences of inattention to data center energy efficiency on a national scale. It was assembled and reviewed with help from several organizations, including Microsoft, Google, Dell, Intel, The Green Grid, Uptime Institute and Facebook -- all of which made "technical and substantial contributions."
The NRDC makes a sharp distinction between large data centers run by large cloud providers, which account for about 5 percent of all data center energy usage, and smaller, less-efficient facilities. Throughout the industry, there are "numerous shining examples of ultra-efficient data centers," the study notes. These aren't the problem. It's the thousands of other mainstream business and government data centers, along with small corporate and multitenant operations, that are the problem, the paper argues.
The efficiency accomplishments of the big cloud providers "could lead to the perception that the problem is largely solved," said Pierre Delforge, the NRDC's director of high-tech sector energy efficiency, but that perception doesn't match reality when all data centers are taken into account.
Data centers are "one of the few large industrial electricity uses which are growing," Delforge said, and they are a key factor in creating demand for new power plants in some parts of the country.
Businesses that move to colocation, or multitenant, data center facilities don't necessarily make efficiency gains. Customers of such facilities may be charged according to a space-based pricing model, paying by the rack or by square footage, with a limit on how much power they can use before additional charges kick in. That model offers little incentive to operate equipment as efficiently as possible.
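The incentive problem can be seen in a toy model of a space-based colocation bill. All of the prices and the `monthly_bill` function below are hypothetical, made up for illustration; real colocation contracts vary widely:

```python
# Hypothetical colocation bill under space-based pricing: the tenant pays per
# rack, and power is free up to a per-rack cap, with an overage charge beyond it.
def monthly_bill(racks, kw_drawn, rack_price=1000.0, kw_cap_per_rack=5.0,
                 overage_per_kw=200.0):
    cap = racks * kw_cap_per_rack
    overage = max(0.0, kw_drawn - cap)
    return racks * rack_price + overage * overage_per_kw

# Below the cap, cutting power use does not change the bill at all:
print(monthly_bill(racks=4, kw_drawn=18))  # 4000.0
print(monthly_bill(racks=4, kw_drawn=10))  # 4000.0 -- same bill, 8 kW less drawn
```

Because both tenants pay the same 4,000 under this model, the one who shuts off comatose servers sees no savings, which is the weak-incentive point the paragraph above makes.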