Like MPG for car buyers, server energy efficiency is becoming an increasingly important selling point for datacenter operators, who are facing soaring power bills, shrinking electricity supplies, and in some cases, the need to reduce CO2 emissions. As evidenced by a recent round of lab tests performed by independent analyst Neal Nelson and Associates, the differences in energy efficiency among servers can be striking, varying significantly with CPU, memory, workload, and other factors.
Nelson released the results of a lab test this week in which he pitted AMD's low-power 45nm quad-core Opteron Shanghai HE processor (model 2376) against Intel's low-power 45nm quad-core Xeon processor (model L5420). Nelson didn't just measure the overall raw performance (throughput) of the chips; he also assessed their energy efficiency. In other words, he determined which CPU delivered the highest performance per watt.
(Lest Nelson be accused of a bias against Intel, I'll add, as previously noted by my colleague Tom Yager, that Nelson is respected for his objectivity and professionalism in the IT world.)
Cutting to the chase, the Opteron server tended to deliver marginally better performance (measured in transactions per minute) at both lower and higher numbers of simulated users (which ranged from 100 to 500 in 50-user increments). When it came to power efficiency, however, the Opteron was the decisive victor across the board.
Making the test bed
Nelson's test bed comprised two virtually identical servers, one configured with a pair of the low-power, 45nm quad-core Opteron CPUs and the other with a pair of Xeons. There were a couple of differences, however. First, the AMD server's CPUs had a clock speed of 2.31GHz, whereas the Xeon's had a slightly higher clock speed of 2.5GHz. Second, whereas both servers used DDR2 (Double Data Rate 2) memory modules, the Intel server used FB-DIMMs (Fully Buffered DIMMs) and the AMD server did not. (According to Nelson, Intel requires that all current-generation Xeon servers use FB-DIMMs.)
Both servers were configured with identical software and system components and set to run Web-based transactions against a MySQL database. Transactions were fed to the servers from a cluster of 32 Linux-based computers that were executing Remote Terminal Emulation (RTE) software. Millions of transactions were fed to each system, during which Nelson measured both throughput and power consumption.
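The efficiency metric at the heart of the test reduces to a simple ratio: throughput (transactions per minute) divided by power draw (watts). A minimal sketch of that calculation is below; note that all the numbers in it are hypothetical placeholders for illustration, not figures from Nelson's published results.

```python
# Sketch of Nelson's efficiency metric: throughput delivered per watt consumed.
# The sample readings below are invented placeholders, not measured data.

def perf_per_watt(transactions_per_minute: float, watts: float) -> float:
    """Return transactions per minute delivered for each watt consumed."""
    return transactions_per_minute / watts

# Hypothetical readings at one simulated-user level
opteron_eff = perf_per_watt(transactions_per_minute=12_000, watts=250)
xeon_eff = perf_per_watt(transactions_per_minute=11_800, watts=310)

print(f"Opteron: {opteron_eff:.1f} tpm/W")  # prints "Opteron: 48.0 tpm/W"
print(f"Xeon:    {xeon_eff:.1f} tpm/W")
```

With placeholder numbers like these, a server can lose slightly on raw throughput yet still win decisively on efficiency if it draws meaningfully less power, which is the pattern Nelson reported.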