Intel's power capping magic is called Intel Dynamic Power Node Manager Technology. Designed for servers running Intel's Xeon 5500 chips, Node Manager is an out-of-band power management policy engine, embedded in the Xeon's chipset, that works with BIOS and OS power management (OSPM) to dynamically adjust platform power to achieve maximum performance per watt at the server level.
Among its features is Dynamic Power Monitoring, which measures actual power consumption of a server platform, providing real-time power-consumption data. The Platform Power Capping feature sets platform power to a targeted power budget while maintaining maximum performance for the given power level. The Power Threshold Alerting feature monitors platform power against a targeted power budget. When the target power budget cannot be maintained, Node Manager sends out alerts to the management console.
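The interplay of those three features -- monitor actual draw, enforce a budget, alert when the budget can't be held -- can be sketched as a simple control loop. This is a hypothetical illustration in Python, not Intel's firmware or API; the class and callback names are invented for clarity.

```python
# Hypothetical sketch of the monitor/cap/alert loop described above.
# Names and thresholds are illustrative, not Intel Node Manager's interface.

class PowerCapController:
    def __init__(self, budget_watts, alert):
        self.budget = budget_watts   # targeted power budget (Platform Power Capping)
        self.alert = alert           # callback standing in for the management console
        self.readings = []           # real-time samples (Dynamic Power Monitoring)

    def sample(self, watts):
        """Record a power reading and check it against the budget."""
        self.readings.append(watts)
        if watts > self.budget:
            # In hardware, capping would throttle the platform here; if the
            # budget still cannot be maintained, Power Threshold Alerting
            # notifies the management console.
            self.alert(f"power {watts}W exceeds budget {self.budget}W")

alerts = []
ctl = PowerCapController(budget_watts=260, alert=alerts.append)
for w in (240, 255, 290):   # simulated platform readings
    ctl.sample(w)
print(alerts)               # one alert, for the 290W reading
```

In the real product this loop runs out-of-band in the chipset's management firmware rather than in host software, which is why it keeps working regardless of OS state.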
Intel also has developed a software add-on to Node Manager called Intel Datacenter Manager, designed to monitor and control power for a group of servers. Intel Datacenter Manager depends on Intel Dynamic Power Node Manager. Datacenter Manager features include group-level monitoring of power consumption, log querying for trend data, group power limiting, and group-level power alerts and notifications.
Baidu racks up savings
Baidu, China's largest search company, reports success using Intel's power-capping technology. Based on a proof-of-concept study of Baidu's application of the technology, the companies report that a datacenter using the technology could save up to 40 watts per system -- without performance impact. This translates into as much as 20 percent additional datacenter capacity within the same rack-level power envelope, and a potential rack-density improvement of 20 to 40 percent.
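The 40W-to-20-percent arithmetic checks out with simple division. The rack power envelope below is an assumed figure chosen to make the math concrete; the article does not state Baidu's actual per-rack budget.

```python
# Back-of-the-envelope check of the density claim, assuming an
# illustrative 1,560W rack power envelope (not a figure from the study).

rack_budget_w = 1560   # assumed rack-level power envelope
uncapped_w = 300       # measured peak per server, uncapped
capped_w = 260         # per-server cap after the 40W savings

before = rack_budget_w // uncapped_w    # servers per rack without capping
after = rack_budget_w // capped_w       # servers per rack with capping
gain = (after - before) / before        # fractional capacity gain
print(before, after, f"{gain:.0%}")
```

Under that assumed envelope, capping lifts the rack from five servers to six within the same power budget, the 20 percent additional capacity the companies cite.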
Baidu's predicament before deploying power capping was pretty typical: It was leasing racks at a datacenter, and each rack was power limited. The company sought to save money by cramming as many machines as possible into the fewest number of racks.
Testing Intel's power-capping wares started at the individual node level. Step one was to measure power consumption and performance at various levels of CPU use to identify the sweet spot for power management -- that is, where the server achieved the maximum power reduction with the minimum performance loss. The testing revealed that the optimal workload was reached at a CPU utilization of around 50 to 60 percent, with peak power at about 300W per server. Power consumption tended to stick at around 290W, with some spikes to 300W.
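That sweet-spot search amounts to scanning measured (utilization, power, performance) points for the best performance per watt. The sample data below is invented for illustration; the study's raw measurements are not published.

```python
# Sketch of the sweet-spot search: from (cpu_util %, platform watts,
# relative throughput) measurements, pick the best performance per watt.
# The numbers are made up to mirror the shape of the reported results.

samples = [
    (30, 230, 0.45),
    (50, 270, 0.80),
    (60, 290, 0.92),
    (90, 300, 0.93),   # near saturation, throughput gains flatten out
]

best = max(samples, key=lambda s: s[2] / s[1])
print(best)
```

With diminishing throughput returns near full load, the best ratio lands in the 50 to 60 percent utilization region, consistent with what the testing found.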
The next step was to test two levels of power capping: 260W and 200W. The minimum 40W power reduction was needed in order to add another server to each 5U rack, thus achieving the goal of increasing server density. The cap could not go below 200W, as that was the approximate amount of power the server needed simply to idle.
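The two figures in that step define a window for any candidate cap: it must free at least 40W relative to the 300W peak to fit the extra server, but it cannot drop below the roughly 200W the server draws at idle. A quick sketch of that constraint, with illustrative candidate caps:

```python
# The cap-selection window implied by the test: at least 40W below the
# 300W peak (to fit one more server), but no lower than ~200W idle draw.

peak_w, idle_w, needed_savings_w = 300, 200, 40

def valid_cap(cap_w):
    """True when the cap frees enough power without dipping below idle."""
    return idle_w <= cap_w <= peak_w - needed_savings_w

candidates = [180, 200, 260, 280]
print([c for c in candidates if valid_cap(c)])   # the two caps Baidu tested
```

Both 200W and 260W sit inside that window, which is why those were the two levels chosen for testing.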