Since 1965, Moore's Law has served as a benchmark for the computer hardware industry, pushing vendors to double the processing power of computing equipment every 18 months or so. It turns out that computers have doubled in energy efficiency at about the same rate, according to a new study co-authored by data-center-power guru Dr. Jonathan Koomey.
The findings of the study, titled "Implications of Historical Trends in the Electrical Efficiency of Computing," are worth noting: They mean mobile devices -- including sensors and controls -- will keep getting smaller while requiring fewer watts to pull off greater computational feats. Higher energy efficiency also means lower power bills and, ideally, a widespread reduction in carbon emissions.
On a related note, anyone who has followed green IT trends over the past couple of years is likely familiar with Koomey's previous study about data center power consumption [PDF]. The co-authors for the new study included Microsoft senior program manager Stephen Berard, Carnegie Mellon University doctoral candidate Marla Sanchez, and Intel senior staff platform technologist Henry Wong.
The team examined the energy efficiency of computing devices dating back as far as the 1940s, when the Electronic Numerical Integrator and Computer (ENIAC) was first revealed. Energy efficiency, in this context, refers to computations per kilowatt-hour (kWh), the IT equivalent of MPG (miles per gallon).
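To make the metric concrete, here is a minimal sketch of how computations per kWh falls out of a machine's throughput and power draw. The numbers are purely hypothetical, chosen only to illustrate the arithmetic; they are not figures from the study.

```python
# Hypothetical example: a machine completing 3.6 billion computations
# over one hour while drawing a steady 400 W.
computations = 3.6e9          # total computations completed (assumed)
power_watts = 400.0           # average power draw at full load (assumed)
hours = 1.0                   # measurement window

energy_kwh = power_watts * hours / 1000.0   # 0.4 kWh consumed
efficiency = computations / energy_kwh      # computations per kWh

print(f"{efficiency:.2e} computations per kWh")  # → 9.00e+09
```

The same ratio lets dissimilar machines, from ENIAC to a modern laptop, be placed on one efficiency axis, which is what makes the historical comparison in the study possible at all.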
The authors acknowledge that meaningfully measuring and comparing energy efficiency among different types of computing hardware is difficult, if not controversial. A single server, for example, might yield more computations per kWh running one type of workload than another. It becomes more complex still when you're trying to compare the performance per watt of, say, a behemoth Cray supercomputer from the 1980s with that of a 2009 MacBook Air.
Drawing on previously published historical data on computing performance and combining it with measured data on power use of each machine at full load, Koomey's team came up with an interesting graph, showing that since the birth of the ENIAC, computing efficiency has steadily doubled every 1.57 years.
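A doubling period of 1.57 years implies a compounding curve, and a short sketch makes the implied gains tangible. The 1.57-year figure is from the article; the projection function is just the standard exponential-growth formula applied to it.

```python
# Sketch: relative gain in computations per kWh implied by a
# doubling period of ~1.57 years (the figure reported by Koomey's team).
DOUBLING_PERIOD_YEARS = 1.57

def efficiency_gain(years: float) -> float:
    """Multiplicative gain in computations per kWh after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# Sanity check: one doubling period yields exactly a 2x gain.
print(round(efficiency_gain(1.57), 2))   # → 2.0

# Over a decade, the trend compounds to roughly an 80x improvement.
print(round(efficiency_gain(10.0), 1))
```

This compounding is why the trend matters so much for battery-powered devices: the same workload needs a small fraction of the energy only a few product generations later.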
Plenty of factors have contributed to this increase in performance per watt, just as plenty of factors have helped Moore's Law remain relevant all these years. For example, vendors have continually built more efficient power supplies and other components, such as smaller transistors that use less power.
Whether performance per CPU can grow at the historical pace isn't clear, the study notes, "but near-term improvements, such as 3D transistors, are already in the pipeline. At this juncture, continuing the historical trends in performance (or surpassing them) is dependent on significant new innovation comparable in scale to the shift from single core to multicore computing."
That sort of innovation will require changes in software design, according to the study, a point observers have raised recently in the context of green computing. The idea is (among other things) to write code that's more efficient and that takes better advantage of underlying power-management features within hardware.
Ultimately, the authors said that "achieving faster rates of improvement is within our grasp if we make efficiency a priority and focus our efforts on a holistic compute system approach, constantly revisiting the notion of what Amory Lovins of Rocky Mountain Institute calls 'clean slate, whole system redesign'"; that is, building more efficient IT devices and systems from scratch while ignoring historical constraints that are no longer relevant (if they ever were).
The new research was published in the July-September issue of IEEE Annals of the History of Computing.
This article, "Koomey's Law: Computing efficiency keeps pace with Moore's Law," was originally published at InfoWorld.com.