Chip advances leave developers in the dust

Culprit No. 1: Capitalizing on multicore requires a new mode of development

Ever since Gordon Moore's 1965 proclamation in Electronics magazine, we've come to expect processing power to double every two years. So why don't the latest CPUs seem significantly faster than those made even five years ago? Has the lease on Moore's Law run out? Or has the science of silicon left software developers in the lurch?

Part of the answer lies in a popular misconception about what Moore's Law actually says. Moore never predicted that chips would steadily gain gigahertz, only that each new generation would be capable of holding more transistors than the last. He was talking about increases in complexity, not clock speed.

Current CPUs are certainly more complex than their predecessors, but chip designers are butting heads with the laws of physics when they try to push the envelope of overall speed. More cycles per second mean more heat. Too much heat and the circuits start to break down. As a consequence, electrical engineers have had to tweak chip designs to improve the processing power of CPUs running at modest clock speeds -- in short, to create chips that do more with less.
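A rough rule of thumb makes the trade-off clear (this is the standard first-order model of CMOS switching power, not one the article cites): dynamic power ≈ C × V² × f, where C is the switched capacitance, V the supply voltage, and f the clock frequency. Because pushing f higher typically also requires raising V, heat climbs much faster than clock speed -- which is why designers stopped chasing gigahertz and started adding cores instead.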

The results are marvels of electronics, to be sure. But the new chip designs, with all their performance-enhancing tricks, are unfamiliar territory for many software developers. As Bjarne Stroustrup, designer of the C++ programming language, explained in a recent public talk, "If you look at machine architectures, especially the really advanced stuff, you'll see I'm not exaggerating when I say it's weird."

When it comes to dual-core chips, that weirdness doubles. Etching two or more complete processor cores onto a single piece of silicon offers great potential for performance gains -- at least in theory. But to realize those gains, software must be designed with multiprocessing in mind. Applications written by coders who have not been trained in parallel programming, for example, will see little benefit. What's more, developers need new tools, such as those proposed by AMD, to adapt their software properly to multicore chips. Unfortunately, very little of the software we use today measures up, including the leading operating systems.
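To make that concrete, here is a minimal sketch, written in modern C++ (std::thread, which postdates this article), of what "designed with multiprocessing in mind" means in practice. The function names and workload are illustrative: the serial version can use only one core no matter how many the chip provides, while the threaded version spreads the same work across all of them.

    // Minimal sketch: the same summation, serial vs. split across cores.
    // Names and workload are illustrative, not taken from the article.
    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Serial version: uses one core no matter how many are available.
    long long sum_serial(const std::vector<int>& data) {
        return std::accumulate(data.begin(), data.end(), 0LL);
    }

    // Parallel version: each thread sums its own slice; results are combined.
    long long sum_parallel(const std::vector<int>& data, unsigned num_threads) {
        std::vector<long long> partial(num_threads, 0);
        std::vector<std::thread> workers;
        const std::size_t chunk = data.size() / num_threads;

        for (unsigned t = 0; t < num_threads; ++t) {
            std::size_t begin = t * chunk;
            std::size_t end = (t == num_threads - 1) ? data.size() : begin + chunk;
            workers.emplace_back([&, t, begin, end] {
                partial[t] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();
        return std::accumulate(partial.begin(), partial.end(), 0LL);
    }

    int main() {
        std::vector<int> data(10000000, 1);
        unsigned cores = std::max(1u, std::thread::hardware_concurrency());

        std::cout << "serial:   " << sum_serial(data) << '\n';
        std::cout << "parallel: " << sum_parallel(data, cores)
                  << " (" << cores << " threads)\n";
    }

Code written in the serial style simply leaves the extra cores idle, which is the gap the article describes: the hardware delivers the potential, but only restructured software can collect on it.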

Computer scientists predict that quantum computing may one day render today's silicon CPUs obsolete. Even if that happens, however, the software side of the problem will remain the same. Until the developer community can fully tap the latest hardware advances, much of the promise of these new technologies will go unrealized.

