Intel CEO Paul Otellini unveiled his vision for the future at the chipmaker's annual developer conference in San Francisco this week. Not surprisingly, that vision is one of x86 -- or IA (Intel architecture), as the company prefers to call it -- everywhere. From desktop PCs to servers and even to mobile phones and televisions, if Intel has its way, soon every device will be an IA device.
Mind you, let's not kid ourselves. The near-total dominance of IA for desktop computing has been a market reality for some time now. Remember Transmeta? The whole point of its revolutionary "code morphing" technology was to allow its unique, low-power chip designs to execute IA instructions seamlessly -- because any new chip design that couldn't run Windows would be a nonstarter.
The question is, do we really want Intel to extend its dominance even further, to include phone handsets and other consumer electronics? A world where every CPU speaks Intel's language would be a boon to Intel, certainly. But whether such a processor monoculture would benefit customers is an entirely different matter.
Sort of like Java, only not as good
The crux of Intel's plan is to offer a range of IA chip designs, each tailored for a different category of devices. The promise that developers can "write once, run anywhere" has been a major selling point of the Java platform; Intel's position is that it can deliver on that same promise, only without the middleman. Any IA chip can execute binaries compiled for the x86 instruction set, just as any JVM on any OS can execute Java bytecode. But unlike the JVM, binaries for the IA platform run on the bare CPU, with no performance degradation and no need for a virtual machine or a JIT (just-in-time) compiler.
But a processor monoculture is actually a poor substitute for Java's bytecode-based approach. The Java runtime environment offers significant advantages that simply aren't available on Intel's hardware. For example, the JRE analyzes and validates bytecodes before it executes them, significantly reducing the likelihood of buffer overflows and other potential security vulnerabilities. While Intel has added security features to its newer chip designs, these don't compare to the sandboxed security of the JRE (or Microsoft's managed code environment in .Net).
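To make that distinction concrete, here's a minimal Java sketch (my illustration, not anything from Intel or Sun) of the kind of runtime check a managed environment enforces: an out-of-bounds write is trapped as an exception instead of silently scribbling over adjacent memory, as an unchecked native binary could.

```java
// Illustration: the JVM's runtime bounds checking turns a would-be buffer
// overflow into a recoverable exception rather than silent memory corruption.
public class BoundsCheckDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[16];
        try {
            // In unchecked native code, writing past the end of a 16-byte
            // buffer could overwrite adjacent memory -- the classic overflow.
            for (int i = 0; i <= buffer.length; i++) {
                buffer[i] = (byte) i;
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            // The JVM rejects the out-of-bounds write at runtime.
            System.out.println("Blocked out-of-bounds write: " + e.getMessage());
        }
    }
}
```

Native x86 binaries get no such safety net from the hardware alone; that protection has to come from the language runtime or the operating system.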
Moreover, how much does binary compatibility across PCs and consumer electronics devices really buy you? Even as smartphones increasingly resemble handheld PCs, it hardly seems likely that developers will be reusing significant amounts of desktop application code in apps for handsets, given the differences in input methods, screen sizes, and so on.
And even if developers want to use the same code on a handset as on a PC, porting code between processor architectures is hardly an unbearable burden. Witness the iPhone SDK; iPhone handsets run on a different processor architecture than Mac OS X desktops do, yet the Cocoa application APIs are consistent across both platforms -- and that's what really matters.
Where have all the processors gone?
Still, a single, industry-wide de facto CPU architecture could have some advantages. Perhaps the most significant benefit would be to allow compiler designers to concentrate their efforts on optimizing code for IA, and Intel's multicore chip designs in particular. Optimizing code for parallel processors remains one of the greatest challenges facing the industry, and Intel has developed some interesting tools in this area.
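For a sense of what that challenge looks like from the developer's side, here's a minimal Java sketch that hand-partitions a simple summation across worker threads -- exactly the kind of bookkeeping that better compilers and tools for multicore chips aim to take off the programmer's plate. (The partitioning scheme here is my own illustration, not any particular Intel tool.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sums a large array by splitting it into per-core chunks, each summed on
// its own worker thread -- manual parallelization of a trivially parallel loop.
public class ParallelSum {
    public static void main(String[] args) throws Exception {
        final int[] data = new int[10000000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i % 100;
        }

        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Long>> partials = new ArrayList<Future<Long>>();

        int chunk = (data.length + cores - 1) / cores;
        for (int start = 0; start < data.length; start += chunk) {
            final int lo = start;
            final int hi = Math.min(start + chunk, data.length);
            partials.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    for (int i = lo; i < hi; i++) {
                        sum += data[i];   // each worker sums its own slice
                    }
                    return sum;
                }
            }));
        }

        long total = 0;
        for (Future<Long> f : partials) {
            total += f.get();   // wait for each chunk and combine the results
        }
        pool.shutdown();
        System.out.println("Sum: " + total);
    }
}
```

Chunking, scheduling, and combining results by hand like this is tedious and easy to get wrong, which is why so much of the industry's optimization effort is aimed at automating it.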
Standardizing on IA opens the door to other intriguing ideas, as well. For example, Google Native Client allows native x86 binaries to run inside a browser window, eliminating the need for proprietary plug-ins such as Flash. And Google's Courgette project introduces a new way to deliver software patches efficiently by partially disassembling IA binaries.
But do such concepts really make up for the lack of innovation that would inevitably occur in a world where Intel's architecture dominates all of computing? Who's to say that Intel's road map is the only way forward? Consider ARM, for instance. Once confined to low-power devices like phones, calculators, and MP3 players, ARM is slowly breaking out of its niche, and it's expected to debut in netbooks later this year. With a 2GHz version of the chip on the way, ARM may be able to give Intel's Atom line a run for its money.
I think we need more of that kind of competition in the chip business, not less. Intel's massive success is what has allowed it to invest so heavily in R&D, but that success owes as much to Intel's collusion with Microsoft in the Wintel duopoly as it does to actual innovation. In short, the software industry doesn't need an industry-wide CPU architecture monopoly. So the burden is still on Intel to tell us why we should want one -- and I haven't heard a good answer yet.