2. Solid-state drives
Solid-state storage devices -- both RAM-based and NAND (Not And) flash-based -- have held promise as worthwhile alternatives to conventional disk drives for some time, despite the healthy dose of skepticism they inspire. Neither technology is new, but its integration into IT will happen only when it fulfills its potential and goes mainstream.
Volatility and cost have been the Achilles' heel of external RAM-based devices for the past decade. Most come equipped with standard DIMMs, batteries, and possibly hard drives, all connected to a SCSI bus. And the more advanced models can run without power long enough to move data residing on the RAM to the internal disks, ensuring nothing is lost. Extremely expensive, the devices promise speed advantages that, until recently, were losing ground to faster SCSI and SAS drives. Recent advances, however, suggest RAM-based storage devices may pay off eventually.
As for flash-based solid-state devices, early problems -- such as slow write speeds and a finite number of writes per sector -- persist. Advances in flash technology, though, have reduced these negatives. NAND-based devices are now being introduced in sizes that make them feasible for use in high-end laptops and, presumably, servers. Samsung's latest offerings include 32GB and 64GB SSDs (solid-state disks) with IDE and SATA interfaces. At $1,800 for the 32GB version, they're certainly not cheap, but as volume increases, pricing will come down. These drives aren't nearly the speed demons their RAM-based counterparts are, but their read latency is significantly lower than that of standard hard drives.
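The "finite number of writes per sector" problem mentioned above is typically mitigated by wear leveling: the drive's controller spreads erase/write cycles evenly across all blocks so no single block wears out early. A minimal sketch of the idea (the class, block count, and method names here are illustrative, not any vendor's firmware):

```python
class WearLeveler:
    """Toy wear-leveling allocator: always write to the least-erased block."""

    def __init__(self, num_blocks):
        # Track how many erase cycles each flash block has endured.
        self.erase_counts = [0] * num_blocks

    def pick_block(self):
        # Choose the block with the fewest erases so wear spreads evenly.
        return min(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)

    def write(self, data):
        block = self.pick_block()
        self.erase_counts[block] += 1  # each rewrite costs one erase cycle
        return block

wl = WearLeveler(4)
for i in range(8):
    wl.write(b"payload")
# After 8 writes over 4 blocks, each block has been erased exactly twice.
```

Real controllers also remap logical sectors to physical blocks and retire failing blocks, but the core principle is the same: no block should accumulate dramatically more erases than its neighbors.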
The state of the solid-state art may not be ready for widespread enterprise adoption yet, but it's certainly closer than skeptics think.
-- Paul Venezia
3. Autonomic computing
A datacenter with a mind of its own -- or more accurately, a brain stem of its own that would regulate the datacenter equivalents of heart rate, body temperature, and so on. That's the wacky notion IBM proposed when it unveiled its autonomic computing initiative in 2001.
Of the initiative's four pillars, which included self-configuration, self-optimization, and self-protection, it was self-healing -- the idea that hardware or software could detect and fix its own problems -- that created the most buzz. The idea was that IBM would sprinkle autonomic-computing fairy dust on a host of products, which would then work together to reduce maintenance costs and optimize datacenter utilization without human intervention.
Ask IBM today, and it will hotly deny that autonomic computing is dead. Instead it will point to this product enhancement (DB2, WebSphere, Tivoli) or that standard (Web Services Distributed Management, IT Service Management). But look closely, and you'll note that products such as IBM's Log and Trace Analyzer have been grandfathered in. How autonomic is that?
The fact is that virtualization has stolen much of the initiative's value-prop thunder: namely, resource optimization and efficient virtual server management. True, that still involves humans. But would any enterprise really want a datacenter with reptilian rule over itself?
-- Eric Knorr