2006 technology in the crosshairs

Thrills! Chills! Or just plain baloney? My hardware-centric prognostications for the new year

What would Ahead of the Curve be without some journalistically irresponsible predictions to kick off the New Year? It’s all part of my contract with you, the reader, which, if you read the fine print, also absolves me of any accountability.

This time around, I’d like to focus on my favorite obsessions: HPC (high-performance computing) and virtualization. I believe that together, these two technologies put us right on the cusp of what I call the HPC datacenter.

To see where we are headed, you have to understand where we are today. Increasingly, we have racks of PCs where monolithic servers once stood, with each machine fulfilling a permanently assigned purpose. That’s not a problem when you’re doling out two-way servers with 4GB of RAM and 72GB of local hard drive space. You can afford to shrug off whatever cycles go to waste, and when a given task needs more resources, you buy another box and stuff it into the rack. It’s cheaper and easier than figuring out how to match every CPU second, kilobyte, and gigabit per second to each task.

We know that the standard 2-way PC server will be dead in late 2006 or early 2007, but think about what will stand in its place: 1U and 2U servers with four times as many 64-bit processor cores, room for 16GB to 64GB of memory, networking that is faster than today’s by at least an order of magnitude, terabyte hard drives that cost what 250GB drives cost today, and integrated, standards-based service processors that stay online even when systems are powered down. You are now leaving the state of Kansas.

What are you going to do with all that capacity? The short answer is “real time” -- but it will take me a year to lay out the long version of that answer. For now, take my word for it: You’ll want to embrace dense computing clusters, and you’ll have two choices for managing that capacity: waste it or virtualize it. If you do the latter, every CPU and gigabyte of storage in your enterprise, along with the CPUs and gigabytes that you rent or share with partners, becomes part of a pool into which you dip as needs change, not by the quarter year but by the quarter hour.

I realize this area is going to provoke a feeding frenzy for massive consultancies, hauling opaque enterprise management solutions or services through your door. That’s not virtualization, though; that’s capitulation.

But let’s not allow my HPC fixation to obscure the other visions floating in my crystal ball. Try these on for size: Multimedia messaging will overtake plain text. Professionals will expect their mobile devices to play back standards-based, rich e-mail and downloadable, packaged, interactive content. Of course, cellular data services will have to keep up, and they’ll finally face some competition from other forms of wireless networks.

We will witness the dawn of the disposable computer, if not literally, then practically. And, due to buyer demand, equipment makers will compete more than ever on the basis of environmental factors such as power consumption, heat, noise, and eliminating the use of hazardous materials.

The lovely thing about technology -- such as HPC handed down from the world of academia and science, or multimedia passed up to you from the consumer level -- is that over time, fewer people with fewer specific skills are needed to deploy and maintain it. Standardization, commoditization, and self-determination are on my short list of watch phrases for the coming year. Stop by next week and I’ll cast some more runes for you.
