As we continue to move wholesale into a world where virtual servers are the rule, we're starting to see just how different this new environment is. Server farms are evolving in unexpected ways, creating situations we didn't encounter prior to the widespread adoption of virtualization. One of these oddities is the seemingly eternal server. How do you manage the lifecycle of a machine that never dies?
Back before we spun up VMs on a whim to handle whatever application or platform we needed, every deployment was painstaking and time-consuming. These servers were carefully built by installing the OS from the ground up, tweaking the BIOS, installing drivers, and layering the applications or frameworks over all of the above. We would back up that server to tape and hope it would reach hardware obsolescence before it broke down.
In either case, the server that replaced this physical server would almost certainly be different, and the notion of restoring the bare-metal backup on a new physical server often meant more work than just starting fresh on the new hardware. This was especially true for Windows servers. Starting anew was a good way to clear out the cruft of years of operation and begin again with a blank slate.
In the world of server virtualization, the day for the organic refresh never arrives. Virtual servers don't break down. They don't become obsolete. They simply keep going while the physical hardware cycles underneath them throughout their existence. In fact, the only reason to rebuild on a new VM is if the OS vendor has stopped supporting that version and there are no more security updates to be had. Even then, you'll find a great many instances where that VM will continue to run forever -- or until it becomes compromised.
The island of misfit servers
Looking through a collection of VMs on a midsize virtualization farm built five or so years ago, we find a wide variety of operating systems, whether we like it or not. There's a bunch of Windows Server 2003 boxes hanging around, some Windows Server 2008 systems, a plethora of Linux boxes of vastly different lineages, and entire development frameworks sitting mostly idle but required for update testing. More than a few Windows XP systems are sitting there for various reasons, and even one or two Windows NT boxes support a long-deceased application that somehow hasn't been phased out.
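Getting a handle on this kind of sprawl starts with a simple audit: count what's running and flag anything past its support window. Here's a minimal sketch of what that might look like -- the inventory list, VM names, and the set of unsupported OS versions are all hypothetical; in practice you'd pull this data from your hypervisor's API or an export rather than a hard-coded list.

```python
from collections import Counter

# Hypothetical inventory -- in reality this would come from your
# virtualization platform's API or a CSV export, not a literal list.
inventory = [
    {"name": "web01",     "guest_os": "Windows Server 2008"},
    {"name": "app02",     "guest_os": "Windows Server 2003"},
    {"name": "db01",      "guest_os": "Windows Server 2003"},
    {"name": "dev03",     "guest_os": "Ubuntu 8.04"},
    {"name": "nt-legacy", "guest_os": "Windows NT 4.0"},
]

# OS versions past vendor support (illustrative, not exhaustive).
UNSUPPORTED = {"Windows Server 2003", "Windows NT 4.0", "Ubuntu 8.04"}

def audit(vms):
    """Count guests per OS version and flag VMs on unsupported versions."""
    counts = Counter(vm["guest_os"] for vm in vms)
    flagged = sorted(vm["name"] for vm in vms if vm["guest_os"] in UNSUPPORTED)
    return counts, flagged

counts, flagged = audit(inventory)
```

Even a rough report like this makes the entropy visible: once the flagged list is on a dashboard, the "if it ain't broke" servers at least stop being invisible.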
How does this happen, you ask? Unless there's an extremely strict (and likely impossible) corporate policy on the maintenance and update of various OS versions, virtual server farm entropy is inevitable -- if it ain't broke, after all. When a new version of your chosen Linux distribution is released, do you immediately purchase upgrade licenses and go through each box, disturbing applications and services that would otherwise continue to run problem-free forever? When Windows Server 2012 is released, how long will it take you to properly test and confirm compatibility with all the applications humming along on Windows Server 2003 R2 or 2008?