Getting sentimental over hardware

The cliché 'if it works, don't fix it' doesn't necessarily apply to servers and switches, as a monster test lab rebuild illustrates

I've embarked on a task that I've been dreading for quite some time now: I'm completely restructuring my lab.

While this may seem like a little thing to the uninitiated, the reworking of a functioning computer test lab results in major upheaval. Granted, the casual observer might mistake the current condition of my lab for upheaval, with its mountains of server shipping boxes and its ridiculous Ethernet spaghetti behind the racks, but I'm talking about real disruption.


It'll take me a month of swearing and being generally pissed off before I'm done, followed by many more months of the same when I can't find parts that I used to be able to locate immediately or realize I have to spend an hour rebuilding some infrastructure that used to exist before I tore it down.

But the upshot is that even though many lab infrastructure components have been virtualized for years and years, the hardware running those VMs is being retired. That means I'll suddenly have gobs of free rack space, faster VMs, and a much lower power and cooling bill. That alone makes the whole endeavor worth it.

One box headed for retirement is a Dell PowerEdge 2800 that's been running a dozen VMs for at least six years now. When it was new, it was the very model of a modern major server: a massive black box weighing about a ton, with eight hot-swap U320 disk bays, two single-core 3.6GHz Intel Xeon CPUs, and 8GB of RAM. It has performed beautifully, running without complaint the entire time. Some of the VMs on that box have 900-day uptimes. It's been stuck at VMware ESX 3.5 for years because I never felt the need to upgrade it.

In short, it's been exactly what it should have been -- a stable, reliable cornerstone for lab resources (the VMs on that box are an array of domain controllers, PXE boot hosts, DNS and DHCP servers, and other static lab services). Its reward for this service? It gets replaced by a tiny little box running two six-core AMD Opteron 4000-series CPUs that will not only provide a performance bump (albeit at 2.2GHz per core), but also sips power and takes up about a tenth of the physical space.

I do consolidation work all the time, but reworking the lab has really impressed on me how far IT has come in the past five years. Every aspect of the data center (or lab, in this case) is shrinking, while providing bigger, better, faster, and more. Where I used to run an old SNAP Server 18000 with eight 250GB SATA drives to store the terabytes of ISO images, local Linux distro update repositories, and the rest of my bag of tricks, a little Synology DS410 box with four 2TB SATA drives can handle that job and more -- and it's smaller than a breadbox. The six-port gigabit modules in the Cisco 4506 look downright silly when compared to just about any modern gigabit switch or gigabit blade. These units were all at the top of the heap back in their day.

Naturally, the lab is full of cutting-edge hardware, but those are the units under test. They get built up and torn down constantly to support different tests and cannot be used for any type of permanent or semi-permanent function. When I need to run tests using iSCSI storage, the EqualLogic array can't do anything else; otherwise, the test results would be skewed -- it gets used for a single test at a time. The same goes for the big Opteron, Westmere, and Nehalem-EX servers, so there's plenty of fresh, new top-of-the-line gear ... being supported by hardware many generations behind.

As with just about any vocation, it's different when you apply your core skills to your own projects. I can completely dismantle a corporate infrastructure and rebuild it to support more services at less cost with new gear and virtualization technologies without thinking twice -- but the very thought of putting that PowerEdge 2800 out to pasture somehow affects me in a different way.

That's actually a very good argument for bringing in a consultant for large infrastructure overhauls, even if they're just there to observe and comment. Those who work with the gear every day can develop blinders when it comes to critical components that have proven their worth over the years, but aren't cost effective to run anymore. If it ain't broke, don't fix it... but then again I've always been a fan of firing hardware before it quits.

And so I'm off to rip through vast quantities of Cat5e patch cables, spare U320 SCSI hard drives, and the other detritus that gathers over the years in a busy computer lab. At the end of it all, I'll look around and realize that the refresh was a great idea, and it will be many years before I need to do it again. Then I'll promptly discover I can't find the Neterion 10G card that was right over there last week and get pissed off with no one to blame but myself.

So it goes.

This story, "Getting sentimental over hardware," was originally published on Paul Venezia's The Deep End blog.

Copyright © 2011 IDG Communications, Inc.
