How hard does your network infrastructure work?

Are your servers and network gear hard at work, or hardly working? You might be surprised to find out the truth

I'm a consultant, so part of my job is to sit through meetings that hash over hardware specifications for IT projects -- most often network equipment and servers, and sometimes load-balancing and security gear, too.

At every meeting, there's always one guy who insists that the only thing that will do the job is the Fists of God platform, replete with quad-socket boxes, a terabyte of RAM in each, 10G switching, 8Gb Fibre Channel, and so on. If salespeople are in attendance, they'll be nodding at a furious rate, encouraging this extravagant thinking with ego-boosting factoids they cribbed from some overheated marketing presentation.


And the reality is that the project in question would probably do fine with just 25 percent of that gear.

Being gearheads, computer geeks commonly fall into the trap of wanting the latest and greatest hardware to run whatever they're doing. Couple that with the fear of delivering an underperforming solution, and that's how you wind up with $12,000 boxes serving nothing but DHCP. That mindset keeps part of the IT hardware market turning, and there are certainly applications for 48-core servers, but in general, the hardware you purchase far outstrips what the services it's tasked with actually require.

In most corporate IT shops, there's no simple way to forecast the actual requirements of a new application or service, but in most cases there are suitable analogues to examine that can greatly reduce the capital expenditures of a new project while sacrificing nothing. This is especially true in the virtualized server world, where resources can be added down the road without too much pain.

A good place to look for metrics is at the network core. If you're not already collecting data on how hard your network is working, stop reading this right now and start collecting that information. There's no excuse to have those particular blinders on at this stage of the game.
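
If nothing is in place yet, even a quick script can start building that history while you evaluate a real monitoring package. Here's a minimal sketch, assuming a Linux box with the standard /proc/net/dev counters and an example interface name, that samples a switch-facing port and prints throughput you can log and graph later:

    #!/usr/bin/env python
    # Minimal sketch: sample per-interface byte counters from /proc/net/dev
    # (Linux) and print throughput in Mbps. Interface name and interval are
    # examples -- point it at whatever faces your core switch.
    import time

    def read_bytes(iface):
        """Return (rx_bytes, tx_bytes) for one interface from /proc/net/dev."""
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])
        raise ValueError("interface %s not found" % iface)

    def sample(iface="eth0", interval=10):
        rx1, tx1 = read_bytes(iface)
        time.sleep(interval)
        rx2, tx2 = read_bytes(iface)
        mbps = lambda delta: (delta * 8) / (interval * 1e6)
        print("%s: in %.2f Mbps, out %.2f Mbps"
              % (iface, mbps(rx2 - rx1), mbps(tx2 - tx1)))

    if __name__ == "__main__":
        while True:          # redirect the output to a file to build history
            sample("eth0", 10)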

After you collect a few weeks' worth of data on what your network really pushes, you'll probably be surprised to find that the cores aren't really doing that much. If it's ancient 100Mb Cabletron gear, then you already know you have problems. But if it's modern stuff, you'll probably find that they're ticking over at idle, and maybe blinking slowly during the backup window. If you're using iSCSI for a virtualized infrastructure, you've probably already found whatever gremlins existed in the dedicated network gear you're using for that and are running without problems there.

Start peeling back the onion and looking at the services you provide. Databases can be worked pretty hard depending on the user load and the quality of the front-end code; storage can be beaten down by poorly written software that attacks it with tons of small reads and writes. But you may also find that the services you thought handled the heaviest loads actually run in the middle of the pack or even further behind. Meanwhile, a legacy application is kicking the crap out of a mostly forgotten four-year-old server.
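
One way to spot that kind of abuse is to look at how large the average I/O request actually is. Below is a minimal sketch, again assuming Linux and its /proc/diskstats counters with an example device name, that reports IOPS and average request size over an interval; a busy disk averaging only a few kilobytes per request is a decent hint that something is hammering it with small reads and writes:

    #!/usr/bin/env python
    # Minimal sketch: watch /proc/diskstats (Linux) for one device and report
    # IOPS and average request size. Small average requests on a busy disk
    # usually mean chatty, poorly batched I/O. Device name is an example.
    import time

    def read_stats(device):
        """Return (reads, sectors_read, writes, sectors_written) for a device."""
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    return (int(fields[3]), int(fields[5]),
                            int(fields[7]), int(fields[9]))
        raise ValueError("device %s not found" % device)

    def sample(device="sda", interval=10, sector_bytes=512):
        r1, sr1, w1, sw1 = read_stats(device)
        time.sleep(interval)
        r2, sr2, w2, sw2 = read_stats(device)
        ops = (r2 - r1) + (w2 - w1)
        data = ((sr2 - sr1) + (sw2 - sw1)) * sector_bytes
        avg_kb = data / 1024.0 / ops if ops else 0.0
        print("%s: %.1f IOPS, avg request %.1f KB"
              % (device, ops / float(interval), avg_kb))

    if __name__ == "__main__":
        sample("sda", 10)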

Naturally, all of this boils down to proper and excruciatingly thorough network monitoring. If this doesn't exist and isn't used in every purchasing decision IT makes, then you're doing your budget -- and your organization -- a disservice.

Servers are meant to be worked. Load averages of 1.00 on Linux boxes are not bad things. I don't know how many times I've found folks who think their Linux servers are overtasked because they sustain a 1-minute load average of 0.75. Remember that load average scales with core count: a quad-core box isn't saturated until the load approaches 4.00. So if we're talking about a quad-core box in just about any socket permutation, a sustained CPU load of 3.00 is when I'd start to think about a hardware upgrade. Anything less than that -- excluding load derived from waiting on storage and network -- and all I'd see is a server that's doing its job with some room to grow. And with the prevalence of virtualization, you can easily add CPU and RAM resources to a virtual machine later on.
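
If you want to sanity-check that math on your own boxes, here's a minimal sketch, assuming a Unix-like system where Python's os.getloadavg() is available, that compares sustained load to core count; the 0.75-per-core threshold is just an illustrative rule of thumb, not a hard limit:

    #!/usr/bin/env python
    # Minimal sketch: compare load averages to core count and only flag the
    # box when sustained load crosses a per-core threshold. The 0.75 figure
    # is an illustrative rule of thumb, not a hard limit.
    import os

    def load_check(threshold_per_core=0.75):
        cores = os.cpu_count() or 1           # logical CPUs visible to the OS
        one, five, fifteen = os.getloadavg()  # 1-, 5-, and 15-minute averages
        limit = cores * threshold_per_core
        status = "worth a look" if five > limit else "doing its job"
        print("cores=%d load=%.2f/%.2f/%.2f limit=%.2f -> %s"
              % (cores, one, five, fifteen, limit, status))

    if __name__ == "__main__":
        load_check()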

So rather than buying your sales guy a bigger boat, focus on the reality of what your plan really needs based on what you see from your existing infrastructure. Map that out, and then maybe bump the specs up a notch or two to assuage the nagging nabobs of negativity that will undoubtedly try to condemn this stance. You'll find that you can spend less in a variety of ways, from up-front cost to power and cooling, without sacrificing performance.

That's when you negotiate a raise based on overall IT cost savings -- so you can buy yourself a bigger boat.

