Every IT shop has its own way of doing things, but one thread ties them together: They all feel as though they have more work to do than can possibly be accomplished in the allotted time. Experts seem to think the economy is thawing out a bit, and some IT departments are starting to increase headcount, but in most cases staffing is only now approaching levels it should have reached years ago, and it still falls short of current needs.
Much of what I write about here on InfoWorld concerns infrastructure management tasks that I consistently see left behind in the wreckage of ambitious application rollout schedules. Implementation timetables may be met, but the price of skimping on good, old-fashioned infrastructure management and planning often gets paid later -- in the form of infrastructure instability, avoidable human error, and unnecessary trips to the corner office for capital or contract labor.
If you're responsible for enterprise IT infrastructure, take a look at this list and see how you stack up. If you're on top of everything, count yourself extremely lucky, but if you aren't, don't feel too bad. You're in good company.
Capacity planning
Anyone who must submit a budget for the next fiscal year has to do at least some kind of technology planning. You can't know how much capital to ask for if you don't know what you'll need to buy. But how often is your budgeting accurate? Do you know what your primary storage infrastructure will look like in two years? Three? Do you know when your backup infrastructure may need more resources? How about when you'll need to expand your virtualization cluster? Have you recently bought a piece of hardware or software you had to replace sooner than you expected?
There's also a lot you can do ahead of time to make an unplanned outage easier to navigate -- such as building a dedicated management network that lets you troubleshoot even when the production network is misbehaving.
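The kind of projection those budgeting questions call for doesn't need to be elaborate. Below is a minimal sketch of a compound-growth capacity forecast; the figures (10TB in use, 15TB purchased, 20 percent annual growth) are hypothetical placeholders, not recommendations:

```python
def project_usage(used_tb, annual_growth, years):
    """Projected usage after a number of years of compound growth."""
    return used_tb * (1 + annual_growth) ** years

def years_until_full(used_tb, capacity_tb, annual_growth):
    """Whole years until projected usage first exceeds purchased capacity."""
    years = 0
    while used_tb <= capacity_tb:
        used_tb *= 1 + annual_growth
        years += 1
    return years

# Hypothetical numbers: 10TB in use today, 15TB on the floor, 20% yearly growth.
print(project_usage(10, 0.20, 2))        # usage two years out
print(years_until_full(10, 15, 0.20))    # year in which you outgrow the array
```

Even a back-of-the-envelope model like this, refreshed each quarter with real utilization numbers, answers the "when will I need more?" question far better than guessing at budget time.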
Patching and upgrades
Everyone knows that patching servers (especially Windows systems) is critical. The same awareness doesn't extend to other network-attached hardware -- SANs, network devices, and built-in management controllers -- some of which languish for years between patches. Security vulnerabilities in network and storage devices rarely get attention in the press, but that doesn't mean hackers are unaware of them -- so patch away!
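One low-effort way to keep those forgotten devices from languishing is to track when each was last patched and flag the stale ones. Here's a minimal sketch; the inventory, device names, and dates are hypothetical (a real list would come from your CMDB or monitoring system):

```python
from datetime import date

# Hypothetical inventory: device name -> date its firmware was last patched.
LAST_PATCHED = {
    "core-switch-1": date(2011, 3, 1),
    "san-controller-a": date(2008, 6, 15),
    "ilo-mgmt-12": date(2009, 1, 10),
}

def stale_devices(inventory, today, max_age_days=365):
    """Return device names whose last patch is older than max_age_days."""
    return sorted(
        name for name, patched in inventory.items()
        if (today - patched).days > max_age_days
    )

print(stale_devices(LAST_PATCHED, date(2011, 6, 1)))
```

Run something like this monthly and the SAN controller that hasn't seen a firmware update in three years stops hiding in plain sight.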
Internal security
Almost every infrastructure has some sort of edge security. But what about internal security? Have you segmented your internal network such that core infrastructure services and devices (virtualization hosts, SANs, and so on) are unavailable to users who don't need access to them? If not, you leave open the possibility that a low-impact security lapse could mushroom into a serious risk to the entire infrastructure.
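Auditing that segmentation can itself be automated. The sketch below models ACL-style rules (first match wins, default deny) and checks whether any user subnet can reach a management subnet; every address, subnet, and rule here is a hypothetical example, not a recommended layout:

```python
import ipaddress

# Hypothetical management subnet housing virtualization hosts, SANs, etc.
MGMT_NET = ipaddress.ip_network("10.99.0.0/24")

# Hypothetical rules: (source network, destination network, action).
RULES = [
    ("10.10.0.0/16", "10.99.0.0/24", "deny"),   # user LAN -> management: blocked
    ("10.20.0.0/16", "0.0.0.0/0", "permit"),    # a wide-open rule -- the audit catches it
]

def can_reach_mgmt(src_net):
    """First matching rule wins, as in a typical ACL; unmatched traffic is denied."""
    src = ipaddress.ip_network(src_net)
    for rule_src, rule_dst, action in RULES:
        if (src.subnet_of(ipaddress.ip_network(rule_src))
                and MGMT_NET.subnet_of(ipaddress.ip_network(rule_dst))):
            return action == "permit"
    return False

print(can_reach_mgmt("10.10.5.0/24"))   # properly segmented user subnet
print(can_reach_mgmt("10.20.5.0/24"))   # exposed by the wide-open rule
```

Of course, the authoritative check is the configuration on your actual firewalls and switches -- but modeling the intended policy this way makes it easy to spot the one overly broad rule that quietly undoes your segmentation.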
Documentation and cross-training
Documentation is a dirty word. But having good, usable documentation can save your bacon, especially if you work in a team where responsibilities are divided among different individuals. If disaster strikes when the "network guy" is on vacation, can the remaining team members find the information they need to solve the problem? I've seen lapses as simple as a mislabeled (or unlabeled) cable cause hours of troubleshooting that could have been avoided. Even a little bit of documentation can go a long way toward preventing self-inflicted wounds.
Professional development
Of all the things that fall off the back of the truck when things get busy, professional development is often first to go -- when it should be one of the last. No one who's been in IT for very long needs to be told how quickly things change and how much work it takes to stay on top of what you've already deployed, much less what you might consider adding in the near future.