In IT, there are two schools of thought on when to replace aging equipment. Most organizations depreciate IT assets over a fixed period of time and replace them when the clock ticks down. But many other organizations, especially service providers, happily ride old, gray-market IT gear until it breaks.
As with many conflicting ideologies in IT, pros on both sides of this divide tend to deride those on the other -- with accusations of running a junky, unreliable network or overspending massively flying back and forth. However, as with most seemingly black-and-white issues, both sides can learn a lot from each other.
Replacing equipment before its time is really expensive
A seeming lifetime ago, I worked for a regional ISP, then founded a Web-hosting company. In both cases, we built our own servers (before eBay was a good source for used gear), often recycling the same system case for several generations, with taped-over labels documenting the system's provenance. We also relied on used, previous-generation network gear. Without lots of investment capital, we had to work this way -- every dollar spent on new gear had to be supported by absolute necessity and monthly income.
The obvious cost of doing business in this manner was that things could (and did) die unexpectedly -- usually without a warranty to fall back on and, thus, no quick replacements. As insurance, we kept equally cheap spare gear configured on the shelf or built the systems to be mutually redundant; the failure of one system and its replacement with another became a relatively frequent and almost expected event. We could live with this because, in a service provider environment with an almost exclusively IT-oriented staff, almost everyone was skilled enough to quickly execute these infrastructural changes.