One aspect of IT that has always intrigued and delighted me is that it's always changing. There are always new technologies, new standards, new languages, new methodologies, new everything. IT tends to completely reinvent itself at least twice a decade, and even the so-called minor changes can be extremely significant.
It's the shift from Wi-Fi as a novelty to Wi-Fi as a basic requirement of most networks. It's building a data center from scratch, then completely rebuilding it (at least logically) within a few years. We strive for and deliver stability in an inherently unstable environment. We constantly feel the thrill of the chase, the push for newer, bigger, faster, more.
But while we research and embrace new ways of handling business problems and challenges, we tend to establish a baseline for ourselves somewhere along the way. For instance, an IT pro who was extremely well-versed in how to build and maintain physical data centers is now likely to be extremely well-versed in building and maintaining virtual data centers. It's a professional necessity to move with the times.
However, that same IT pro might have developed a set of skills during their formative years that have become ingrained and reflexive. They've encountered the same basic problem so often that they have almost involuntary reactions to it and let their animal brain take over to fix the problem, even if there are newer and better ways to achieve the same goal.
I'd say this tendency is more visible and significant on the Unix side of the house than in the Windows, storage, or networking domains, probably because Unix-like operating systems have an extremely long and stable history: the tools that were in use 20 years ago are just as relevant today as they were back then. The same cannot be said for Windows or storage or, to a lesser degree, networking.
A small example of this might be your basic Unix shell. There are myriad shell options out there, from ash to zsh, but bash seems to be the most common choice. It's a fluid, easily understood shell with a ton of support on nearly every conceivable platform. It offers a gentle learning curve to get started, yet has significant power at the other end to perform complex tasks that, frankly, should probably be done with an actual programming language rather than a shell.
Bash has evolved greatly over the years. But those who grew up using bash in its adolescence found ways to use the shell and the accompanying userland tools in a particular way. So as bash and other small tools have evolved, many admins stopped paying attention to those advances and continue using their tried-and-true methods. To be fair, those methods continue to work. But the fact remains that in some way the technology has advanced beyond their understanding, if for no other reason than they simply don't have the time, inclination, or need to evolve with it.
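To make that concrete, here's a small sketch of the kind of habits I mean: constructs from bash's early days alongside their modern equivalents. The specific commands are illustrative only, not drawn from any particular admin's toolbox.

```shell
#!/usr/bin/env bash

# Old habit: backticks for command substitution and expr for arithmetic.
# These still work today, which is exactly why the habit persists.
kernel=`uname -s`
total=`expr 2 + 3`

# Modern bash: $(...) nests cleanly, and $(( )) does arithmetic in-shell
# with no external process.
kernel=$(uname -s)
total=$((2 + 3))

# Old habit: shelling out to grep just to test for a substring.
echo "$PATH" | grep -q "/bin" && match=yes

# Modern bash: the [[ ]] keyword matches patterns without a subprocess.
[[ $PATH == */bin* ]] && match=yes

echo "$kernel $total $match"
```

Both columns produce identical results, which is the whole point: nothing forces the older reflexes to retire.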
This is not the case with Windows, primarily because Windows server management tends to funnel people through a narrow window of available operations. There's usually only a single way to achieve a certain goal. And when Windows evolves to provide a new method, the old one typically becomes inoperable, forcing a change in that reflexive response.
An easy example of this would be the method for uninstalling an application on a Windows system. Start with Windows NT and work through Windows Server 2012. You'll find that while there are some similarities, both the procedure and the requirements have changed -- dramatically in some cases. On Red Hat Linux, by contrast, various GUI wrappers have come and gone, but the underlying rpm command has been doing the job for 15 years.
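For illustration, here's roughly what that stability looks like at the command line. The package name httpd is just an example; the point is that these basic query and erase invocations have stayed the same across all those years of GUI churn.

```shell
# Query whether a single package is installed
rpm -q httpd

# List every installed package on the system
rpm -qa

# Uninstall ("erase") a package -- the same invocation admins have
# typed for years, regardless of which GUI wraps it today
rpm -e httpd
```

Whichever desktop tool sits on top, it is ultimately driving this same package database underneath.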