Last week, InfoWorld's David Linthicum posted an excellent piece on the importance of maintaining a firm grasp on IT fundamentals as you steer your career toward the cloud. In it, Linthicum argues that you can't very well expect to succeed in the cloud space without having a solid understanding of what makes traditional enterprise environments tick. He couldn't be more on the money.
However, I'd take the liberty of extending his point to a much broader trend that has emerged since server virtualization really came into its own. It used to be that a server admin building a new system would have fairly intimate knowledge of the needs of the application it would be charged with running. If the admin got it wrong, he or she might need to rebuild the system or, worse, find funds to buy additional hardware.
Today, however, nearly any workload can be supported by an easily deployed and modified virtual machine. Because the hardware and underlying data center infrastructure can be reconfigured so easily, there's far less pressure to get that configuration right the first time. Data center admins would seem to need to know less and less about the applications they run.
From their perspective, that app is just a collection of VMs running on a cluster and sitting in a data store -- do they really need to know what it does or why? That's supposed to be the promise of the cloud, right? Whether we're building a public or private cloud, infrastructure folks are supposed to take all the observed complexity out of our operations, allowing less infrastructure-oriented folks to pick cloud services from a menu. I myself have argued that that's where things are headed: Consuming infrastructure, whether it's public or private, should be simple, quick, and easy.