Last week, InfoWorld's David Linthicum posted an excellent piece on the importance of maintaining a firm grasp on IT fundamentals as you steer your career toward the cloud. In it, Linthicum argues that you can't very well expect to succeed in the cloud space without having a solid understanding of what makes traditional enterprise environments tick. He couldn't be more on the money.
However, I'd take the liberty of extending his point to what I see as a much broader trend, one that has emerged since server virtualization really came into its own. It used to be that a server admin building a new system would have fairly intimate knowledge of the needs of the application it would be charged with running. If the admin got it wrong, he or she might need to rebuild the system or, worse, find funds to buy additional hardware.
Today, however, nearly any workload can run on an easily deployed and easily modified virtual machine. The pressure to get the hardware and underlying data center infrastructure exactly right up front isn't as strong, because the configuration can be changed so easily. Data center admins would seem to need to know less and less about the applications they run.
From their perspective, that app is just a collection of VMs running on a cluster and sitting in a data store -- do they really need to know what it does or why? That's supposed to be the promise of the cloud, right? Whether we're building a public or private cloud, infrastructure folks are supposed to take all the observed complexity out of our operations and allow less infrastructure-oriented folk to pick cloud services from a menu. I myself have argued that that's where things are headed: Consuming infrastructure, whether it's public or private, should be simple, quick, and easy.
The problem, though, is that someone has to actually bridge the gap between the infrastructure and the application. Someone has to know enough about how both work so that the infrastructure is actually configured to match the application's needs. That infrastructure/application interlocutor seems to be missing in many circumstances, whether it's a traditional IT infrastructure or a cloud-based one.
Looking beyond the façade of cloud separation
To dig into the problem, let me paint a quick example. Imagine that an enterprise is considering deploying a heavily customized, mission-critical application in the organization's shiny new private cloud infrastructure.
The enterprise in question is forward-thinking, so it has deployed a cloud-management system to run its private cloud. All the application developers need to do is head to a Web-based portal where they order a slew of virtual machines. Minutes later, those virtual machines have been provisioned and are ready for the developers to start working with. Mere days later, the application is installed and the integration work begins.
Throughout that process, the developers didn't need to know anything about how the networking or storage was configured. They just had to select a few items from a menu, fill in basic information about RAM and disk sizes, and press Go. Likewise, the infrastructure admins who operate the cloud infrastructure didn't need to know anything about the application, how it gets installed, or even what it does. Instead, they could focus on keeping the infrastructure humming.
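To make that self-service model concrete, here's a rough sketch of what the developers' menu selections boil down to behind the scenes. The portal URL and payload schema are hypothetical stand-ins -- every cloud-management system exposes its own API -- but the shape of the request is the point: nothing in it says anything about networking, storage layout, or data protection.

```python
import requests

# Purely illustrative: the portal URL and payload fields below are
# hypothetical, standing in for whatever self-service API the
# cloud-management system actually exposes.
order = {
    "template": "rhel-app-server",
    "vcpus": 4,
    "ram_gb": 16,
    "disk_gb": 200,
    "count": 6,
}

resp = requests.post(
    "https://cloud-portal.example.internal/api/v1/vms",
    json=order,
    timeout=30,
)
resp.raise_for_status()
print("Provisioning job accepted:", resp.json())
```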
The problems lurking due to mutual ignorance
At first, this sounds perfect. Everyone's skills are used to their fullest, and no one is forced to work outside their comfort zone. However, big problems might be lurking.
For example, how much do the developers know about how the infrastructure admins have configured backups? Although it's true that backing up a virtual machine infrastructure is very easy, protecting a database-backed enterprise application requires more than making sure a backup runs at least once during every 24-hour period. Certain data consistency processes might need to be run before a backup, or perhaps backups simply need to be taken at a specific time of day. Being certain that the backups worked also takes more than making sure a restored virtual machine can be fired up -- it means making sure the application data is consistent.
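As a minimal sketch of what "more than a nightly VM backup" can mean, here's a pre-backup hook, assuming a PostgreSQL-backed application; the host, database name, and dump path are my assumptions, not anything a portal would know to set up:

```python
import datetime
import subprocess

# Hypothetical pre-backup hook for a PostgreSQL-backed application.
# Host, database name, and dump path are illustrative assumptions.
stamp = datetime.date.today().isoformat()
dump_path = f"/backups/appdb-{stamp}.dump"

# pg_dump reads inside a single transaction, so the dump is internally
# consistent even while the application keeps writing -- something a
# VM-level snapshot alone doesn't guarantee for the database.
subprocess.run(
    [
        "pg_dump",
        "--format=custom",
        "--host=db01.example.internal",
        "--dbname=appdb",
        f"--file={dump_path}",
    ],
    check=True,
)

# Verifying the backup means more than booting a restored VM: restore
# this dump into a scratch database and run the application's own
# consistency checks against it.
```

Notice that nothing here is hard: the hard part is knowing that the application needs it, which is exactly the knowledge that falls between the two teams.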
Similarly, what if the cloud infrastructure is protected by a warm site in a different city? The infrastructure admins don't need to know anything about the application to ensure the appropriate SAN volumes are replicated and network traffic can be redirected to the warm site in the event of a failure. However, would they know that ordering data will become inconsistent if this application is brought back into production with data that's a few minutes older or newer than a legacy system with which it interfaces? Would the application folks even know to tell them?
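If the two sides did talk, that knowledge could even be encoded as a simple activation gate at the warm site. Here's a sketch, assuming both systems can report a last-committed-transaction timestamp; the function name and the 30-second tolerance are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical failover gate: both timestamps are assumed to come from
# each system's own "last committed transaction" bookkeeping.
def safe_to_activate(app_high_water: datetime,
                     legacy_high_water: datetime,
                     tolerance: timedelta = timedelta(seconds=30)) -> bool:
    """Refuse to bring the warm-site copy into production if its data
    has drifted more than `tolerance` from the legacy system it feeds,
    since that drift is what makes the order data inconsistent."""
    return abs(app_high_water - legacy_high_water) <= tolerance
```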
This example involved a private cloud operated by an internal IT department, but the problem exists (and is perhaps even worse) when a public cloud infrastructure is involved. Instead of merely being ignorant of how backups and replication have been configured, the application folks may now find themselves responsible for configuring those protections in the first place, whether they realize it or not. After all, very few cloud service providers will prebake that stuff for you. Why? Because they don't know what your applications require, and they won't needlessly make their services appear more expensive by dedicating failover or backup resources that might go unused.
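In AWS terms, for instance, even a basic block-level snapshot happens only if the application team schedules it. A minimal sketch using the boto3 SDK -- the region and volume ID are assumptions for illustration:

```python
import boto3

# In a public cloud, data protection is the customer's job: this
# snapshot exists only because the application team scheduled it.
# Region and volume ID are illustrative assumptions.
ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly application-data snapshot, scheduled by the app team",
)
print("Snapshot started:", snapshot["SnapshotId"])
```

And as with the private cloud example, the snapshot alone says nothing about application-level consistency; that judgment still has to come from someone who understands both layers.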
At the end of the day, I don't think infrastructure and cloud admins can afford to know nothing about the applications their infrastructures support. They might not need to know how to code new additions, but they should at least have a general idea of how the applications are used, how data moves within them, and how sensitive they are to being restored and replicated in various ways.
Likewise, those charged with the care and feeding of the applications need to know a lot more about the capabilities of the infrastructures they run on -- an understanding that's nothing short of mandatory if a public cloud infrastructure like Amazon Web Services is being used.
Ultimately, you need IT pros who are versed in both the infrastructure (whether it be public cloud, private cloud, or traditional) and the applications that run on it. Certainly, their depth and specializations can be rooted in one world or the other, but it's too dangerous to remain completely ignorant of the other side -- everyone needs at least a general idea of how things fit together in the big picture.
This article, "In a cloud world, developers and admins can't ignore each other," originally appeared at InfoWorld.com. Read more of Matt Prigge's Information Overload blog and follow the latest developments in storage at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.