Two key reasons deploying to the cloud is different

Visualizing a cloud deployment as you would a traditional infrastructure often sheds light on areas you may not have considered

When most people look at migrating to the cloud, they're primarily concerned with cost and performance. As much as we'd all like to be able to focus on critically important aspects of cloud computing such as security, availability, and data governance, the questions of how much it costs to run and whether it can keep up with the workload and provide the right features often steal the spotlight.

It's not hard to see why this is the case. Cost and performance are constants that, if out of line with expectations, become problems immediately, whereas considerations like security and availability rise to the top of the pile only when something goes wrong. It's also not always clear to cloud users that some aspects of operating in the cloud are still their responsibility rather than the service provider's.

The easiest way to avoid this trap and make sure that all aspects of the infrastructure are carefully considered is to visualize the cloud infrastructure as you would your on-premises infrastructure. Yes, the cloud is a fundamentally different beast than a traditional infrastructure, but it is also similar in many ways. Although it's true that the cloud service providers you've opted to use may perform many tasks you would have done on premises, you still need to know how they're doing it and make sure your -- and your stakeholders' -- expectations are adjusted appropriately.

Two areas of concern to compare
To help you do just that, I've picked two areas of concern where a cloud deployment needs deeper consideration compared to a typical on-premises infrastructure: user access and disaster recovery.

User access. It doesn't matter how amazing your applications are or how resilient the infrastructure that runs them is: if your users or customers can't access them, they might as well be down. In a traditional on-premises infrastructure, you might make sure that your campus LAN is deployed with as much redundancy as possible, that sufficient spare desktop hardware is available, and that users can work remotely if they can't reach the office.

In the cloud, you have the exact same overarching problem to consider, but with a few new twists. Not only do you still need to ensure the campus LAN is as redundant as it was (that requirement hasn't changed), but you also need to make sure your campus and remote workers can access the applications you've moved to the cloud. Most organizations rely on the open Internet for this, but those with stringent latency or substantial bandwidth requirements might opt for a direct connection to the cloud.

In almost every instance, a mission-critical cloud-deployed application will force more dollars to be spent ensuring rock-solid connectivity to that application. That might mean additional circuits for greater redundancy, higher-bandwidth circuits to satisfy increased load, better-quality circuits that have more favorable latency, or a combination of all three. You might even need to invest in extra services on the cloud side to ensure that a failure within the cloud can be handled gracefully.
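To put rough numbers on that redundancy argument, here's a minimal sketch of my own (not a formula from the article) showing how parallel circuits compound availability. It assumes circuit failures are independent, which is often optimistic -- circuits can share a conduit or an upstream carrier:

```python
def combined_availability(circuit_availabilities):
    """Availability of a set of parallel, independent circuits: the
    path is down only when every circuit is down at the same time."""
    prob_all_down = 1.0
    for availability in circuit_availabilities:
        prob_all_down *= (1.0 - availability)
    return 1.0 - prob_all_down
```

On paper, two 99 percent circuits combine to roughly 99.99 percent availability. Correlated failures are exactly why the independence assumption deserves scrutiny before you sign for a second circuit from the same carrier.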

Disaster recovery. In traditional infrastructures, you might deploy redundant servers, storage, and network gear -- even fully redundant data centers in different states -- to ensure that your applications never experience business-crippling downtime. Building in the cloud is absolutely no different in concept. The practical difference is that your ability to influence what disaster recovery capabilities are in place may be more limited and the tools you'll have at your disposal will work differently.

For example, imagine I have a mission-critical Web app deployed on premises that I want to move into the cloud. My on-premises deployment consists of a few database servers, several Web servers, and a load-balancing tier -- all of which depend on common storage and networking layers. For redundancy, I've deployed an exact replica infrastructure at another site owned by the company and done the necessary footwork to ensure that my users -- no matter which site they might work from -- will be able to reach the application if I have to fail over to my hot site. I have internally published SLAs that include recovery-time-objective and recovery-point-objective specifications, and I've ensured that my procedures and the amount of bandwidth I have between the sites are sufficient to support them. I also test my failover capabilities regularly in production to ensure they work as expected.
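That claim about bandwidth being sufficient to support the RPO can be checked with simple arithmetic. Here's a back-of-the-envelope sketch (my own, with made-up parameter names, not a method from the article): the inter-site link must be able to drain the data churn generated during one RPO window.

```python
def replication_keeps_rpo(change_rate_gb_per_hour, link_mbps,
                          rpo_minutes, usable_fraction=0.7):
    """Rough feasibility check: can the inter-site link move the data
    churned during one RPO window before the window closes?"""
    # Data generated during one RPO window, in gigabits.
    churn_gbits = change_rate_gb_per_hour * 8 * (rpo_minutes / 60)
    # Usable link capacity over that same window, in gigabits,
    # derating the raw line rate for protocol overhead and other traffic.
    capacity_gbits = link_mbps * usable_fraction * rpo_minutes * 60 / 1000
    return capacity_gbits >= churn_gbits
```

For instance, 50 GB of change per hour against a 15-minute RPO clears a 1 Gbps link comfortably but not a 100 Mbps link -- a gap that's easy to miss until the first real failover.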

In the cloud, my deployment is going to look quite a bit different, even though it bears many of the same characteristics and faces many of the same challenges. Instead of two data centers packed with storage, networking, and server gear, I'll have a collection of (most likely virtual) computing instances running at an IaaS provider. Those instances are backed by storage and networking gear into which I have no real visibility, and they might sit behind a SaaS load-balancing layer -- all of it operated by my cloud provider.

The trick here is that while I have much less to worry about -- no hardware to touch, for example -- I also have far less control over what the services I have can do. In some cases, this might not matter. For example, if my cloud service provider has a published SLA for recovery from catastrophic site failure that matches my own, I might decide I can let it take the responsibility for ensuring that the infrastructure my application depends on stays up. More often, however, the cloud provider's SLA won't come anywhere near what my stakeholders demand of me.

In those instances, you might need to color outside the lines and build your own mousetrap. That could involve performing data replication at the application layer rather than at the storage layer (because you don't control the storage layer), or it might mean engaging a third-party service so that backups land on an infrastructure that shares no dependencies with the one running your production workload -- and so that you can dynamically redirect your users to a backup provider if you need to fail over or migrate.
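A minimal sketch of that "dynamically redirect your users" idea, with hypothetical endpoint URLs and a deliberately injected health probe so the selection logic stands on its own (real deployments would typically implement this with DNS failover or a global load balancer rather than in application code):

```python
def pick_endpoint(endpoints, probe):
    """Return the first endpoint whose health probe succeeds, walking
    the list in priority order: primary provider first, then the
    independent backup provider. `probe` is injected (a callable taking
    a URL and returning True/False) so the logic needs no network."""
    for url in endpoints:
        if probe(url):
            return url
    raise RuntimeError("no healthy endpoint available")


# Hypothetical example: a primary cloud and an independent backup provider.
ENDPOINTS = [
    "https://app.primary-cloud.example",
    "https://app.backup-provider.example",
]
```

The priority ordering is the design point: traffic only shifts to the backup provider when the primary's probe fails, which mirrors how a DNS failover record set behaves.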

Be realistic about the expectations you have of your cloud providers
No matter how you decide to leverage the cloud, do not make the mistake of assuming that someone has thought of everything for you.

Although just about every cloud provider out there wants to do a good job and has a vested interest in not letting you down, no cloud provider knows your business like you do, and its infrastructure is necessarily designed with a one-size-fits-all approach.

If you don't carefully consider how you would deploy the application on your premises and compare the results of your design decisions to the outcomes of the design decisions made by your cloud provider in constructing its infrastructure, you risk building expectations that your cloud provider cannot meet. It's critical to search for and identify those gaps up front so that you can find solutions to them before they become actual problems.

This article, "Two key reasons deploying to the cloud is different," originally appeared at InfoWorld.com, as part of Matt Prigge's Information Overload blog.

Copyright © 2013 IDG Communications, Inc.