A few months ago, I wrote about the benefits of getting a direct connection to the cloud. If you're considering heavy usage of public cloud services (especially IaaS and hybrid IaaS), having a direct connection onto your cloud service provider's network can make an enormous difference to the user experience. This is especially true for latency-sensitive applications, such as server-based computing (Terminal Services and Citrix VDI), and bandwidth-hungry use cases, such as data replication and cloud backup.
While you might be able to justify the cost -- or even show a savings -- of having a dedicated, high-bandwidth connection to a cloud service provider when running a large infrastructure, smaller businesses are much less likely to be able to make that math work. Instead, like the bulk of cloud users, most small businesses are forced to use commodity Internet access to reach resources they've decided to move into the cloud.
This may not be so bad if you have access to cheap, high-quality bandwidth or if your use case isn't particularly latency- or bandwidth-sensitive (Web-based SaaS applications like Salesforce.com might fall into this category). If you're not so lucky, it's difficult to know ahead of time how your user experience will be impacted. The surest way to know is to try it, and thankfully, many cloud providers will give you a trial run of their services so that you can do so. Unfortunately, if you want an accurate answer, "trying it" could mean migrating entire on-premise applications into the cloud -- a time-consuming process.
If you find yourself in that boat, there are still relatively accurate ways to get an idea of what to expect if you move on-premise services to the cloud. The two-part process involves first measuring the end-to-end quality of the connectivity you have into the cloud provider you expect to use, then simulating those conditions on your own network -- without moving anything anywhere. (I'll cover the second part in this column next week.)
Defining quality
A wide variety of factors affect the performance of traffic crossing the open Internet. These include geographical distance, logical network distance, traffic congestion, and bandwidth bottlenecks. These factors, in addition to outages and the limitations of your own last-mile Internet circuit, conspire to define the four major quality characteristics of any network connection: throughput, latency, packet loss, and jitter.
Throughput. Throughput is a straightforward metric: how much data you can move from point A to point B in a given period of time. Depending on your use case, one direction may matter more to you than the other, but in most instances you'll want to know both the premise-to-cloud and cloud-to-premise figures. For example, in a cloud backup scenario, you're primarily interested in your upstream (premise-to-cloud) throughput, but you'll also want to know your downstream (cloud-to-premise) throughput in case you need to restore something.
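If you don't have a measurement tool handy, you can get a rough sense of both directions with a quick script along these lines. Treat it as a sketch, not gospel: the URLs are hypothetical placeholders for a test file and an upload endpoint you'd stand up on a host inside the provider's network, and a purpose-built tool such as iperf3 running on that host will give you more rigorous numbers.

```python
# A rough, back-of-the-envelope throughput check over HTTP. Both URLs are
# hypothetical placeholders: you'd need a sizable test file to download and
# an endpoint that accepts uploads on a host you control in the provider's
# cloud. A dedicated tool such as iperf3 will produce more rigorous results.
import time
import urllib.request

DOWNLOAD_URL = "http://your-cloud-host.example.com/testfile.bin"  # hypothetical
UPLOAD_URL = "http://your-cloud-host.example.com/upload"          # hypothetical

def download_mbps(url):
    """Time a full download; returns cloud-to-premise throughput in Mbps."""
    start = time.time()
    with urllib.request.urlopen(url) as resp:
        size = len(resp.read())
    elapsed = time.time() - start
    return (size * 8 / 1_000_000) / elapsed

def upload_mbps(url, size=5_000_000):
    """POST a dummy payload; returns premise-to-cloud throughput in Mbps."""
    req = urllib.request.Request(url, data=b"\0" * size)  # data implies POST
    start = time.time()
    with urllib.request.urlopen(req):
        pass
    elapsed = time.time() - start
    return (size * 8 / 1_000_000) / elapsed

print(f"cloud-to-premise: {download_mbps(DOWNLOAD_URL):.1f} Mbps")
print(f"premise-to-cloud: {upload_mbps(UPLOAD_URL):.1f} Mbps")
```

Run it a few times at different hours of the day -- Internet congestion varies, and a single sample can be badly misleading.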
Latency. If you've ever pinged anything, you've tested latency. Latency is the amount of time it takes a packet to reach its destination and for a reply to make it back to you. This round-trip figure is usually reported as RTT (round-trip time), and latency is sometimes referred to simply as "delay." With a properly tuned TCP stack -- one with window scaling enabled and adequately sized buffers -- latency should have a relatively small effect on overall throughput.
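Measuring it is easy: a plain ping against a host in the provider's cloud usually does the job. If ICMP is blocked somewhere along the path, a small sketch like the one below can approximate RTT by timing TCP handshakes instead; the host and port here are placeholders for whatever is actually listening on the provider's side.

```python
# A quick latency check that approximates RTT by timing TCP handshakes,
# which sidesteps blocked ICMP and the elevated privileges raw pings can
# require. The host and port are assumptions -- point this at anything
# listening on a TCP port in your provider's cloud.
import socket
import time

HOST = "your-cloud-host.example.com"  # hypothetical cloud endpoint
PORT = 443                            # any open TCP port on that host
SAMPLES = 10

rtts = []
for _ in range(SAMPLES):
    start = time.time()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass  # connect() returns once the handshake completes, ~1 round trip
    rtts.append((time.time() - start) * 1000)  # convert to milliseconds
    time.sleep(0.5)  # brief pause between samples

print(f"min/avg/max RTT: {min(rtts):.1f}/"
      f"{sum(rtts) / len(rtts):.1f}/{max(rtts):.1f} ms")
```

As with throughput, take multiple samples at different times; the spread between your minimum and maximum RTT is also a first hint at the jitter on the path.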