A few months ago, I wrote about the benefits of getting a direct connection to the cloud. If you're considering heavy usage of public cloud services (especially IaaS and hybrid IaaS), having a direct connection onto your cloud service provider's network can make an enormous difference to the user experience. This is especially true for latency-sensitive applications, such as server-based computing (Terminal Services and Citrix VDI), and bandwidth-hungry use cases, such as data replication and cloud backup.
While you might be able to justify the cost -- or even show a savings -- of having a dedicated, high-bandwidth connection to a cloud service provider when running a large infrastructure, smaller businesses are much less likely to be able to make that math work. Instead, like the bulk of cloud users, most small businesses are forced to use commodity Internet access to reach resources they've decided to move into the cloud.
This may not be so bad if you have access to cheap, high-quality bandwidth or if your use case isn't particularly latency- or bandwidth-sensitive (Web-based SaaS applications like Salesforce.com might fall into this category). If you're not so lucky, it's difficult to know ahead of time how your user experience will be impacted. The surest way to know is to try it, and thankfully, many cloud providers will give you a trial run of their services so that you can do so. Unfortunately, if you want an accurate answer, "trying it" could mean migrating entire on-premise applications into the cloud -- a time-consuming process.
If you find yourself in that boat, there are still relatively accurate ways to get an idea of what to expect if you move on-premise services to the cloud. The two-part process involves first measuring the end-to-end quality of the connectivity you have into the cloud provider you expect to use, then simulating those conditions on your own network -- without moving anything anywhere. (I'll cover the second part in this column next week.)
But not every use case is concerned solely with how much data can be moved. Citrix ICA, Teradici PCoIP, and Microsoft RDP (all remote display presentation protocols used in server-based computing applications) are very latency-sensitive. Because these protocols let you access a desktop environment remotely, simple interactions like typing a letter on your keyboard and seeing the resulting character appear in the remote session are directly affected by latency. Very high latency can render these applications challenging at best and entirely unusable at worst.
Packet loss and delay. Because the Internet is ever changing, neither throughput nor latency is a stable, known figure. In the case of throughput, transient congestion or packet loss can cause short-term restrictions that come and go without much explanation. In the case of latency, congestion and routing changes can substantially affect how long your packets are in flight before they reach their destination.
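A quick way to get a first look at loss and delay is a long ping run against a host on your provider's network; the summary at the end reports both. Here's a minimal sketch assuming a Linux-style ping summary -- the sample text stands in for real output, and on an actual run the awk step would be fed the tail of a real ping session:

```shell
# On a real run, you would do something like:
#   ping -c 100 your-provider-endpoint
# and read loss and delay from the two summary lines, which look like this:
summary='100 packets transmitted, 98 received, 2% packet loss, time 99163ms
rtt min/avg/max/mdev = 20.1/23.4/61.0/4.2 ms'
# Pull out the loss percentage and the average round-trip time:
echo "$summary" | awk -F'[/,= ]+' '
  /packet loss/ { print "loss:", $6 }
  /^rtt/        { print "avg rtt:", $7, "ms" }'
```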
Jitter. This variation in latency over time can itself be measured -- a quality called "jitter." Jitter is the statistical dispersion of the latency results you see over a period of time, generally expressed as the standard deviation of a set of round-trip times. Low jitter indicates a stable amount of latency -- a condition far preferable to the alternative. Very high jitter can result in inconsistent user experiences and packet reordering, which in turn can place a substantial penalty on throughput.
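That definition can be sketched directly with ping and awk: collect a series of round-trip times, then take their standard deviation. The RTT values below are made-up samples standing in for real ping output:

```shell
# Jitter as the standard deviation of a series of round-trip times.
# On a real run, the RTTs would come from something like:
#   ping -c 20 host | grep -o 'time=[0-9.]*' | cut -d= -f2
# Made-up sample RTTs in milliseconds:
printf '22.1\n23.4\n21.9\n30.2\n22.7\n' |
awk '{ n++; sum += $1; sumsq += $1*$1 }
     END { mean = sum/n;
           printf "mean %.1f ms, jitter %.1f ms\n", mean, sqrt(sumsq/n - mean*mean) }'
# -> mean 24.1 ms, jitter 3.1 ms
```

Note the single outlier (30.2 ms) is what drives most of the jitter figure -- exactly the kind of inconsistency a remote-display user would feel as stutter.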
Before you can simulate the effects of these qualities on your own network, you need to see what the conditions on the Internet look like.
Testing throughput is a bit more involved because it requires two cooperating pieces of software running on both ends of the connection to accurately determine the maximum throughput. You have a variety of options. There are publicly available bandwidth testers on the Internet (Speedtest.net is one of the most popular), but these tools really test only your last-mile connection: DSL, cable, fiber, T1s, whatever you have to your premises. As a potential cloud user, you're interested in throughput from your premises to the cloud -- not to a random bandwidth-testing server.
Often, the only way to really test this full network path is to get access to a server on the cloud provider's network and install test software on it. Fortunately, this is fairly easy and cheap -- sometimes free. With Amazon Web Services, for example, firing up a simple (and free, if you have a new account) t1.micro instance is quite effortless. Once you have access to the console of the instance, you need only download a bandwidth-testing tool, poke a hole in Amazon's firewall for it (using a security group), then get the same testing software installed at your premises.
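As a sketch of that firewall step: with the AWS command-line tools (assuming you have them installed and configured -- the security group name and source address here are placeholders), opening iperf's default port looks like this:

```shell
# Allow inbound TCP 5001 (iperf's default port) to instances in the
# "default" security group; in practice, restrict --cidr to your own
# premises' public address rather than opening the port to the world.
aws ec2 authorize-security-group-ingress \
    --group-name default \
    --protocol tcp \
    --port 5001 \
    --cidr 203.0.113.10/32
```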
One well-known testing tool is iperf, along with its graphical front-end companion, jperf. To prepare a CentOS Linux server in the cloud, I run yum install iperf, which downloads and installs the iperf package. Next, I start iperf with the command line iperf -s -w 256K, which runs iperf as a server and tells it to use a 256K window size. iperf listens on port 5001 by default, but you can change this with the -p option.
On a Windows workstation at my premises, I download a recent jperf package, which includes both the Java-based jperf front end and a Windows-compatible iperf command-line build. (Searching for "jperf windows" will find you a few precompiled versions, or you can get the source from SourceForge and compile it yourself.) Then I fire up jperf and tell it to run a multithreaded bandwidth test against my Amazon instance, as you can see in the screenshot below:
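For what it's worth, the GUI isn't required on the client side either -- jperf simply drives iperf underneath. The run above corresponds to a command line along these lines (the hostname, stream count, and duration are illustrative):

```shell
# Client-side equivalent of the jperf run: a multithreaded TCP test
# against the cloud instance. The hostname is a placeholder for your
# instance's public DNS name.
iperf -c ec2-host.example.com -p 5001 -w 256K -P 4 -t 30
# -c host : run as a client against the iperf server
# -p 5001 : server port (iperf's default)
# -w 256K : TCP window size, matching the server
# -P 4    : four parallel streams (the multithreaded test)
# -t 30   : run the test for 30 seconds
```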