"It's no longer about delivering an application that is great; it's about whether that application can survive in the wild. You have to examine the maximum use the cloud-based application and network will sustain," Lanowitz says.
Get the right people involved
Jim Frey, managing research director at consultancy Enterprise Management Associates, agrees with Lanowitz. Complicating matters, his research has shown, is that IT groups don't always have the right people responsible for predicting and resolving bandwidth bottlenecks. Often, the people who know most about the network and can take steps to resolve problems before they occur aren't involved with cloud storage and applications.
Frey's February 2011 report "Network Management and the Responsible, Virtualized Cloud" found that 62 percent of the 151 IT professionals surveyed are using some form of cloud services. A majority of the total -- 66 percent -- rely on an in-house cloud or virtualization support team for service performance and quality monitoring and assurance. Other major players in cloud oversight in many shops work in storage or data management, data center/server operations and security.
But only 54 percent of those surveyed said they involve network engineering/operations personnel, down from 62 percent in 2009. Sadly, the move away from network engineering has left traditional network best practices by the wayside, according to Frey.
Cloud services and deployment of virtual server technology often result in reduced visibility and control in the enterprise, making it difficult to manage the network aspects, he contends. "There are virtual network elements that ... should be accorded the same best practices for monitoring and management as the other elements in the network connectivity path," he writes in the report.
Chief among virtual network attributes in need of attention, he later said, is bandwidth.
What's lacking at many IT shops, in his opinion, is attention to the health of overall traffic delivery. For instance, only 28 percent of survey respondents believe collecting packet traces between virtual machines for monitoring and troubleshooting is absolutely required. And only 32 percent feel that collecting data about traffic, i.e., NetFlow information, from virtual switches for monitoring and troubleshooting is absolutely required. Both tasks give IT insight into how the network and its pipes are performing.
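The kind of visibility Frey describes starts with simple aggregation of flow records. As an illustration only (the flow tuples and the `top_talkers` helper below are hypothetical, not from any specific NetFlow tool), here is a minimal sketch of how IT could roll up per-source traffic from virtual-switch flow exports to spot bandwidth hogs:

```python
from collections import defaultdict

# Hypothetical flow records of the sort a virtual switch exports via
# NetFlow/IPFIX, reduced to (source IP, destination IP, bytes sent).
flows = [
    ("10.0.0.5", "10.0.1.9", 1_200_000),
    ("10.0.0.5", "10.0.1.9",   800_000),
    ("10.0.0.7", "10.0.1.9",   150_000),
]

def top_talkers(flows, n=2):
    """Aggregate bytes per source host and return the n heaviest senders."""
    totals = defaultdict(int)
    for src, _dst, nbytes in flows:
        totals[src] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_talkers(flows))
# The heaviest sender here is 10.0.0.5 at 2,000,000 bytes.
```

In practice the records would come from a collector rather than a hard-coded list, but the aggregation step is the same.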
With this knowledge, businesses could discover that they need some type of extra help, such as WAN optimization controllers (WOCs) or application delivery controllers, to alleviate bottlenecks and improve the end-user experience. To prevent multiple copies of the same data from clogging pipes, IT could use de-duplication in physical and virtual WOCs deployed in-house and in the cloud. Or IT groups could cache data locally to shrink the amount of back-and-forth traffic.
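The de-duplication idea is straightforward: fingerprint chunks of data and only send chunks the far side has not already seen. Commercial WOCs do this with far more sophistication (variable-size chunking, persistent dictionaries on both ends), but a minimal sketch of the principle, with a hypothetical `dedupe_chunks` helper, looks like this:

```python
import hashlib

def dedupe_chunks(data, chunk_size=4096, seen=None):
    """Split data into fixed-size chunks and return only the chunks whose
    SHA-256 digest has not been seen before -- i.e., the bytes that would
    actually have to cross the WAN."""
    seen = set() if seen is None else seen
    to_send = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:
            seen.add(digest)
            to_send.append(chunk)
    return to_send

# Two identical 4KB chunks followed by one new chunk: only 2 unique
# chunks need to be transmitted instead of 3.
payload = b"A" * 8192 + b"B" * 4096
print(len(dedupe_chunks(payload)))
```

Passing the same `seen` set across calls models the persistent chunk dictionary a real WOC pair would maintain between sites.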
Optimizing the network for data backup
John Lax, vice president of information systems for Washington, D.C.-based International Justice Mission (IJM), credits WOCs for enabling the bandwidth-challenged global nonprofit's move to the cloud.
IJM, a human rights agency that rescues children from sex trafficking and slavery, has 500 employees and 14 field offices in 10 countries around the world. Lax says many employees endure the triple challenge of incredibly low bandwidth (e.g., 512Kbps), frail connections that frequently drop, and expensive fees (a 256Kbps link in Uganda costs $1,200 per month).
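To put links like those in perspective, a back-of-the-envelope calculation (my illustration, not IJM's figures beyond the 512Kbps rate quoted above) shows why optimization matters at these speeds:

```python
def transfer_hours(megabytes, link_kbps):
    """Rough time to push a payload over a link at full utilization,
    ignoring protocol overhead, drops, and retransmits (which would
    only make these numbers worse)."""
    bits = megabytes * 8 * 1_000_000          # payload size in bits
    seconds = bits / (link_kbps * 1_000)      # link rate in bits/sec
    return seconds / 3600

# Moving 1GB (1,000MB) of backup data over a 512Kbps link:
print(round(transfer_hours(1000, 512), 1))   # roughly 4.3 hours
```

Even a modest nightly backup can saturate such a link for hours, which is why de-duplication and caching, rather than raw capacity upgrades, were the practical path for a shop like IJM.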