How to choose between custom and commodity clouds

Dramatic price drops have helped popularize cloud computing. But as Brent Bensten of Carpathia observes, big enterprise workloads often require more configurability and control.

Not all cloud infrastructures are built the same way, and not all applications, services, and frameworks are built for cloud computing.

In this week's New Tech Forum, Brent Bensten, CTO of custom cloud provider Carpathia, discusses the phenomenon of "bottom dollar" cloud computing and what that means from a performance, security, and regulatory perspective -- and how to make sure the cloud resources you choose are the right tools for the job. -- Paul Venezia

Don't be fooled by the one-size-fits-all cloud

The public cloud is finally gaining acceptance. Freed from the shackles of the IT department, employees and their business units can now obtain the resources they need at a low cost of entry and hassle.

But what's good for the individual is not necessarily good for the enterprise. As familiarity with cloud architectures rises, so does the awareness that the public cloud does not suit every IT function, particularly when it comes to high-volume, low-latency applications like big data and rich media processing.

Cloud wars and the cost of commoditization

The main drawback of running high-end applications on public cloud resources is lack of customization. This is primarily due to the race-to-the-bottom pricing that top providers like Amazon, Google, Microsoft, and others have engaged in recently. It is now possible to find storage resources for about 2 cents per gigabyte per month and database operations for about a penny per 1,000 transactions.

Ultimately, this price war will influence all IT spending, whether for Web-based services or end-to-end IaaS (infrastructure as a service) ecosystems. From a purely operational perspective, it tends to mask two crucial aspects of the public cloud: First, the resources available for bottom dollar are usually low quality -- with limited availability, high latency, and other shortcomings that make them unsuitable for modern production environments. Second and even more important, most public clouds are built on commodity infrastructure designed to support low-cost, scale-out, virtual architectures.

This makes sense for the cloud provider: commodity infrastructure keeps hardware costs down and can be configured to meet the generalized computing needs of the widest array of users. The top providers long ago figured out that basic commodity hardware, much of it sourced directly from original design manufacturers, provides both the scale and the horsepower to support higher-level virtual environments.

Performance pitfalls of generalized service

For basic, noncritical applications and data, the public cloud provides a viable solution to rising volumes. But as many early adopters are finding out, it is woefully inadequate when it comes to high-order enterprise functions like data analytics and advanced business-process applications, almost all of which require highly specialized hardware configurations in order to deliver optimal performance. This level of customization is simply not possible on generic, commodity-based cloud platforms.

Big data analysis is a perfect example. Imagine a Hadoop cluster in a generic public cloud compute environment. That infrastructure could just as easily go toward Web applications, database management, or a hundred other uses. But to get top performance from Hadoop, you need to deliver the proper mix of CPU, RAM, storage, and network support depending on the nature of the loads you are running.
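To make that concrete, here is a minimal sketch of the kind of per-workload tuning involved, using standard Hadoop MapReduce and HDFS configuration properties. The class name and the specific values are illustrative assumptions, not recommendations; in practice they would be sized to the actual cluster hardware and job profile.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class TunedAnalyticsJob {
    // Sketch: tune container memory, JVM heap, block size, and intermediate
    // compression to the workload instead of accepting generic defaults.
    // All values below are hypothetical examples.
    public static Job buildJob() throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapreduce.map.memory.mb", "4096");       // YARN container size per mapper
        conf.set("mapreduce.map.java.opts", "-Xmx3276m");  // JVM heap kept below the container limit
        conf.set("mapreduce.reduce.memory.mb", "8192");
        conf.set("mapreduce.reduce.java.opts", "-Xmx6553m");
        conf.set("dfs.blocksize", "268435456");            // 256MB blocks for large sequential scans
        conf.setBoolean("mapreduce.map.output.compress", true); // compress shuffle output to ease network pressure
        return Job.getInstance(conf, "tuned-analytics-job");
    }
}

The point is less the specific numbers than the dependency: these settings pay off only when you can also control the underlying CPU, disk, and network characteristics, which a generic commodity cloud typically does not let you do.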

Optimum Hadoop performance can be achieved only with a customized, purpose-built configuration. To meet that challenge, many enterprises and government agencies seeking more powerful and cost-effective access to large-scale data processing capabilities have turned to HaaS (Hadoop as a service) solutions. Engineered to meet stringent federal requirements, HaaS solutions can process data many times faster than commodity-built Hadoop platforms and ensure mission-critical availability to meet the performance and compliance demands of any organization. Such approaches deliver the level of management, support, and reliability you expect from the rest of your IT and cloud infrastructure.

Balancing security and compliance

Lack of customization presents challenges beyond poor performance. Security and compliance issues are starting to crop up as well, particularly in such highly regulated areas as health care and government contracting.

HIPAA compliance can be particularly problematic in the public cloud. The latest rule changes greatly enhance patient privacy and confidentiality when it comes to the storage and sharing of personal information. By nature, public clouds are built on a shared infrastructure model with high levels of multitenancy on virtual compute, network, and storage systems. There may be some circumstances in which this will satisfy HIPAA requirements, but in most cases it will fall short. That means a data breach can be prevented only with a sophisticated security regime at the hypervisor layer, or even at the application and data layers.
