Recently, many vendors have championed hyperconverged architectures by arguing that Web-scale titans, like Google and Facebook, take similar approaches. This is incorrect. There may be commonalities between Web-scale and hyperconverged architectures, but there are also huge differences.
Now, simply because Web-scale companies don’t use hyperconverged systems doesn’t mean that you shouldn’t consider the approach. In fact, you probably don’t want a Web-scale data center unless you need to reach that scale. However, you should approach hyperconverged architectures with careful technical reasoning -- rather than looking to Google or Facebook for guidance.
Before discussing Web-scale architecture, we should remember that just as the right design for a small environment clearly won’t work in a large one, the inverse is often true: systems designed for large-scale environments often work poorly at small scale. The system you design should fit the task at hand. Many of the techniques used by Web-scale vendors would be inefficient if attempted in smaller environments, and what is optimal for Google may well be unworkable even for a large enterprise.
Hyperconverged systems are marked by a few main architectural decisions. They provide a reliable hardware abstraction (the virtual disk) by replicating between machines. They consolidate storage and compute on the same machines. They design for an environment that is free of custom hardware. They design for a decoupling of software and hardware.
As I outline below, Web-scale titans do not make any of these architectural decisions.
Web-scale companies don’t use reliable hardware abstractions
Hyperconverged systems provide a reliable hardware abstraction (the virtual disk over SCSI or SATA) using replication across systems. This reliable hardware layer sits below the application software.
In contrast, Web-scale vendors provide reliable software abstractions: object, filesystem, and so on. They do this because hardware abstractions, which were built for centralized environments, are extremely difficult to scale, as they impose strong consistency requirements. Meanwhile, Web-scale vendors can create software abstractions that provide reduced consistency tuned for a specific workload and thus scale while retaining performance and availability. Amazon S3, Google File System, and HDFS are all examples of distributed storage systems that provide interfaces specifically tuned for particular kinds of workloads. For example, S3 provides eventual consistency for read-mostly data, while HDFS is tuned for sequential processing compute frameworks like MapReduce.
NoSQL databases provide one of the best-known examples of this trade-off. NoSQL databases (like MongoDB, HBase, and Cassandra) relax consistency to provide highly available large-scale systems and expect unreliable individual nodes. In contrast, most traditional SQL databases (like Oracle or MySQL) provide strong ACID consistency but often have problems scaling beyond a certain point.
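This trade-off between consistency and availability can be sketched with a toy quorum store in the Dynamo/Cassandra style, where a write must be acknowledged by W of N replicas and a read must hear from R of them; when R + W > N, every read overlaps the latest acknowledged write. All class and method names below are illustrative and do not correspond to any vendor’s actual API:

```python
# Toy sketch of quorum-based tunable consistency (Dynamo/Cassandra style).
# Names are illustrative, not any vendor's API.

class Replica:
    """One unreliable node holding versioned key-value data."""
    def __init__(self):
        self.data = {}   # key -> (version, value)
        self.up = True

    def write(self, key, version, value):
        if not self.up:
            raise IOError("replica down")
        if version > self.data.get(key, (0, None))[0]:
            self.data[key] = (version, value)

    def read(self, key):
        if not self.up:
            raise IOError("replica down")
        return self.data.get(key, (0, None))


class QuorumStore:
    """N replicas; a write needs W acks, a read needs R answers.
    If R + W > N, a read always sees the newest acknowledged write."""
    def __init__(self, n=3, w=2, r=2):
        self.replicas = [Replica() for _ in range(n)]
        self.w, self.r = w, r
        self.version = 0

    def put(self, key, value):
        self.version += 1
        acks = 0
        for rep in self.replicas:
            try:
                rep.write(key, self.version, value)
                acks += 1
            except IOError:
                continue          # tolerate failed replicas
        if acks < self.w:
            raise IOError("write quorum not reached")

    def get(self, key):
        answers = []
        for rep in self.replicas:
            try:
                answers.append(rep.read(key))
            except IOError:
                continue
        if len(answers) < self.r:
            raise IOError("read quorum not reached")
        return max(answers)[1]    # newest version wins
```

Setting w=1 and r=1 gives the highly available, eventually consistent end of the spectrum typical of NoSQL deployments, while w=n behaves more like the strong consistency of a traditional SQL database -- and stops accepting writes as soon as a single node fails.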
The use of unreliable hardware abstractions to provide reliable software abstractions leads to an application design style often called “cattle” applications. In such applications, the hardware abstractions provided by virtual machines (or containers) are treated as unreliable, and cattle services build reliability into the application layer itself. For example, HDFS stores multiple copies of each piece of data, but the replication happens at the file system layer, not at the drive layer.
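The cattle pattern can be made concrete with a toy HDFS-style block store: a coordinator replicates each block across several unreliable nodes, and a read simply falls through to a surviving copy. The structure and names below are a simplified sketch for illustration, not HDFS’s actual design:

```python
# Toy sketch of application-level replication in the HDFS style:
# reliability lives in the storage software, not in a replicated
# virtual disk or RAID layer underneath it. Names are illustrative.

import random

class DataNode:
    """One 'cattle' node: cheap, unreliable, holds raw blocks."""
    def __init__(self):
        self.blocks = {}
        self.alive = True

class MiniDFS:
    """A toy namenode that places each block on `replication` nodes."""
    def __init__(self, nodes, replication=3):
        self.nodes = nodes
        self.replication = replication
        self.block_map = {}   # block_id -> list of node indexes

    def write_block(self, block_id, data):
        # Pick `replication` distinct nodes and store a copy on each.
        targets = random.sample(range(len(self.nodes)), self.replication)
        for i in targets:
            self.nodes[i].blocks[block_id] = data
        self.block_map[block_id] = targets

    def read_block(self, block_id):
        # Any surviving replica will do; dead nodes are simply skipped.
        for i in self.block_map[block_id]:
            node = self.nodes[i]
            if node.alive and block_id in node.blocks:
                return node.blocks[block_id]
        raise IOError("all replicas lost")
```

Because the copies are tracked at the file system layer, losing an entire node costs nothing but a re-replication pass; no individual drive or machine is expected to be reliable.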
Web-scale companies sometimes combine compute and storage ...
... but often separate them into different services.
Hyperconverged vendors argue that the storage for an application should be internalized with the compute for that application and run together on a common hardware base. While Web-scale vendors certainly place the computation and storage for given services together, they divide applications into microservices, which are separated by the network. Web-scale vendors generally make a strong distinction between services that store persistent state and services that are compute heavy and more or less stateless.
In many cases different services run on completely separate hardware suited to a particular application. For example, Amazon makes a clear distinction between instance storage, which lives on the EC2 host and is ephemeral, and its storage services. Both EBS (a block service) and S3 (an object service) are separate services accessed by EC2 instances over the network. While Amazon is not explicit about the hardware used to run them, it is clear there is at least no effort to co-locate EBS and EC2 on the same nodes.
Similarly, Facebook, Google, and Amazon have all stated publicly that they use heterogeneous hardware for their services to fit the appropriate hardware to the appropriate service.
While Web-scale vendors certainly co-locate the compute and storage for each service, they employ a variety of approaches to application design, often customizing the hardware platform for one kind of service separately from others.
Web-scale companies use custom hardware
Hyperconverged vendors argue that customers should only buy off-the-shelf, mass-produced servers from a hardware integrator in order to reduce cost. This is a trade-off: lower cost in exchange for the performance gains that more customized hardware can offer. Almost all vendors, whether they are appliance vendors, hyperconverged vendors, or Web-scale titans, use mass-produced, commodity components -- drives, CPUs, RAM, and so on -- to drive down the cost of acquisition and design.
However, Web-scale titans are actually the most likely to use either custom-built or heavily customized appliances. They tend to use custom appliances either because it allows them to reduce their overall cost or because it meets a specific need. Since they consume large volumes of hardware, they can afford the initial development costs.
For example, Google has publicly disclosed that it constructs its own servers with batteries built into the motherboard and hardware-synchronized clocks, and it builds its own custom networking switches.
Facebook has gone so far as to create the Open Compute Project, which discloses the custom hardware designs the company currently runs, in an attempt to build a community of companies that use and produce the hardware Facebook needs.
Web-scale companies tightly couple software and hardware
Hyperconverged vendors argue that infrastructure software (written by a software vendor) should be decoupled from the hardware running beneath (designed by an integrator), and application software (written by an app developer) should be decoupled from the virtualization infrastructure underneath it.
This is the inverse of most Web-scale companies. Not only do Web-scale companies customize their hardware, but they build hardware, infrastructure, and applications that are heavily coupled and specifically designed for their environment.
For example, Google’s Spanner is built with a specific requirement on hardware-level synchronized clocks. Kubernetes assumes that every compute node is given its own subnet. Resource allocation in Google Borg is tied to Google’s capacity planning, while cluster definitions in Borg rely on Google’s network topologies, among other factors.
Facebook’s load-balancing approaches are heavily tied to a set of site-specific services. Memcache (used by Facebook and others) assumes hardware with huge amounts of RAM (or more recently, high-performance flash).
Amazon’s reliability designs are built around synchronous replication between geographically close data centers.
This kind of software is an “installation” in contrast with enterprise (and hyperconverged) models that think of hardware and software as portable, composable units. I borrow the term “installation” from contemporary art, where some recent artists design site-specific “installation artworks” that are intimately tied to the location where the art resides.
Installation-style design is financially challenging for companies without immense scale. However, for those with the scale to allow it, an installation approach simplifies software and hardware management because it gives the application and infrastructure software designers the ability to tune to a narrow set of hardware. This allows software designers to squeeze the most performance from the hardware they have and solve challenging software problems by using appropriate custom hardware. By customizing their entire environment, Web-scale companies can create full data center appliances (also called “data center as a computer”), rather than using individual storage or compute appliances.
For example, Google structures its infrastructure around shipping containers full of hardware. Each container has cooling, compute, storage, and networking, all built to the specific requirements of Google’s infrastructure and applications. The designers of Google’s software frameworks, like Borg, can both impact the layout and benefit from customized hardware for that specific configuration.
Web-scale companies are not hyperconverged
In conclusion, Web-scale design is different from hyperconverged design in at least four significant aspects.
- Hyperconverged architectures use replication across nodes to provide reliable hardware abstractions, while Web-scale architectures build reliability into custom software abstractions provided by applications.
- Hyperconverged architectures always co-locate storage and compute, while Web-scale architectures often co-locate them but combine and separate the two as each service requires.
- Hyperconverged architectures are built around commodity hardware, while Web-scale architectures are built around heavily customized hardware.
- Hyperconverged architectures provide strong separation between software and hardware, while Web-scale architectures aggressively fit software to a specific data center design.
This doesn’t mean that hyperconverged architectures are a bad idea, only that they are very different from the architectures used in Web-scale environments. The systems in use at Web-scale vendors also come with trade-offs. In deciding what kind of architecture you need in your data center, you need to think carefully about what you need, not merely about what Google or Facebook is doing.
Brandon Salmon, Office of the CTO, has been at Tintri since 2009. He is a systems guy who loves to think about user experience, which he picked up from his doctoral work at Carnegie Mellon on distributed file systems for the home. He designed and implemented Tintri's initial algorithms for moving data between flash and disk, and he has worked on a number of areas since, most recently cloud technologies.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to email@example.com.