Google's cloud benchmarking tool ups its game

PerfKit hits 1.0, offering new tests and a stable programming framework for running benchmarks on most every cloud out there

Google's PerfKit toolset for benchmarking cloud environments was originally released earlier this year in a pre-1.0 version. Today, it's officially been bumped to a 1.0 release, with expanded support for various cloud providers and automation of 26 different benchmarks, up from the 20 originally provided.

Given how tough it can be to reliably benchmark any cloud, having an open source, cloud-agnostic toolkit to help make it happen is a net boon.

Google devised PerfKit as a way to benchmark a variety of different cloud resources. It doesn't just clock network speed or CPU, but the performance of real-world applications that are often part of cloud deployments. As such, MongoDB, Cassandra, and Hadoop were included in the original PerfKit package.

PerfKit emphasizes programmability and extensibility, since it controls every phase of the testing -- config, provisioning of resources, execution, teardown, and publishing of the results -- with Python scripts. The tester creates YAML files that describe how the tests are to be performed, with abstractions for needed resources like disk space, networking, firewalls, and VMs.
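Those configs pair a benchmark name with the resources it needs. As a rough sketch of the format -- the machine type, zone, and file name here are illustrative, and exact keys may vary between PerfKit versions:

```yaml
# Hypothetical PerfKit config: run the iperf network benchmark
# between two Google Cloud VMs. Machine type and zone are
# example values, not recommendations.
iperf:
  vm_groups:
    vm_1:
      vm_spec:
        GCP:
          machine_type: n1-standard-2
          zone: us-central1-a
    vm_2:
      vm_spec:
        GCP:
          machine_type: n1-standard-2
          zone: us-central1-a
```

A run would then be kicked off with something like `./pkb.py --benchmarks=iperf --benchmark_config_file=iperf.yaml`, after which PerfKit's Python scripts handle the provisioning, execution, teardown, and publishing steps automatically.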

The target environment can be a standalone system, a VM in a private cloud, or a VM on one of nine popular cloud providers: AliCloud, Amazon Web Services, CloudStack, DigitalOcean, Google Cloud Platform, Kubernetes, Microsoft Azure, OpenStack, and Rackspace.

The 1.0 label is only now being applied because Google needed to find "the right abstractions making it easy to extend and maintain," and "the right balance between variance and runtime," according to Google's blog post.

A few new benchmarks have also been added to the mix, namely EPFL EcoCloud Web Search and Web Serving. The former sets up an instance of the Nutch search engine (based on Lucene) and tests the system in question against simulated client traffic; the latter configures the Nginx Web server and benchmarks traffic to a synthetic Web application.

Another addition that came along the road to 1.0 is the ability to run benchmarks inside a Docker container. Its current implementation is a little limited, though; the only container supported right now is the stock Ubuntu image, although that image can be hosted on most any VM.

Copyright © 2015 IDG Communications, Inc.