Google whips up PerfKit tools to make cloud benchmarking easier

Assessing cloud performance is less of a struggle with PerfKit Benchmarker and Explorer's slew of open source benchmarks


If you've dealt with a cloud provider for any length of time, you know performance can vary widely and be hard to benchmark. Differences between providers, networks, and the benchmarks themselves make it tough to get straight answers about how well your cloud is actually running.

Google aims to remedy this situation with a pair of benchmarking and data-visualization tools. PerfKit Benchmarker and PerfKit Explorer are designed to gather and display "end-to-end time to provision resources in the cloud, in addition to reporting on the most standard metrics of peak performance."

It might be tempting to dismiss the tools as yet another way for Google to hype its own services with flattering metrics, but PerfKit Benchmarker and Explorer are open source and wrap together a number of existing, common benchmarks, which helps allay those concerns.

PerfKit Benchmarker, as described earlier this week in a Google Cloud Platform Blog post, is "a living benchmark framework, designed to evolve as cloud technology changes." The whole framework is written in Python and licensed under the Apache 2.0 license (ASLv2), so contributors can either add their own benchmarks or suggest improvements to existing ones. Explorer, which runs on a local machine, takes the results from Benchmarker and visualizes them in a dashboard. Results from multiple runs can be aggregated over time and thus seen in context. (Sample data is included to give an idea of how this works.)
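To get a feel for what contributing might look like, here is a minimal, hypothetical sketch of a benchmark module in Python. The prepare/run/cleanup shape and the Sample record are illustrative assumptions about how such a plugin could be structured, not the project's documented interface -- check the PerfKit Benchmarker repository for the real one.

import time
from dataclasses import dataclass

@dataclass
class Sample:
    metric: str   # e.g. 'end_to_end_time'
    value: float  # measured value
    unit: str     # e.g. 'seconds'

def prepare(spec):
    # Provision VMs or install packages the workload needs.
    spec['start'] = time.time()

def run(spec):
    # Execute the workload and return the resulting samples.
    elapsed = time.time() - spec['start']
    return [Sample('end_to_end_time', elapsed, 'seconds')]

def cleanup(spec):
    # Tear down whatever prepare() created.
    spec.clear()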

PerfKit Benchmarker currently includes 20 existing benchmarks that test a variety of common cloud technologies -- not only network performance and basic system benchmarking, but also MongoDB, Cassandra, and Hadoop -- for the sake of "getting a transparent view of application throughput, latency, variance, and overhead." Instructions are included for getting the tool set running on Amazon Web Services and Microsoft Azure, as well as Google Cloud Platform.
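Once a provider's credentials are set up per those instructions, a run reduces to a single command. Here is a minimal sketch of driving it from Python, assuming the repository's pkb.py entry point and its --cloud and --benchmarks flags; the exact flag names and benchmark set should be verified against the current docs.

import subprocess

# Run the same benchmarks against each supported cloud (assumed flag values).
for cloud in ('GCP', 'AWS', 'Azure'):
    subprocess.run(
        ['python', 'pkb.py', '--cloud=' + cloud, '--benchmarks=iperf,ping'],
        check=True,
    )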

Cloud providers regularly wrangle with one another over who can deliver the most performance the fastest, but inconsistent results, even from the same provider, remain a fact of life. Worse, getting accurate results from a cloud benchmark can be deceptively difficult: without context or analysis, raw numbers don't add up to much. Any tool set that makes that job less complex is welcome.

The PerfKit collection could end up becoming a standard-issue tool for IT folks who want to see how their cloud performance shapes up, but it'll be even more interesting to see cloud providers post running aggregate results (not one-shot tests) as a way to put meat on the bones of their performance claims.

Copyright © 2015 IDG Communications, Inc.
