New open source tools for measuring cloud performance

February 12, 2015

For those of you developing applications on the cloud, performance is often a critical concern. It turns out that it’s surprisingly difficult to evaluate cloud offerings beyond just looking at price or feature charts. When we looked at how our own users could measure the relative performance of Google Cloud Platform, it was clear they struggled with this exact problem.

We wanted to make evaluating cloud performance easy, so we collected input from other cloud providers, analysts, and experts from academia. The result is a cloud performance benchmarking framework called PerfKit Benchmarker. PerfKit is unique in that it measures the end-to-end time to provision resources in the cloud, in addition to reporting the standard metrics of peak performance. You'll now have a way to easily benchmark across cloud platforms, while getting a transparent view of application throughput, latency, variance, and overhead.

We created a visualization tool, PerfKit Explorer, to help you interpret the results. We're including a set of pre-built dashboards, along with sample data from our internal network performance tests. This way, you'll be able to play with PerfKit Explorer before loading your own data.

We're releasing the source code under the ASLv2 license, making it easy for contributors to collaborate and maintain a balanced set of benchmarks. If you'd like to see a benchmark added or removed, we welcome your participation through GitHub.

PerfKit is a living benchmark framework, designed to evolve as cloud technology changes, always measuring the latest workloads so you can make informed decisions about what’s best for your infrastructure needs. As new design patterns, tools, and providers emerge, we’ll adapt PerfKit to keep it current. It already includes several well-known benchmarks, and covers common cloud workloads that can be executed across multiple cloud providers.

Sample Dashboard of Compute Performance

Over the last year, we've worked with more than 30 leading researchers, companies, and customers, and we're grateful for their feedback and contributions. Those companies include: ARM, Broadcom, Canonical, CenturyLink, Cisco, CloudHarmony, CloudSpectator, EcoCloud/EPFL, Intel, Mellanox, Microsoft, Qualcomm Technologies, Inc., Rackspace, Red Hat, Tradeworx Inc., and Thesys Technologies LLC. In addition, we're excited that Stanford and MIT have agreed to lead a quarterly discussion on default benchmarks and settings proposed by the community.