On Tuesday, my company, Mammoth Data, released benchmarks on Google Cloud Dataflow and Apache Spark. The benchmarks were primarily for batch use cases on Google’s cloud infrastructure. Last year, Google contracted us to implement some use cases and gather user-experience feedback from practitioners in the field. As a follow-on, we did a benchmark for Google to see how its technology stacked up.
Benchmarks are often a black art of vendor-driven deception. I’ve never worked with a company more concerned with avoiding that. The benchmarks we released were constructed around Google Cloud Dataflow and Spark’s batch processing capabilities. They don’t address the more rapidly developing part of both engines: streaming.
We also wanted to avoid a “best SQL predicate pushdown” comparison. Because some queries don’t distribute well, Spark and Google Cloud Dataflow push the SQL to the underlying datastore. Benchmarking that would largely be a database-tuning exercise and, in my opinion, not very productive.
What is Google Cloud Dataflow?
Google Cloud Dataflow is closely analogous to Apache Spark in terms of API and engine. Both are directed acyclic graph (DAG)-based data processing engines. However, there are aspects of Dataflow that aren’t directly comparable to Spark. Where Spark is strictly an API and engine with supporting technologies, Google Cloud Dataflow is all that plus Google’s underlying infrastructure and operational support. A closer comparison to Google Cloud Dataflow is the managed Spark service available as part of the Databricks platform.
Google Cloud Dataflow is a great choice for well-thought-out, production-ready jobs. However, the technology lacks read-eval-print loop (REPL) support, and it's bound to Google’s cloud infrastructure. Apache Beam -- the API portion of Dataflow -- is an Apache Software Foundation incubation project. Beam supports Spark, Flink, and Google Cloud Dataflow as execution engines.
What the benchmarks say
In our testing, Google Cloud Dataflow was faster than Spark by a factor of five on smaller clusters and a factor of two on larger clusters. At the moment, Dataflow is limited to 1,024 cores; this is a hard limit set by Google that we expect will change in the near future. However, most workloads, even at larger organizations, fit well within that limit. (Spark has been benchmarked at 8,000 cores.)
While you can specify how many cores you want, Google's “autoscaling” will be a major boon to companies doing large batch jobs. Assigning too many cores may actually slow a job down on any of these engines. When running in the cloud, you need to think a bit more about running jobs cost-effectively, and the autoscaling feature allows you to do that. During the benchmark, autoscaling produced roughly the same result as when we manually picked the right number of cores for the job.
The great big data race
It was a privilege to work with Google, one of the originators of big data technology. Ultimately, we think the clash between Google Cloud Dataflow and Spark is a “cold war” in which the users of these technologies win. As Google Cloud Dataflow adds a feature, Spark will inevitably work to one-up it and the cycle will begin again. Some people may complain about the number of choices we have in engines and APIs, but the competition is driving innovation in ways we haven’t seen in the software industry in years.
The bottom line is that Google Cloud Dataflow is an excellent option for companies looking to do production-level big data processing in the cloud. It might not be the best choice for data scientists experimenting with data due to the lack of REPL support. But with Apache Beam, you could potentially write your production code once and run it on different engines (including Dataflow, Spark, and Flink) and make your choice later. Go ahead and check out the benchmarks yourself.
Andrew C. Oliver is a professional cat herder who moonlights as a software consultant. He is president and founder of Mammoth Data (formerly Open Software Integrators), a big data consulting firm based in Durham, N.C. He also writes InfoWorld’s Strategic Developer blog.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to firstname.lastname@example.org.