Alluxio, originally known as Tachyon, gives big data applications fast, unified access to the storage systems where their data resides.
Now at version 1.0, Alluxio provides frameworks such as Spark, MapReduce, Flink, and Presto with access to multiple types of storage systems. Cloud storage services Amazon S3, Google Cloud Storage, and OpenStack Swift are supported, alongside systems from storage vendors EMC and NetApp.
From the outside, Alluxio might seem like an in-memory caching system along the lines of Memcached or Redis. It is instead a layer that sits between distributed computing applications and storage, giving the former access to the latter through a unified API. Applications can use Alluxio's native API, which offers the highest speed, or legacy APIs (an HDFS-compatible implementation, for instance), which are slower but more broadly compatible.
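To make the legacy-API route concrete, here is a minimal sketch of how a Hadoop-based framework could be pointed at Alluxio through its HDFS-compatible client. The property name, class name, and port below follow Alluxio's documented Hadoop integration, but treat them as assumptions to verify against the Alluxio version in use:

```xml
<!-- core-site.xml (client side): register Alluxio's HDFS-compatible
     FileSystem implementation under the alluxio:// URI scheme.
     Property and class names assumed from Alluxio's Hadoop integration docs. -->
<configuration>
  <property>
    <name>fs.alluxio.impl</name>
    <value>alluxio.hadoop.FileSystem</value>
  </property>
</configuration>
```

With the Alluxio client jar on the framework's classpath, jobs can then read and write paths such as `alluxio://<master-host>:19998/data` (19998 being Alluxio's default master port) in place of `hdfs://` paths, with no application code changes.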
In a blog post published earlier this month, engineers at Intel described how Alluxio helps address a few common problems with big data frameworks, such as sharing data between applications. Rather than writing data to HDFS and reading it back out again, applications can write data to Alluxio's in-memory store and read it back at far greater speed.
Likewise, Alluxio can alleviate the JVM garbage collection and on-heap caching issues that frameworks like Spark exacerbate: data kept in Alluxio lives outside the JVM heap, so caching large datasets no longer drives up garbage collection pressure. IBM has claimed that, back in the Tachyon days, Alluxio outperformed in-memory HDFS by 110x for writes and "improves the end-to-end latency of a realistic workflow by 4x."
Alluxio complements other acceleration projects rather than competing with them; Apache Arrow, for instance, speeds up data processing by presenting data to an application in a format suited to modern CPUs. In such a pairing, data requested through Arrow could be fetched from storage and served by Alluxio.
In its Tachyon incarnation, Alluxio drew support from several big data projects, Spark chief among them. The company behind Alluxio plans to continue building support among other big data projects and storage system vendors.