Bigtable-inspired open source projects take different routes to the highly scalable, highly flexible, distributed, wide column data store
In this brave new world of big data, a database technology called "Bigtable" would seem to be worth considering -- particularly if that technology is the creation of engineers at Google, a company that should know a thing or two about managing large quantities of data. If you believe that, two Apache database projects -- Cassandra and HBase -- have you covered.
Bigtable was originally described in a 2006 Google research publication. Interestingly, that paper doesn't describe Bigtable as a database, but as a "sparse, distributed, persistent multidimensional map" designed to store petabytes of data and run on commodity hardware. Rows are uniquely indexed, and Bigtable uses the row keys to partition data for distribution around the cluster. Columns can be defined within rows on the fly, making Bigtable for the most part schema-less.
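The paper's "sparse, distributed, persistent multidimensional map" can be pictured as a map keyed by row, column, and timestamp. Here is a minimal in-memory sketch of that data model (class and method names are invented for illustration; a real Bigtable adds column families, persistence, and distribution):

```python
from collections import defaultdict

class WideColumnTable:
    """Toy model of Bigtable's data model: a sparse map of
    (row_key, column, timestamp) -> value. Absent cells cost nothing,
    and any row may carry any set of columns -- no fixed schema."""

    def __init__(self):
        # row_key -> column -> {timestamp: value}
        self._cells = defaultdict(lambda: defaultdict(dict))

    def put(self, row_key, column, timestamp, value):
        self._cells[row_key][column][timestamp] = value

    def get(self, row_key, column):
        """Return the most recent value for a cell, or None if unset."""
        versions = self._cells[row_key][column]
        if not versions:
            return None
        return versions[max(versions)]

# Columns are defined on the fly, per row:
t = WideColumnTable()
t.put("com.cnn.www", "contents:html", 1, "<html>...</html>")
t.put("com.cnn.www", "anchor:cnnsi.com", 1, "CNN")
print(t.get("com.cnn.www", "anchor:cnnsi.com"))  # -> CNN
```

The row key ("com.cnn.www" here, an example drawn from the Bigtable paper's web-crawl use case) is what a real cluster would hash or range-partition to place the row on a node.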
Cassandra and HBase have borrowed much from the original Bigtable definition. HBase describes itself as an "open source Bigtable implementation," whereas Cassandra descends from both Bigtable and Amazon's Dynamo. As such, the two share many characteristics, but there are also important differences.
Born for big data
Both Cassandra and HBase are NoSQL databases, a term for which you can find numerous definitions. Generally, it means you cannot manipulate the database with SQL. However, Cassandra has implemented CQL (Cassandra Query Language), the syntax of which is obviously modeled after SQL.
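To see how closely CQL tracks SQL, consider a table definition and a query (the table and column names below are illustrative, not drawn from any particular application):

```sql
CREATE TABLE users (
    user_id uuid PRIMARY KEY,
    name text,
    email text
);

SELECT name, email FROM users WHERE user_id = 62c36092-82a1-3a00-93d1-46196ee77204;
```

Anyone fluent in SQL can read this immediately; the differences surface mainly in what CQL forbids, such as joins and arbitrary WHERE clauses on non-key columns.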
Both are designed to manage extremely large data sets. HBase documentation proclaims that an HBase database should have hundreds of millions or -- even better -- billions of rows. Anything less, and you're advised to stick with an RDBMS.
Both are distributed databases, not only in how data is stored, but also in how the data can be accessed. Clients can connect to any node in the cluster and access any data.
Both claim near linear scalability. Need to manage twice the data? Then double the number of nodes in your cluster.
Both guard against data loss from cluster node failure via replication. A row written to the database is primarily the responsibility of a single cluster node (the row-to-node mapping being determined by whatever partitioning scheme you've employed). But the data is mirrored to other cluster members called replica nodes (the user-configurable replication factor specifies how many). If the primary node fails, its data can still be fetched from one of the replica nodes.
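The row-to-node mapping and replica placement can be sketched as hash partitioning on a ring, with replicas on the next nodes clockwise -- a simplified model in the spirit of Cassandra's SimpleStrategy. The node names, hash choice, and function names here are illustrative assumptions, not either product's actual implementation:

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICATION_FACTOR = 3  # user-configurable: how many copies of each row

def token(row_key: str) -> int:
    """Hash the row key to a position on the ring (MD5 used for illustration)."""
    return int(hashlib.md5(row_key.encode()).hexdigest(), 16)

def replicas(row_key: str) -> list:
    """The primary node plus (REPLICATION_FACTOR - 1) successors on the ring."""
    primary = token(row_key) % len(NODES)
    return [NODES[(primary + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

owners = replicas("user:42")
print(owners)  # three distinct nodes, e.g. ['node-c', 'node-d', 'node-a']
# If owners[0] fails, the row is still readable from owners[1] or owners[2].
```

Because the mapping is deterministic, any node a client connects to can compute which nodes own a given row and route the request accordingly -- which is why clients can connect anywhere in the cluster.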
Scorecard

Criteria weights: Installation and setup (15.0%), Ease of use (30.0%), Overall Score (100%)

| Apache Cassandra 2.0 | 8.0 | 8.0 | 7.0 | 8.0 | 9.0 |
| Apache HBase 0.94.12 | 7.0 | 7.0 | 7.0 | 7.0 | 9.0 |