Splice Machine advertises itself as "the only Hadoop RDBMS." The idea is to give you a transactionally correct database that has the underlying scalability features of HBase. According to its creators, Splice Machine behaves like a normal SQL RDBMS.
Splice Machine is built as a set of plug-ins to HBase, Hadoop's column-family database, combining HBase's coprocessor extension API, a modified version of Apache Derby, and some custom proprietary code. Splice Machine is also distribution-agnostic and can be installed on pure Apache, Hortonworks, or Cloudera flavors of Hadoop.
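Splice Machine's core pitch is standard, transactionally correct SQL layered over HBase's scalable storage. As a rough illustration of what "transactionally correct" means in practice, the sketch below uses SQLite purely as a stand-in (Splice Machine itself is not involved): in an ACID-compliant RDBMS, both updates in a funds transfer either commit together or roll back together.

```python
# Illustration only: SQLite stands in for any ACID-compliant SQL RDBMS.
# Splice Machine's claim is that the same semantics hold atop HBase.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # Transfer funds atomically: both updates commit, or neither does.
    conn.execute("UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 150 WHERE name = 'bob'")
    # Enforce an invariant; a violation aborts the whole transaction.
    (low,) = conn.execute("SELECT MIN(balance) FROM accounts").fetchone()
    if low < 0:
        raise ValueError("overdraft")
    conn.commit()
except ValueError:
    conn.rollback()  # neither update survives

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0}
```

The point of the guarantee: a reader never observes the intermediate state in which alice has been debited but bob not yet credited, which is precisely what plain HBase, lacking multirow transactions, cannot promise on its own.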
Why bend Hadoop into an RDBMS?
In interviewing Splice Machine CEO and co-founder Monte Zweben, I found that the folks at Splice Machine are likeable, smart people and that this is not their first rodeo (or startup, in this case). Nonetheless, I wasn't able to nail down a crystal-clear use case.
Monte is a polished interviewee, which means it's difficult to get him off his talking points. When I asked about use cases, he gave me a few (mainly generic ones from Splice Machine's white paper), but I never experienced an aha moment that left me thinking "oh, a better RDBMS" or "I see, Hadoop for people who don't want Hadoop." Then again, Splice Machine is very early stage, as is much of the NewSQL space where it resides.
At the moment, Splice Machine touts itself as both operational and analytical, which may repeat the RDBMS mistakes of the past. Few technologies have defined the industry or sold as well as the almighty RDBMS, but few have been so widely misused either.