Open source Hive: Large-scale, distributed data processing made easy

Thank heaven for Hive, a data analysis and query front end for Hadoop that makes Hadoop data files look like SQL tables

Bottom Line

Apache Hive is a specialized execution front end for Hadoop. Hive lets you write data queries in an SQL-like language -- the Hive Query Language (HQL) -- that are converted to map/reduce tasks, which are then executed by the Hadoop framework. You're using Hadoop, but it feels like you're talking SQL to an RDBMS.

Suppose you want to run regular statistical analyses on your Web site's traffic log data -- several hundred terabytes, updated weekly. (Don't laugh. This is not unheard of for popular Web sites.) You're already familiar with Hadoop (see InfoWorld's review), the open source distributed processing system that would be ideal for this task. But you don't have time to code Hadoop map/reduce functions. Perhaps you're not the elite programmer that everyone in the office thinks you are.

What you'd like to do is dump all that information into a database, and execute a set of SQL queries on it. But the quantity of data would overwhelm even an enterprise-level RDBMS.
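To make that concrete, here is the kind of query you'd want to run -- a hypothetical sketch, with an invented page_views table and invented column names standing in for your log schema:

SELECT url, COUNT(1) AS hits
FROM page_views
WHERE view_date >= '2009-06-01'
GROUP BY url;

Questions like "which pages drew the most traffic last week?" are trivial to express in SQL; the problem is purely one of scale.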

This is precisely the problem that engineers at Facebook encountered. They became interested in Hadoop as a means of processing their Web site's traffic data, which was generating terabytes per day, growing steadily, and overtaxing their Oracle database. Though they were happy with Hadoop, they wanted to simplify its use so that engineers could express frequently used analysis operations in SQL. The resulting Hadoop-based data warehouse application became Hive, and it helps process more than 10TB of Facebook data daily. Hive is now available as an open source subproject of Apache Hadoop.

Inside the Hive
Written in Java, Hive is a specialized execution front end for Hadoop. Hive lets you write data queries in an SQL-like language -- the Hive Query Language (HQL) -- that are converted to map/reduce tasks, which are then executed by the Hadoop framework. You're using Hadoop, but it feels like you're talking SQL to an RDBMS.
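For instance, an HQL aggregation like the one below (a sketch reusing the hypothetical page_views table from earlier) compiles, roughly speaking, into a map phase that scans the table's files and emits a key-value pair per row, and a reduce phase that totals the counts for each key:

-- Hive plans this as a map/reduce job: map tasks emit (referrer, 1) pairs,
-- and reduce tasks sum the pairs for each distinct referrer.
SELECT referrer, COUNT(1) AS visits
FROM page_views
GROUP BY referrer;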

Employing Hadoop's distributed file system (HDFS) as data storage, Hive inherits all of Hadoop's fault tolerance, scalability, and adeptness with huge data sets. When you run Hive, you are deposited into a shell, within which you can execute Hive Data Definition Language (DDL) and HQL commands. A future version of Hive will include JDBC and ODBC drivers, at which time you will be able to create fully executable "Hive applications" in much the same way that you can write a Java database application for your favorite RDBMS. (The current version of Hive -- 0.3.0 -- does have limited support for JDBC, but can only dispatch queries and fetch results.)
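Here is a minimal sketch of what a Hive shell session might look like -- the table definition and the HDFS path are hypothetical, and the exact syntax may vary between Hive releases:

-- Define a table over tab-delimited log files
CREATE TABLE page_views (view_date STRING, url STRING, referrer STRING, ip STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';

-- Pull log files already sitting in HDFS into the table
LOAD DATA INPATH '/logs/week24' INTO TABLE page_views;

-- An HQL query, executed behind the scenes as one or more Hadoop jobs
SELECT COUNT(1) FROM page_views;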

Installing Hive amounts to installing Hadoop plus a couple of additional download and configuration steps. (To install Hadoop, the best tutorial I've found is on Michael Noll's blog.) Or if you'd rather get straight to testing Hive without all the installation nonsense, you can download a VMware virtual machine image with Hadoop and Hive pre-installed. The virtual machine image is featured in an excellent Hive tutorial video available at the same Web site.

[Screenshot: the Hive Web Interface]
Although Hive Query Language (HQL) commands are usually executed from within the Hive shell, you can launch the Hive Web Interface service and run HQL queries from within a browser. You can start multiple queries, and the Web interface will let you monitor the status of each.