As more organizations deploy Hadoop to analyze vast reams of information, they may find they need to transfer large amounts of data between Hadoop and their existing databases, data warehouses, and other data stores. Now the volunteer developers behind a connector designed to speed this data exchange have received the full backing of the Apache Software Foundation.
Apache has promoted the Sqoop bulk data transfer tool as a top-level project, the organization announced Monday.
As a top-level project (TLP), Sqoop will get full use of Apache's infrastructure, including mailing lists, collaborative workspace, legal aid, and a code repository. TLP status also indicates that the Sqoop working group follows Apache's process and principles for developing and maintaining the software.
Sqoop provides a way to quickly transfer large amounts of data between the Hadoop data processing platform and relational databases, data warehouses, and other nonrelational data stores. It can work with most modern relational databases, such as MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and IBM DB2, as well as enterprise data warehouses.
Sqoop was designed to transfer billions of rows into Hadoop in a speedy, parallel fashion, said Arvind Prabhakar, the Apache Sqoop project leader, in a statement. Sqoop can place the data directly into storage governed by the Hadoop Distributed File System (HDFS), or pipe it to other Hadoop applications such as the HBase big-table data store or the Hive data warehouse software.
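The parallel import Prabhakar describes boils down to a single command line. The sketch below shows roughly what that looks like in Sqoop 1.4; the database host, credentials, and table names are hypothetical placeholders, not drawn from any of the deployments mentioned here:

```shell
# Pull an entire "orders" table from a (hypothetical) MySQL database
# into HDFS, splitting the copy across 8 parallel map tasks.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username analyst -P \
  --table orders \
  --target-dir /data/sales/orders \
  --num-mappers 8

# The same import can land directly in a Hive table instead of raw
# HDFS files by swapping the target for --hive-import.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username analyst -P \
  --table orders \
  --hive-import
```

Under the hood, each map task issues its own slice of the query (split on the table's primary key by default), which is what lets Sqoop move billions of rows without funneling them through a single connection.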
Currently at version 1.4, Sqoop has already been adopted for production duty by several Hadoop shops. Online marketer Coupons.com uses the software to exchange data between Hadoop and the IBM Netezza data warehouse appliance: the company can query its structured databases and pipe the results into Hadoop using Sqoop. Education company Apollo Group uses the software not only to extract data from databases but also to inject the results of Hadoop jobs back into relational databases.
Sqoop first became an Apache incubator project in 2011.
Founded in 1999, the not-for-profit ASF supports more than 150 open source projects, including such widely used software as the Apache Web server, the Tomcat application server, the Cassandra database, the Lucene search engine, and the Hadoop data analysis platform. Facebook, Google, IBM, Hewlett-Packard, Microsoft, VMware, and Yahoo are among the companies that financially support the ASF.