You can't have a conversation in today's business technology world without touching on the topic of big data.
Simply put, it refers to data sets so large in volume, velocity and variety that they're impractical to manage with conventional database tools. In 2011, our global output of data was estimated at 1.8 zettabytes (a zettabyte is 1 billion terabytes). Even more staggering is the widely quoted estimate that 90 percent of the world's data was created within the past two years.
Behind this explosive growth in data, of course, is the world of unstructured data. At last year's HP Discover Conference, Mike Lynch, executive vice president of information management and CEO of Autonomy, talked about the huge spike in the generation of unstructured data. He said the IT world is moving away from structured, machine-friendly information (managed in rows and columns) and toward the more human-friendly, unstructured data that originates from sources as varied as e-mail and social media and that includes not just words and numbers but also video, audio and images.
Given the rise of big data, I'm sure you're hearing the buzz around Apache Hadoop, the open-source software framework, distributed under the Apache License, that supports data-intensive distributed applications. It enables applications to work across thousands of nodes and petabytes (a petabyte is a thousand terabytes) of data. It certainly looks like the Holy Grail for organizing unstructured data, so it's no wonder everyone is jumping on this bandwagon. A quick Web search will show you that in just the past few months, companies including EMC, Microsoft, IBM, Oracle, Informatica, HP, Dell and Cloudera (to name a few) have adopted this software framework.
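For readers curious how Hadoop spreads work across all those nodes, its core programming model is MapReduce: a map step that emits key-value pairs, a shuffle that groups them by key, and a reduce step that aggregates each group. Here is a minimal sketch of that model in plain Python (no Hadoop installation needed); the function names and the word-count task are illustrative, not Hadoop's actual API:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input record.
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as Hadoop does
    # between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: aggregate (here, sum) the values for each key.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data meets Hadoop", "Hadoop scales with data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["hadoop"])  # → 2
```

In a real cluster, each phase runs in parallel on many machines and the "records" are blocks of files stored across the cluster, which is what lets Hadoop scale to petabytes.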
What I find even more notable is that companies such as Yahoo, Amazon, comScore and AOL have turned to Hadoop to both scale their businesses and lower storage costs.
According to recent research from Infineta Systems, a WAN-optimization startup, traditional data storage runs about $5 per gigabyte, while storing the same data with Hadoop costs about 25 cents per gigabyte, a roughly 20-fold saving.
That's one number any CEO will remember.
So get ready for Hadoopalooza 2012. I'd love to hear what you're doing to tackle big data storage, so please drop me a line anytime.
Michael Friedenberg is the president and CEO of CIO magazine's parent company, IDG Enterprise. Email him at firstname.lastname@example.org.
This story, "Why big data means a big year for Hadoop," was originally published by CIO.