For many technologists, big data is no longer a distant echo on their radar. It no longer represents unrealistic expectations of executives, for it has become part of their daily job, part of their IT infrastructure, to a point where these technologists no longer differentiate "big data" from what they used to call "information management" or just "data".
Still, big data has been, and remains, the most hyped technology in IT -- ever. I recall the client-server hype, the "Year 2K" hype, the cloud computing hype. But I don't recall such an explosive combination of general public awareness, executive expectations, and accelerated technology frenzy. Of course, the Year 2K "bug" garnered lots of media coverage, but the solution to it was very low-tech: check and fix those COBOL date fields. Yes, cloud computing changed the IT architecture, but what the general public knows about the cloud is that it's where you back up your photos (and hackers steal them).
According to analyst firm Gartner, 73 percent of organizations have either invested or plan to invest in big data in the next two years. This number, from a survey performed earlier this year, is (unsurprisingly) up from 64 percent one year ago. For a full three quarters of IT executives, big data is, or will soon be, part of their everyday life at work.
But what is big data in the first place? Intuitively, one would reply "lots of data". They would be correct, of course -- volume is one characteristic of big data. But so are variety (styles, types, sources) and velocity (differences in cycles, real-timeliness). These constitute the three "V"s of big data. However, big data is not only about the data; it is also about what you do with it and how you do it. In today's hyped IT world, confusion is high between the data itself and the technologies used to gain value from that data.
Organizations have already embraced, or will soon embrace the use of more data: social data, sensor data, historical data, transaction data, log data, and more. As they attempt to increase customer retention or product stickiness, as they optimize processes and discover new sources of value, every data point, every piece of knowledge, is important.
Even though the terminology "real-time enterprise" is now dated, many of these organizations are tending toward a more timely use of their data. Demands for more variety inevitably drive data volumes to increase. Higher volumes, higher velocity, and higher variety of data all call for new technologies, built for that purpose, and that scale both technically and economically.
This is where the past three to five years have made a huge difference. The emergence of Hadoop, first as a batch engine, and subsequently as a multi-purpose, multi-workload data storage and processing platform, has democratized the technologies that can handle the volume, variety, and velocity of data. NoSQL databases come from the same vein, as do enterprise-focused platforms such as SAP HANA.
In the same way that larger, faster, and more diverse data has found (or is finding) its place in a broader information landscape, the "big data" technology stack is now merging with the existing IT stack.
When what is now commonly called "big data" becomes no more special than other data, when the technologies used to store and process this data truly become commonplace, then will the hype around big data start to decline, and fall from grace.
But not to worry. Only the hype will decline and fall. What is now big data, and big data technologies, will remain. And just be called "data".
This article is published as part of the IDG Contributor Network.