Scaling up, and up
One early Hadoop adopter, Duluth, Ga.-based Concurrent, sells video-streaming systems. It also stores and analyzes huge quantities of video data for its customers. To better cope with the ever-rising amount of data it processes, Concurrent started using Cloudera's CDH distribution of Hadoop two years ago.
"Hadoop is the iron hammer we use for taking down big data problems," says William Lazzaro, Concurrent's director of engineering. "It allows us to take in and process large amounts of data in a short amount of time."
One Concurrent division collects and stores consumer statistics about video. That's where Hadoop comes to the rescue, Lazzaro says. "We have one customer now that is generating and storing three billion [data] records a month. We expect at full rollout in the next three months that it will be 10 billion records a month."
Two key limitations for Concurrent in the past were that traditional relational databases can't handle unstructured data such as video and that the amount of data to be processed and stored was growing exponentially larger. "My customers want to keep their data for four to five years," Lazzaro explains. "And when they're generating one petabyte a day, that can be a big data problem."
With Hadoop, Concurrent engineers found that they could handle the growing needs of their clients, he says. "During testing they tried processing two billion records a day for the customer, and by adding another server to the [cluster] we found we could complete what they needed and that it scaled immediately," Lazzaro says.
The company ran the same tests on traditional relational databases for comparison. One key benefit of Hadoop, it found, was that additional hardware could be added easily and quickly as needed, with no extra licensing fees, because the software is open source. "That became a differentiator," Lazzaro says.
Another Hadoop user, life sciences and genomics company NextBio, of Santa Clara, Calif., works on projects involving huge data sets for human gene sequencing and related scientific research.
"We bring in all kinds of genomics data, then curate it, enrich it and compare it with other data sets" using Hadoop, says Satnam Alag, vice president of engineering for NextBio. "It allows mass analytics on huge amounts of public data" for its customers, which range from pharmaceutical companies to academic researchers. NextBio uses a Hadoop distribution from MapR.
A typical full genome sequence can contain 120GB to 150GB of compressed data, requiring about half a terabyte of storage for processing, he says. In the past, it would take three days to analyze it, but with 30 to 40 machines running Hadoop, NextBio's staff can do it now in three to four hours. "For any application that has to make use of this data, this makes a big difference," Alag says.
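The speedup Alag cites can be sanity-checked with back-of-envelope arithmetic. A short sketch (the day and machine counts come from the article; the midpoints are assumptions for illustration):

```python
# Sanity check of NextBio's numbers: ~3 days before Hadoop vs.
# 3-4 hours on 30-40 machines (figures quoted in the article).
old_hours = 3 * 24   # ~3 days to analyze one genome previously
new_hours = 3.5      # midpoint of "three to four hours" (assumed)
nodes = 35           # midpoint of "30 to 40 machines" (assumed)

speedup = old_hours / new_hours   # wall-clock speedup, ~20x
efficiency = speedup / nodes      # fraction of ideal linear scaling
print(f"speedup ~{speedup:.0f}x, parallel efficiency ~{efficiency:.0%}")
```

A roughly 20x speedup from ~35 machines works out to around 60% parallel efficiency, a plausible figure once shuffle and coordination overhead are accounted for.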
Another big advantage is that he can keep scaling the system up as needed by simply adding more nodes. "Without Hadoop, scaling would be challenging and costly," he says. This so-called horizontal scaling -- adding more nodes of commodity hardware to the Hadoop cluster -- is a "very cost-effective way of scaling our system," Alag explains. The Hadoop framework "automatically takes care of nodes failing in the cluster."
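The horizontal-scaling pattern Alag describes can be illustrated with a toy map/shuffle/reduce in plain Python. This is not Hadoop itself (real jobs use Hadoop's MapReduce APIs), and the record shape and key name are assumptions for illustration; the point is that each partition is processed independently, which is why adding nodes adds capacity:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (key, 1) pair per record -- here, counting
    # events per video ID (a hypothetical field for illustration).
    for rec in records:
        yield rec["video_id"], 1

def reduce_phase(key, values):
    # Reduce: aggregate all mapped values for one key.
    return key, sum(values)

def run_job(records, num_nodes):
    # Partition the input across "nodes"; with N nodes, each handles
    # roughly 1/N of the data -- the horizontal scaling Alag describes.
    partitions = [records[i::num_nodes] for i in range(num_nodes)]

    # Shuffle: group every mapped (key, value) pair by key. In real
    # Hadoop, a failed node's partition is rescheduled on a healthy one,
    # which is the automatic fault handling the framework provides.
    grouped = defaultdict(list)
    for part in partitions:
        for key, value in map_phase(part):
            grouped[key].append(value)

    return dict(reduce_phase(k, v) for k, v in grouped.items())

events = [{"video_id": v} for v in ["a", "a", "b", "a", "b"]]
print(run_job(events, num_nodes=4))
```

Because the result is the same regardless of how many partitions the input is split into, capacity can grow by adding commodity nodes rather than buying a bigger machine.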
That's dramatically changed the way the company can expand its computing power to meet its needs, he says. "We don't want to spend millions of dollars on infrastructure. We don't have that kind of money available."