Best practices for developing data-integration pipelines

As data and analytics become more critical to business operations, it’s important to engineer and deploy strong, maintainable data pipelines


Data-integration pipeline platforms move data from a source system to a downstream destination system. Because data pipelines can deliver mission-critical data and power important business decisions, ensuring their accuracy and performance is required whether you implement them through scripts, data-integration and ETL (extract, transform, and load) platforms, data-prep technologies, or real-time data-streaming architectures.

When you implement data-integration pipelines, you should consider several best practices early in the design phase to ensure that the data processing is robust and maintainable. Whether or not you formalize it, there’s an inherent service level in these data pipelines, because they can affect whether reports are generated on schedule or whether applications have the latest data for users. There is also an ongoing need for IT to make enhancements to support new data requirements, handle increasing data volumes, and address data-quality issues.

If you’ve worked in IT long enough, you’ve probably seen the good, the bad, and the ugly when it comes to data pipelines. Figuring out why a data-pipeline job failed when it was written as a single, several-hundred-line database stored procedure with no documentation, logging, or error handling is not an easy task. So, when engineering new data pipelines, consider some of these best practices to avoid such ugly results.

Apply modular design principles to data pipelines

As a data-pipeline developer, you should consider the architecture of your pipelines so they adapt readily to future needs and are easy to diagnose when there are issues. You can do this by modularizing the pipeline into building blocks, with each block handling one processing step and then passing processed data to additional blocks. ETL platforms from vendors such as Informatica, Talend, and IBM provide visual programming paradigms that make it easy to develop building blocks into reusable modules that can then be applied to multiple data pipelines.
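The same modular principle applies outside visual ETL tools. The sketch below, a hypothetical illustration rather than code from any of the platforms named above, shows one way to express a pipeline as small single-purpose building blocks composed in sequence; the function names and sample records are invented for the example:

```python
# Each building block is a small function with one responsibility.
# The pipeline composes them in order, so a failing step is easy to
# isolate and each block can be reused in other pipelines.

def extract(rows):
    """Pass raw records into the pipeline."""
    return list(rows)

def clean(rows):
    """Drop records that are missing an 'id' field."""
    return [r for r in rows if r.get("id") is not None]

def transform(rows):
    """Normalize the 'name' field to lowercase."""
    return [{**r, "name": r["name"].lower()} for r in rows]

def run_pipeline(rows, steps):
    """Apply each building block in sequence, passing data along."""
    for step in steps:
        rows = step(rows)
    return rows

raw = [{"id": 1, "name": "Alice"}, {"id": None, "name": "Bob"}]
result = run_pipeline(raw, [extract, clean, transform])
# result == [{"id": 1, "name": "alice"}]
```

Because each block takes and returns the same shape of data, you can unit-test a block in isolation or swap in a replacement without touching the rest of the pipeline.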

Moustafa Elshaabiny, a full-stack developer at CharityNavigator.org, has been using IBM DataStage to automate data pipelines. He says that “building our data pipeline in a modular way and parameterizing key environment variables has helped us both identify and fix issues that arise quickly and efficiently. Modularity makes narrowing down a problem much easier, and parametrization makes testing changes and rerunning ETL jobs much faster.”
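Parameterizing environment variables, as described in the quote above, typically means reading connection strings and tuning values from the environment rather than hard-coding them. A minimal sketch of that idea follows; the variable names (PIPELINE_DB_URL, PIPELINE_BATCH_SIZE) and defaults are hypothetical, not taken from DataStage or CharityNavigator.org:

```python
import os

def load_config():
    """Read key pipeline settings from environment variables,
    falling back to development defaults when they are unset.
    The same job can then point at dev, test, or prod without
    any code changes."""
    return {
        "db_url": os.environ.get("PIPELINE_DB_URL", "sqlite:///dev.db"),
        "batch_size": int(os.environ.get("PIPELINE_BATCH_SIZE", "500")),
    }

config = load_config()
```

Rerunning a failed ETL job against a test destination then becomes a matter of exporting different variables before launch, rather than editing and redeploying the job itself.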
