How to validate data, analytics, and data visualizations

Application testing often ignores one crucial set of tests that is critical for any application processing or presenting data, analytics, or data visualizations


Testing applications is a maturing discipline with tools that help quality assurance teams develop and automate functional tests, run load and performance tests, perform static code analysis, wrap APIs with unit tests, and validate applications against known security issues. Teams practicing devops can implement continuous testing by including all or a subset of their automated tests in their CI/CD pipelines and use the results to determine whether a build should be delivered to the target environment.

But all these testing capabilities can easily ignore one crucial set of tests that is critical for any application processing or presenting data, analytics, or data visualizations.

Is the data accurate and are the analytics valid? Are the data visualizations showing results that make sense to subject matter experts? Furthermore, as a team makes enhancements to data pipelines and databases, how should they ensure that changes don’t harm a downstream application or dashboard?

In my experience developing data- and analytics-rich applications, this type of testing and validation is often an afterthought compared to unit, functional, performance, and security testing. It's also a harder set of test criteria to implement, for several reasons:

  • Validating data and analytics is hard for developers, testers, and data scientists who are usually not the subject matter experts, especially on how dashboards and applications are used to develop insights or drive decision-making.
  • Data by itself is imperfect, with known and often unknown data-quality issues.
  • Capturing validation rules isn't trivial, because there are often common rules that apply to most of the data plus additional rules for different types of outliers. Capturing and coding these rules can be a difficult and complex proposition for applications and data visualizations that process large volumes of complex data sets.
  • Active data-driven organizations are loading new data sets and evolving data pipelines to improve analytics and decision-making.
  • Data-processing systems are often complex, with different tools for integrating, managing, processing, modeling, and delivering results.

The first time teams present bad data or invalid analytics to stakeholders is usually the wake-up call that they need practices and tools to test, diagnose, and resolve these data issues proactively.

Understanding data lineage and data quality

Data problems are best addressed at their sources and through the various data transformations performed in loading and processing the data. If the source data has new data-quality issues or if there are defects introduced to the data pipeline, it’s far more efficient to identify and resolve these early in the data-processing pipeline.

Two practices and related tools help with these issues. Both enable development and data teams to identify data issues before they reach downstream data visualizations and applications.

The first practice involves data-quality tools, which are often add-on capabilities to extract, transform, and load (ETL) platforms as well as to some data-prep tools. Data-quality tools serve multiple purposes, but one thing they can do is identify and correct known data issues. Some corrections can be automated, while others can be flagged as exceptions and sent to data stewards to correct manually or to update the cleansing rules.

Informatica, Talend, IBM, Oracle, Microsoft, and many others offer data-quality tools that plug into their ETL platforms, while data-prep tools from Tableau, Alteryx, Paxata, Trifacta, and others have data-quality capabilities.
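These platforms have their own rule engines and steward workflows, but the underlying pattern is straightforward. The following is a minimal sketch in Python with pandas, assuming hypothetical column names and rules, that splits incoming rows into clean records and exceptions for data stewards to review:

```python
import pandas as pd

# Hypothetical order data; the columns and rules are illustrative assumptions.
orders = pd.DataFrame({
    "order_id": [1001, 1002, 1003, 1004],
    "amount": [250.0, -40.0, 125.5, None],
    "region": ["east", "west", "unknown", "east"],
})

VALID_REGIONS = {"east", "west", "north", "south"}

def quality_check(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Split rows into (clean, exceptions) based on simple data-quality rules."""
    problems = (
        df["amount"].isna()                   # missing amounts
        | (df["amount"] < 0)                  # negative amounts are invalid here
        | ~df["region"].isin(VALID_REGIONS)   # unrecognized region codes
    )
    return df[~problems], df[problems]

clean, exceptions = quality_check(orders)
# Clean rows continue through the pipeline; exceptions go to data stewards
# to correct manually or to drive updates to the cleansing rules.
print(f"{len(clean)} clean rows, {len(exceptions)} exceptions for review")
```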

The second practice is data lineage. While data quality helps identify data issues, data lineage is a set of practices and tools that track changes to data and underlying implementations. They help users understand where in the data life cycle a transformation, calculation, or other data manipulation is implemented. Data-lineage tools, reports, and documentation can then be used to trace back into a data pipeline and help pinpoint where in a data flow a defect or other problem was introduced.
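Dedicated lineage tools capture this metadata automatically across an entire platform. As a simplified illustration of the idea, the sketch below (Python with pandas; the step names and recorded fields are assumptions) logs what each pipeline step did so a defect can be traced back to the step that introduced it:

```python
from datetime import datetime, timezone
import pandas as pd

lineage_log: list[dict] = []

def run_step(name: str, func, df: pd.DataFrame) -> pd.DataFrame:
    """Run one transformation and record lineage metadata for later tracing."""
    rows_in = len(df)
    result = func(df)
    lineage_log.append({
        "step": name,
        "ran_at": datetime.now(timezone.utc).isoformat(),
        "rows_in": rows_in,
        "rows_out": len(result),
        "columns_added": sorted(set(result.columns) - set(df.columns)),
    })
    return result

# Hypothetical transformations chained together.
df = pd.DataFrame({"amount": [100.0, 200.0, -5.0]})
df = run_step("drop_negative_amounts", lambda d: d[d["amount"] >= 0], df)
df = run_step("add_tax_column", lambda d: d.assign(tax=d["amount"] * 0.08), df)

# When a downstream dashboard looks wrong, the log shows which step
# dropped rows or introduced a calculated column.
for entry in lineage_log:
    print(entry)
```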

Using golden data sets to validate data visualizations

Analytics, dashboards, and data visualizations don’t operate on static data sources. The data is changing at some velocity, and at the same time developers and data scientists may be modifying the underlying data flows, algorithms, and visualizations. When you’re looking at a dashboard, it can be difficult to tell whether an unanticipated data issue stems from a programmatic change or from changes in the data or its quality.

One way to isolate changes is to set aside a known golden data set to help validate data flow, application, and data visualization changes. Using a golden data set, a testing team can define unit, functional, and performance tests to validate and compare outputs. Testers can run A/B tests, where A is the output before implementation changes were introduced and B is the output after the changes were made. The tests should only show differences in output in expected areas where the data flows, models, analytics, business logic, or visualizations were changed.
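One way to put this into practice is a regression test that runs an analytic against the golden data set and compares the result with a previously approved baseline. Here is a minimal sketch using pandas, intended to run under pytest; the file paths, columns, and tolerance are assumptions, not part of any specific product:

```python
import pandas as pd

# Illustrative paths: a golden input and its approved, expected output.
GOLDEN_INPUT = "tests/golden/orders.csv"
APPROVED_BASELINE = "tests/golden/expected_revenue_by_region.csv"

def revenue_by_region(orders: pd.DataFrame) -> pd.DataFrame:
    """The analytic under test: total revenue per region."""
    return (orders.groupby("region", as_index=False)["amount"]
                  .sum()
                  .rename(columns={"amount": "revenue"}))

def test_revenue_matches_approved_baseline():
    orders = pd.read_csv(GOLDEN_INPUT)
    actual = revenue_by_region(orders).sort_values("region").reset_index(drop=True)
    expected = pd.read_csv(APPROVED_BASELINE).sort_values("region").reset_index(drop=True)
    # Any difference beyond intentional, expected changes should fail the build.
    pd.testing.assert_frame_equal(actual, expected, check_exact=False, rtol=1e-6)
```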

While this is a relatively simple concept, it’s not trivial to implement.

First, teams have to create the golden data sets and decide what volume and variety of data constitutes a comprehensive sample set to test. It may also require multiple data sets to help validate different data segments, boundary conditions, or analytical models. One tool that can help teams manage test data is Delphix; other vendors also offer test-data-management capabilities.
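Test-data-management products handle this at scale, but the basic idea of assembling a golden data set can be sketched simply: take a representative sample and deliberately add the boundary conditions the analytics must handle. Everything below, including the file names and columns, is an illustrative assumption:

```python
import pandas as pd

# Assume a large production extract; the columns are illustrative.
production = pd.read_csv("extracts/orders_full.csv")

# Stratified sample so every region and order type is represented.
sample = (production
          .groupby(["region", "order_type"], group_keys=False)
          .apply(lambda g: g.sample(min(len(g), 50), random_state=42)))

# Deliberately include boundary conditions such as zero and extreme amounts.
boundaries = pd.DataFrame({
    "order_id": [999001, 999002],
    "region": ["east", "west"],
    "order_type": ["standard", "standard"],
    "amount": [0.0, 10_000_000.0],
})

golden = pd.concat([sample, boundaries], ignore_index=True)
golden.to_csv("tests/golden/orders.csv", index=False)
```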

Second, once golden data sets are created, testing teams may require additional environments or tools to switch the underlying data sources in their environments. For example, testers may want to test against the golden data sets, then run a second time against data that is a replica of production data. Teams operating in cloud environments and using infrastructure-as-code tools like Puppet, Chef, and Ansible can construct and tear down multiple testing environments for these different purposes.
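However the environments are provisioned, a small amount of configuration indirection lets the same test suite point at either the golden data set or a production replica. A hypothetical sketch using SQLAlchemy, with placeholder connection strings:

```python
import os
from sqlalchemy import create_engine

# Placeholder connection strings; an environment variable selects the source.
DATA_SOURCES = {
    "golden": "postgresql://test:test@localhost:5432/golden_ds",
    "prod_replica": "postgresql://test:test@replica-host:5432/analytics",
}

def get_engine():
    """Return a database engine for the source selected by TEST_DATA_SOURCE."""
    source = os.environ.get("TEST_DATA_SOURCE", "golden")
    return create_engine(DATA_SOURCES[source])

# Example usage: TEST_DATA_SOURCE=prod_replica pytest tests/
```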

Last, testing teams need tools to implement A/B testing of data and results. Many teams I know do this manually by writing SQL queries and then comparing the results. If the data sets and tests are simple, this approach may be sufficient. But if multiple points in the data flow need to be tested, you likely need dedicated tools to centralize test queries, automate them, and use reports to validate changes. One tool, QuerySurge, is specifically designed for implementing A/B testing against data flows, databases, and some business intelligence tools.
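For teams doing this manually with SQL, the comparison itself can still be scripted. The sketch below runs the same query against a baseline environment (A) and a post-change environment (B) and reports rows that differ; the connection strings, table, and query are assumptions, and tools such as QuerySurge manage this kind of testing at much larger scale:

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connections for the before (A) and after (B) environments.
ENGINE_A = create_engine("postgresql://test:test@localhost:5432/before_change")
ENGINE_B = create_engine("postgresql://test:test@localhost:5432/after_change")

QUERY = """
    SELECT region, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
"""

def ab_compare(query: str) -> pd.DataFrame:
    """Return rows where the A and B result sets disagree."""
    a = pd.read_sql(query, ENGINE_A)
    b = pd.read_sql(query, ENGINE_B)
    merged = a.merge(b, on="region", how="outer", suffixes=("_a", "_b"))
    return merged[merged["revenue_a"].ne(merged["revenue_b"])]

differences = ab_compare(QUERY)
# An empty result means the change did not alter this output; unexpected
# differences point to where in the data flow to investigate.
print(differences)
```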

Working with subject matter experts efficiently

At some point, you must involve subject matter experts to use new and updated data visualizations and provide feedback. They must help answer questions on whether the analytics is valid and useful to develop insights or aid in data-driven decision-making.

The problem many teams face is getting sufficient time from subject matter experts to participate in this testing. This can be a significant challenge when trying to test and deploy changes frequently.

To use their time efficiently, I recommend three separate activities:

  • Implement as much of the data quality, data lineage, and A/B testing as possible on golden data sets. Before getting subject matter experts involved, make reasonable efforts to validate that raw and calculated data is correct. This needs to be done with confidence, so you can explain and ideally illustrate to subject matter experts that the underlying data, transformations, and calculations are accurate and that they don’t need to invest significant time manually testing them.
  • Design data visualizations to help subject matter experts review and validate the data and analytics. Some visualizations can be outputs from the A/B tests, while others should be visualizations that expose low-level data. When implementing larger-scale data, algorithm, model, or visualization changes, it often helps to have these quality-control data visualizations in place to help subject matter experts perform quick validations.
  • You want subject matter experts to perform user acceptance testing (UAT) on the finalized applications and data visualizations. By the time they reach this step, they should have full confidence that the data and analytics are valid.

This last step is needed to determine whether the visualizations are effective in exploring the data and answering questions: Is the visualization easy to use? Are the correct dimensions available to drill into the data? Does the visualization successfully help answer the questions it was designed to answer?

At this point in the process, you are testing the user experience and ensuring the dashboards and applications are optimized. This critical step can be done far more efficiently when there is understanding and trust in the underlying data and analytics.

Copyright © 2019 IDG Communications, Inc.