Checking data integrity and quality
Integrating heterogeneous data sources is a challenge that ETL developers must constantly confront.
Integrating nonrelational data sources often requires explicit integrity checking, because nonrelational sources do not enforce referential integrity. Data-integrity assurance is not free: it typically requires real estate in the data-staging area and customized ETL processing to enforce business rules that a relational source system would otherwise enforce by design.
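As a minimal sketch of what such a staging-area check might look like, the snippet below validates that each incoming record references a known dimension key before it moves downstream. The table and field names (orders, customer_id) and the sample data are hypothetical, standing in for keys loaded from a real dimension table.

```python
# Hypothetical set of valid keys, loaded from the customer dimension.
valid_customer_ids = {1001, 1002, 1003}

# Records arriving from a nonrelational source into the staging area.
staged_orders = [
    {"order_id": 1, "customer_id": 1001},
    {"order_id": 2, "customer_id": 9999},  # orphan: no matching customer
]

# Enforce referential integrity ourselves, since the source did not.
orphans = [r for r in staged_orders if r["customer_id"] not in valid_customer_ids]
clean = [r for r in staged_orders if r["customer_id"] in valid_customer_ids]
```

In a real system the valid-key set would be queried from the warehouse's dimension table, and the orphans would flow into whatever anomaly-handling rules the business has defined.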
It’s good practice to build integrity checks into the ETL process, and the process must handle data anomalies as automatically as possible. It cannot simply reject every data-integrity failure. Instead, establish business rules for each data-quality failure scenario and implement them in the ETL process.
When erroneous data passes through the process, you generally want one of four outcomes: transform the data on the fly; load the data as is; load the data with an associated code or description capturing its condition or impact; or, if the data is unacceptable, reject it completely and write it to a reject file for investigation.
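A sketch of how those four dispositions might be wired into a rule-driven handler follows. The specific rules, the quality code, and the record shapes are hypothetical; the point is only that each failure scenario maps to exactly one disposition rather than to a blanket rejection.

```python
def handle(record, loaded, rejects):
    """Route one record to a disposition based on hypothetical business rules."""
    amount = record.get("amount")
    if amount is None:
        # Unacceptable: reject completely for investigation.
        rejects.append(record)
    elif amount < 0:
        # Transform on the fly: negatives assumed to be sign errors.
        record["amount"] = abs(amount)
        loaded.append(record)
    elif amount > 1_000_000:
        # Load with an associated code describing the data's condition.
        record["quality_code"] = "SUSPECT_OUTLIER"
        loaded.append(record)
    else:
        # Load the data as is.
        loaded.append(record)

loaded, rejects = [], []
for rec in [{"id": 1, "amount": -50},
            {"id": 2, "amount": None},
            {"id": 3, "amount": 200}]:
    handle(rec, loaded, rejects)
```

In production these rules would come from the business, not the developer, and the reject list would be persisted to the reject file described below.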
Don’t overuse the reject file! Reject files are notorious for becoming dumping grounds for data we’ll “deal with later.” Unless rejected records are processed before the next major load step is allowed to run to completion, the data warehouse and the production system are out of sync.
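One way to enforce that discipline, sketched below under the assumption that the reject file is readable as a record list, is a simple gate the scheduler checks before releasing the next major load step.

```python
def ready_for_next_load(reject_records):
    """Gate the next major load step: only proceed once every rejected
    record from the prior run has been investigated and cleared."""
    return len(reject_records) == 0

# Usage: the scheduler would block the load while rejects remain.
assert ready_for_next_load([]) is True
assert ready_for_next_load([{"order_id": 2}]) is False
```

The mechanism matters less than the rule it enforces: an empty reject file is a precondition of the next load, not a backlog to revisit someday.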