Data cleaning

Data cleaning (also data cleansing or data scrubbing) is an aspect of data processing: the process of detecting and correcting (or removing) corrupt or inaccurate records from a record set, table, or database. Used mainly in databases, the term refers to identifying incomplete, incorrect, inaccurate, or irrelevant parts of the data and then replacing, modifying, or deleting this dirty data.

After cleansing, a data set will be consistent with other similar data sets in the system. The inconsistencies detected or removed may have been originally caused by user entry errors, by corruption in transmission or storage, or by different data dictionary definitions of similar entities in different stores.

Data cleansing differs from data validation in that validation almost invariably means rejecting data from the system at entry, and it is performed at entry time rather than on batches of data.

The actual process of data cleansing may involve removing typographical errors or validating and correcting values against a known list of entities. The validation may be strict (such as rejecting any address that does not have a valid postal code) or fuzzy (such as correcting records that partially match existing, known records).
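
As an illustration, the sketch below contrasts the two styles in Python: strict validation rejects any postal code that does not match a pattern, while fuzzy validation corrects a city name that partially matches a known list. The postal-code pattern, the reference city list, and the similarity cutoff are assumptions made for the example, not part of any particular system.

```python
import re
import difflib

# Illustrative reference data and pattern (assumed for this example).
KNOWN_CITIES = ["Springfield", "Shelbyville", "Capital City"]
POSTAL_CODE_RE = re.compile(r"^\d{5}(-\d{4})?$")  # assumed US-style ZIP pattern


def strict_validate_postal_code(code):
    """Strict validation: reject anything that does not match the pattern."""
    return bool(POSTAL_CODE_RE.match(code.strip()))


def fuzzy_correct_city(city, cutoff=0.8):
    """Fuzzy validation: map a partially matching value to a known record."""
    matches = difflib.get_close_matches(city.strip().title(), KNOWN_CITIES,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None


print(strict_validate_postal_code("12345"))   # True  -> accepted
print(strict_validate_postal_code("1234A"))   # False -> rejected outright
print(fuzzy_correct_city("Springfeild"))      # "Springfield" -> corrected
print(fuzzy_correct_city("Gotham"))           # None -> no close match found
```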

Motivation
Administratively, incorrect or inconsistent data can lead to false conclusions. For instance, the government may want to analyze population census figures to decide which regions require further spending and investment in health services. In this case, it will be important to have access to reliable data to avoid erroneous fiscal decisions.

Data quality
High-quality data needs to pass a set of quality criteria. Those include:


 * Accuracy: an aggregated value over the criteria of integrity, consistency, and density
 * Integrity: an aggregated value over the criteria of completeness and validity
 * Completeness: achieved by correcting data containing anomalies
 * Validity: approximated by the amount of data satisfying integrity constraints
 * Consistency: concerns contradictions and syntactical anomalies
 * Uniformity: directly related to irregularities; concerns whether the data complies with the specified unit of measure
 * Density: the quotient of missing values in the data to the total number of values that ought to be known (a simple computation of density and validity is sketched after this list)
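
As a rough illustration of how some of these criteria can be quantified, the following sketch computes a density score and a validity score for a small table. The column names, the integrity constraint on age, and the data are assumptions made for the example.

```python
import pandas as pd

# Small example table with one missing value and one out-of-range age (assumed data).
df = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol", None],
    "age":  [34, -5, 29, 41],
})

# Density: quotient of missing values to the total number of values that ought to be known.
total_cells = df.size
missing_cells = int(df.isna().sum().sum())
density = missing_cells / total_cells

# Validity: approximated by the share of rows satisfying an integrity constraint
# (here an assumed constraint that age must lie between 0 and 120).
valid_rows = int(df["age"].between(0, 120).sum())
validity = valid_rows / len(df)

print(f"density  = {density:.2f}")   # 1 of 8 cells missing -> 0.12
print(f"validity = {validity:.2f}")  # 3 of 4 rows satisfy the age constraint -> 0.75
```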

The process of data cleansing

 * Data auditing: The data is audited with the use of statistical methods to detect anomalies and contradictions. This eventually gives an indication of the characteristics of the anomalies and their locations.


 * Workflow specification: The detection and removal of anomalies is performed by a sequence of operations on the data known as the workflow. It is specified after the process of auditing the data and is crucial in achieving the end product of high-quality data. In order to achieve a proper workflow, the causes of the anomalies and errors in the data have to be closely considered. For instance, if an anomaly results from typing errors at the data-input stage, the layout of the keyboard can help in identifying likely mistypes and possible corrections. (A minimal sketch of such a workflow follows this list.)


 * Workflow execution: In this stage, the workflow is executed after its specification is complete and its correctness is verified. The implementation of the workflow should be efficient, even on large sets of data, which inevitably poses a trade-off because the execution of a data-cleansing operation can be computationally expensive.


 * Post-processing and controlling: After executing the cleansing workflow, the results are inspected to verify correctness. Data that could not be corrected during execution of the workflow is manually corrected, if possible. The result is a new cycle in the data-cleansing process where the data is audited again to allow the specification of an additional workflow to further cleanse the data by automatic processing.
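
As a minimal sketch of what such a workflow can look like in practice, the example below expresses each cleansing operation as a small function and executes them in sequence, with a simple check standing in for the post-processing step. The specific operations, column names, and data are assumptions made for the illustration.

```python
import pandas as pd

# Example dirty records (assumed data).
df = pd.DataFrame({
    "name": [" alice ", "BOB", "Bob", None],
    "age":  ["34", "41", "41", "29"],
})

# Each cleansing operation takes a DataFrame and returns a cleaned copy.
def normalize_names(d):
    d = d.copy()
    d["name"] = d["name"].str.strip().str.title()
    return d

def cast_age_to_int(d):
    d = d.copy()
    d["age"] = pd.to_numeric(d["age"], errors="coerce").astype("Int64")
    return d

def drop_exact_duplicates(d):
    return d.drop_duplicates()

# Workflow specification: an ordered sequence of operations.
workflow = [normalize_names, cast_age_to_int, drop_exact_duplicates]

# Workflow execution: apply the operations in order.
cleaned = df
for step in workflow:
    cleaned = step(cleaned)

# Post-processing and controlling: re-audit the result before accepting it.
assert cleaned["age"].notna().all(), "some ages could not be parsed"
print(cleaned)
```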

Popular methods used

 * Parsing: Parsing in data cleansing is performed for the detection of syntax errors. A parser decides whether a string of data is acceptable within the allowed data specification. This is similar to the way a parser works with grammars and languages.


 * Data transformation: Data transformation allows the mapping of the data from its given format into the format expected by the appropriate application. This includes value conversions or translation functions, as well as normalizing numeric values to conform to minimum and maximum values.


 * Duplicate elimination: Duplicate detection requires an algorithm for determining whether data contains duplicate representations of the same entity. Usually, data is sorted by a key that would bring duplicate entries closer together for faster identification.


 * Statistical methods: By analyzing the data using values such as the mean, standard deviation, and range, or using clustering algorithms, it is possible for an expert to find values that are unexpected and thus potentially erroneous. Although the correction of such data is difficult since the true value is not known, it can be resolved by setting the values to an average or other statistical value. Statistical methods can also be used to handle missing values, which can be replaced by one or more plausible values that are usually obtained by extensive data augmentation algorithms. (A short sketch combining these methods follows this list.)
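
The short sketch below ties the four methods above together on a toy record set: regular expressions parse a date field, a transformation maps it to a single format, duplicates are detected after sorting on a key so that matching entries become adjacent, and a simple mean/standard-deviation rule flags an unexpected value. The field names, formats, and thresholds are assumptions made for the example.

```python
import re
import statistics

# Toy records (assumed data): (customer id, signup date, monthly spend).
records = [
    ("C001", "14/07/2021", 118.0),
    ("C001", "2021-07-14", 118.0),   # duplicate of the previous record
    ("C002", "2021-13-40", 9500.0),  # unparseable date and an outlying spend
    ("C003", "2021-07-14", 120.0),
    ("C004", "2021-08-02", 95.0),
    ("C005", "02/08/2021", 130.0),
    ("C006", "2021-09-01", 110.0),
]

ISO_DATE = re.compile(r"^(\d{4})-(\d{2})-(\d{2})$")
EU_DATE = re.compile(r"^(\d{2})/(\d{2})/(\d{4})$")

def parse_and_transform_date(value):
    """Parsing: detect the syntax; transformation: map it to ISO format."""
    if ISO_DATE.match(value):
        y, m, d = ISO_DATE.match(value).groups()
    elif EU_DATE.match(value):
        d, m, y = EU_DATE.match(value).groups()
    else:
        return None                      # syntax error detected by the parser
    if not (1 <= int(m) <= 12 and 1 <= int(d) <= 31):
        return None                      # syntactically valid but impossible date
    return f"{y}-{m}-{d}"

cleaned = [(cid, parse_and_transform_date(date), spend)
           for cid, date, spend in records]

# Duplicate elimination: sort on a key so duplicate entries become adjacent.
cleaned.sort(key=lambda r: (r[0], r[1] or ""))
deduped = [r for i, r in enumerate(cleaned) if i == 0 or r != cleaned[i - 1]]

# Statistical method: flag spends more than two standard deviations from the mean.
spends = [spend for _, _, spend in deduped]
mean, stdev = statistics.mean(spends), statistics.pstdev(spends)
outliers = [r for r in deduped if abs(r[2] - mean) > 2 * stdev]

print("deduplicated records:", deduped)
print("suspicious records:  ", outliers)
```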

Existing tools
Before computer automation, data about individuals or organizations was maintained and secured as paper records, dispersed in separate business or organizational units. Information systems concentrate data in computer files that can potentially be accessed by large numbers of people and by groups outside the organization.

Google Refine (now OpenRefine) and Data Ladder are just two examples of data-cleansing tools.

Criticism of existing tools and processes
The value of, and current approaches to, data cleansing have come under criticism, with some parties claiming large costs and low return on investment from major data-cleansing initiatives.

The main reasons cited are:


 * Project costs: costs typically in the hundreds of thousands of dollars
 * Time: insufficient time to deal with large-scale data-cleansing software
 * Security: concerns over sharing information, giving an application access across systems, and effects on legacy systems

Challenges and problems

 * Error correction and loss of information: The most challenging problem within data cleansing remains the correction of values to remove duplicates and invalid entries. In many cases, the available information on such anomalies is limited and insufficient to determine the necessary transformations or corrections, leaving the deletion of such entries as the only plausible solution. The deletion of data, though, leads to loss of information; this loss can be particularly costly if there is a large amount of deleted data.


 * Maintenance of cleansed data: Data cleansing is an expensive and time-consuming process. So after having performed data cleansing and achieving a data collection free of errors, one would want to avoid the re-cleansing of data in its entirety after some values in data collection change. The process should only be repeated on values that have changed; this means that a cleansing lineage would need to be kept, which would require efficient data collection and management techniques.


 * Data cleansing in virtually integrated environments: In virtually integrated sources like IBM’s DiscoveryLink, the cleansing of data has to be performed every time the data is accessed, which considerably decreases the response time and efficiency.


 * Data-cleansing framework: In many cases, it will not be possible to derive a complete data-cleansing graph to guide the process in advance. This makes data cleansing an iterative process involving significant exploration and interaction, which may require a framework in the form of a collection of methods for error detection and elimination in addition to data auditing. This can be integrated with other data-processing stages like integration and maintenance.