Unit 2 Data Preprocessing (1)
• Data Quality
• Data Cleaning
• Data Integration
• Data Reduction
Data Quality: Why Preprocess the Data?
Major Tasks in Data Preprocessing
• Data cleaning
• Fill in missing values, smooth noisy data, identify or remove outliers,
and resolve inconsistencies
• Data integration
• Integration of multiple databases, data cubes, or files
• Data reduction
• Dimensionality reduction
• Numerosity reduction
• Data compression
• Data transformation and data discretization
• Normalization
• Concept hierarchy generation
Data Cleaning
• Data in the Real World Is Dirty: lots of potentially incorrect data, e.g., due to faulty instruments, human or
computer error, or transmission errors
• incomplete: lacking attribute values, lacking certain attributes of interest, or containing only
aggregate data
• e.g., Occupation=“ ” (missing data)
• noisy: containing noise, errors, or outliers
• e.g., Salary=“−10” (an error)
• inconsistent: containing discrepancies in codes or names, e.g.,
• Age=“42”, Birthday=“03/07/2010”
• Was rating “1, 2, 3”, now rating “A, B, C”
• discrepancy between duplicate records
• Intentional (e.g., disguised missing data)
• Jan. 1 as everyone’s birthday?
Incomplete (Missing) Data
What is missing data?
The problem of missing data is prevalent in most research areas, and it causes various problems, such as loss of
information and biased results.
• Missing Completely At Random (MCAR): the values are MCAR if the missingness is completely unrelated to both the observed and the missing values.
• An example of MCAR is a weighing scale that ran out of batteries.
• Missing At Random (MAR): the missingness is related to the observed data but not to the missing values themselves.
• Example: females feel shy about telling their age, so age is missing more often for female respondents.
• Missing Not At Random (MNAR): data that is neither MAR nor MCAR; the probability of being missing depends on the missing values themselves (and possibly also on the observed data).
• Examples: people with the lowest education are missing on education; the sickest people are most likely to drop out of the study; obese people are likely to skip the weight column.
Ways of handling missing data
When many tuples have no recorded value for one or more attributes, the following methods can be used to
address the missing values:
1. Ignore the tuple: This is usually done when the class label is missing (assuming the mining
task involves classification).
   • This method is not very effective unless the tuple contains several attributes with missing
values. It is especially poor when the percentage of missing values per attribute varies considerably.
   • By ignoring the tuple, we do not make use of the remaining attributes' values in the tuple,
even though such data could have been useful to the task at hand.
2. Fill in the missing value manually: In general, this approach is time consuming and may not
be feasible given a large data set with many missing values.
3. Use a global constant to fill in the missing value: Replace all missing attribute values by
the same constant, such as a label like “Unknown” or −∞. This method is simple, but it is
not foolproof.
4. Use a measure of central tendency for the attribute (e.g., the mean or median) to fill in
the missing value: For normal (symmetric) data distributions, the mean can be used. If
the data distribution for a given class is skewed, the median value is a better choice.
5. Imputation: predict values to fill in the missing values. This can be achieved by imputation
methods such as linear regression or linear interpolation.
6. Imputation using algorithms that support missing values: Many machine learning
algorithms fail if the dataset contains missing values, but algorithms such as k-nearest neighbors
and Naive Bayes can work with data that contains missing values.
NOTE: You may end up building a biased machine learning model, which will lead to incorrect results, if the
missing values are not handled properly.
Common approaches and their main drawbacks:
• Deleting rows (list-wise deletion)
  • Cons: loss of information and data; works poorly if the percentage of missing values is high (say 30%) compared to the whole dataset.
• Replacing with mean/median/mode
  • Cons: adds variance and bias.
• Predicting the missing values
  • Cons: the predicted values are considered only a proxy for the true values; bias also arises when an incomplete conditioning set is used for a categorical variable.
• Using algorithms which support missing values
  • Cons: can be a very time-consuming process, which is critical in data mining where large databases are being extracted.
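As a concrete illustration of the simpler strategies above, here is a minimal pandas sketch; the DataFrame and its column names are invented for this example.

```python
import numpy as np
import pandas as pd

# Toy data with missing values (NaN/None); the columns are hypothetical.
df = pd.DataFrame({
    "age":    [25, np.nan, 47, 51, np.nan, 33],
    "salary": [50000, 62000, np.nan, 71000, 58000, np.nan],
    "city":   ["Pune", None, "Delhi", "Pune", "Mumbai", None],
})

dropped   = df.dropna()                                  # 1. ignore (drop) tuples with missing values
constant  = df.fillna({"city": "Unknown"})               # 3. fill in a global constant
mean_fill = df.fillna({"age": df["age"].mean()})         # 4. mean, for roughly symmetric distributions
med_fill  = df.fillna({"salary": df["salary"].median()}) # 4. median, for skewed distributions
interp    = df["salary"].interpolate()                   # 5. simple imputation by linear interpolation

print(dropped.shape, mean_fill["age"].tolist())
```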
Noisy Data
• Noise: random error or variance in a measured variable
• Noisy data is meaningless data. The term has often been used as a synonym for corrupt data.
• Various data visualization techniques like Boxplot and Scatter plots can be used to identify outliers
which may represent noise.
How to Handle Noisy Data?
• Binning
• first sort data and partition into (equal-frequency) bins
• then one can smooth by bin means, smooth by bin median,
smooth by bin boundaries, etc.
• Regression
• smooth by fitting the data into regression functions
• Clustering
• detect and remove outliers
• Combined computer and human inspection
• detect suspicious values and check by human (e.g., deal with
possible outliers)
Binning
Smoothing by bin means: each value in a bin is replaced by the mean value of the bin.
Smoothing by bin medians: each bin value is replaced by the bin median.
Smoothing by bin boundaries: the minimum and maximum values in a given bin are identified
as the bin boundaries; each bin value is then replaced by the closest boundary value.
For example, the data for price can first be sorted and then partitioned into equal-frequency
bins; a full worked example appears later under “Binning Methods for Data Smoothing.”
Regression
Data smoothing can also be done by regression, a technique that conforms
data values to a function.
Linear regression involves finding the “best” line to fit two attributes (or
variables) so that one attribute can be used to predict the other.
Multiple linear regression attempts to model the relationship between
two or more explanatory variables and a response variable by fitting a
linear equation to observed data. Every value of the independent
variable x is associated with a value of the dependent variable y.
Outlier Analysis
Outliers may be detected by clustering, for example,
where similar values are organized into groups, or
“clusters.”
Correlation Analysis (Nominal Data)
• Χ² (chi-square) test:
  χ² = Σ [ (Observed − Expected)² / Expected ]  (summed over all cells of the contingency table)
• The larger the Χ2 value, the more likely the variables are related
• The cells that contribute the most to the Χ2 value are those
whose actual count is very different from the expected count
• Correlation does not imply causality
• # of hospitals and # of car-theft in a city are correlated
• Both are causally linked to the third variable: population
Chi-Square Calculation: An Example
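The slide's worked table is not reproduced in these notes. The sketch below computes the statistic for a 2x2 contingency table; the counts follow the well-known "likes science fiction" vs. "plays chess" textbook example, so substitute the actual counts if the slide used different ones.

```python
import numpy as np

# Contingency table of observed counts: rows = likes science fiction (yes/no),
# columns = plays chess (yes/no). Counts are from the classic textbook example.
observed = np.array([[250.0, 200.0],
                     [ 50.0, 1000.0]])

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
grand_total = observed.sum()

# Expected count of each cell under independence: row_total * col_total / grand_total.
expected = row_totals @ col_totals / grand_total

# Chi-square statistic: sum over all cells of (observed - expected)^2 / expected.
chi2 = ((observed - expected) ** 2 / expected).sum()
print(round(chi2, 2))  # ~507.93; a large value suggests the two attributes are correlated
```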
[Figure: scatter plots showing correlation values ranging from −1 to 1.]
Correlation (viewed as linear relationship)
Correlation coefficient (Pearson's product-moment coefficient):
  r(A,B) = Σ (aᵢ − Ā)(bᵢ − B̄) / (n·σ_A·σ_B) = (Σ aᵢbᵢ − n·Ā·B̄) / (n·σ_A·σ_B)
where n is the number of tuples, Ā and B̄ are the respective mean (expected) values of A and B,
and σ_A and σ_B are the respective standard deviations of A and B.
Covariance (Numeric Data)
• Covariance: Cov(A,B) = E[(A − Ā)(B − B̄)] = E[A·B] − Ā·B̄
• Positive covariance: If Cov(A,B) > 0, then A and B both tend to be larger than their
expected values.
• Negative covariance: If Cov(A,B) < 0, then when A is larger than its expected value, B is likely
to be smaller than its expected value.
• Independence: If A and B are independent, then Cov(A,B) = 0, but the converse is not true:
  • Some pairs of random variables may have a covariance of 0 but not be independent. Only
under some additional assumptions (e.g., the data follow multivariate normal
distributions) does a covariance of 0 imply independence.
Co-Variance: An Example
• Suppose two stocks A and B have the following values in one week: (2, 5), (3, 8),
(5, 10), (4, 11), (6, 14).
• Question: If the stocks are affected by the same industry trends, will their prices
rise or fall together?
• E(A) = (2 + 3 + 5 + 4 + 6)/ 5 = 20/5 = 4
• E(B) = (5 + 8 + 10 + 11 + 14) /5 = 48/5 = 9.6
• Cov(A,B) = (2×5 + 3×8 + 5×10 + 4×11 + 6×14)/5 − 4 × 9.6 = 42.4 − 38.4 = 4
• Thus, A and B rise together, since Cov(A,B) > 0.
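A minimal NumPy check of the example above; the population covariance E[AB] − E[A]E[B] is computed explicitly, since np.cov defaults to the sample covariance.

```python
import numpy as np

# Weekly prices of the two stocks from the example.
A = np.array([2, 3, 5, 4, 6], dtype=float)
B = np.array([5, 8, 10, 11, 14], dtype=float)

mean_A, mean_B = A.mean(), B.mean()        # 4.0 and 9.6
cov_AB = np.mean(A * B) - mean_A * mean_B  # E[A*B] - E[A]E[B] = 4.0

# Pearson correlation coefficient using population standard deviations.
r_AB = cov_AB / (A.std() * B.std())

print(mean_A, mean_B, cov_AB, round(r_AB, 3))
```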
Data Reduction Strategies
• Data reduction: Obtain a reduced representation of the data set that is much
smaller in volume but yet produces the same (or almost the same) analytical
results
• Why data reduction? — A database/data warehouse may store terabytes of
data. Complex data analysis may take a very long time to run on the complete
data set.
• Data reduction strategies
• Dimensionality reduction, e.g., remove unimportant attributes
• Wavelet transforms
• Principal Components Analysis (PCA)
• Feature subset selection, feature creation
• Numerosity reduction (some simply call it: Data Reduction)
• Regression and Log-Linear Models
• Histograms, clustering, sampling
• Data cube aggregation
• Data compression
Data Reduction 1: Dimensionality Reduction
• Curse of dimensionality
• When dimensionality increases, data becomes increasingly sparse
• Density and distance between points, which are critical to clustering and outlier
analysis, become less meaningful
• The possible combinations of subspaces will grow exponentially
• Dimensionality reduction
• Avoid the curse of dimensionality
• Help eliminate irrelevant features and reduce noise
• Reduce time and space required in data mining
• Allow easier visualization
• Dimensionality reduction techniques
• Wavelet transforms
• Principal Component Analysis
• Supervised and nonlinear techniques (e.g., feature selection)
Mapping Data to a New Space
Fourier transform
Wavelet transform
What Is Wavelet Transform?
• Decomposes a signal into
different frequency subbands
• Applicable to n-dimensional
signals
• Data are transformed to preserve
relative distance between objects
at different levels of resolution
• Allow natural clusters to become
more distinguishable
• Used for image compression
Wavelet Transformation
[Figure: Haar-2 and Daubechies-4 wavelets.]
• Discrete wavelet transform (DWT) for linear signal processing,
multi-resolution analysis
• Compressed approximation: store only a small fraction of the
strongest of the wavelet coefficients
• Similar to discrete Fourier transform (DFT), but better lossy
compression, localized in space
• Method:
• Length, L, must be an integer power of 2 (padding with 0’s, when
necessary)
• Each transform has 2 functions: smoothing, difference
• Applies to pairs of data, resulting in two sets of data of length L/2
• Applies the two functions recursively, until reaching the desired length
Wavelet Decomposition
• Wavelets: A math tool for space-efficient hierarchical decomposition of functions
• S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to S^ = [2.75, −1.25, 0.5, 0, 0, −1, −1, 0]
• Compression: many small detail coefficients can be replaced by 0’s, and only the
significant coefficients are retained
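A short sketch of the pairwise smoothing/differencing scheme described on the previous slide; it reproduces the transformed vector S^ above (this is the unnormalized Haar transform, and the input length is assumed to be a power of 2).

```python
def haar_dwt(values):
    """Unnormalized Haar DWT: recursive pairwise averaging (smoothing) and differencing."""
    data = [float(v) for v in values]
    details = []
    while len(data) > 1:
        averages = [(data[i] + data[i + 1]) / 2 for i in range(0, len(data), 2)]
        diffs    = [(data[i] - data[i + 1]) / 2 for i in range(0, len(data), 2)]
        details = diffs + details   # prepend so coarser-level detail coefficients come first
        data = averages
    return data + details           # [overall average] followed by detail coefficients

print(haar_dwt([2, 2, 0, 2, 3, 5, 4, 4]))
# -> [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]
```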
Haar Wavelet Coefficients
[Figure: hierarchical decomposition structure (a.k.a. “error tree”) for the original data 2, 2, 0, 2, 3, 5, 4, 4, showing the coefficients 2.75, −1.25, 0.5, 0, 0, −1, −1, 0 and each coefficient's “support”.]
Why Wavelet Transform?
• Use hat-shaped filters
• Emphasize regions where points cluster
• Suppress weaker information at their boundaries
• Effective removal of outliers
• Insensitive to noise, insensitive to input order
• Multi-resolution
• Detect arbitrary shaped clusters at different scales
• Efficient
• Complexity O(N)
• Only applicable to low dimensional data
Principal Component Analysis (PCA)
[Figure: data points plotted on the original axes x1 and x2, with the principal component directions overlaid.]
Principal Component Analysis (Steps)
• Given N data vectors in n dimensions, find k ≤ n orthogonal vectors
(principal components) that can best be used to represent the data
• Normalize input data: Each attribute falls within the same range
• Compute k orthonormal (unit) vectors, i.e., principal components
• Each input data (vector) is a linear combination of the k principal
component vectors
• The principal components are sorted in order of decreasing “significance”
or strength
• Since the components are sorted, the size of the data can be reduced by
eliminating the weak components, i.e., those with low variance (i.e., using
the strongest principal components, it is possible to reconstruct a good
approximation of the original data)
• Works for numeric data only
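A compact sketch of these steps using NumPy's SVD (scikit-learn's PCA would give an equivalent result); the data matrix is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))               # 100 data vectors in n = 5 dimensions (illustrative)

# Step 1: normalize/center each attribute.
Xc = X - X.mean(axis=0)

# Step 2: orthonormal principal components from the SVD of the centered data.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_variance = S ** 2 / (len(X) - 1)  # already sorted in decreasing order

# Step 3: keep the k strongest components and project the data onto them.
k = 2
Z = Xc @ Vt[:k].T                           # reduced representation, shape (100, k)

# Step 4: reconstruct an approximation of the original data from the k components.
X_approx = Z @ Vt[:k] + X.mean(axis=0)
print(explained_variance.round(3), X_approx.shape)
```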
Attribute Subset Selection
• Another way to reduce dimensionality of data
• Redundant attributes
• Duplicate much or all of the information contained in one or more other
attributes
• E.g., purchase price of a product and the amount of sales tax paid
• Irrelevant attributes
• Contain no information that is useful for the data mining task at hand
• E.g., students' ID is often irrelevant to the task of predicting students' GPA
Heuristic Search in Attribute Selection
• There are 2^d possible attribute combinations of d attributes
• Typical heuristic attribute selection methods:
• Best single attribute under the attribute independence
assumption: choose by significance tests
• Best step-wise feature selection (see the sketch after this list):
• The best single attribute is picked first
• Then the next best attribute conditioned on the first, ...
• Step-wise attribute elimination:
• Repeatedly eliminate the worst attribute
• Best combined attribute selection and elimination
• Optimal branch and bound:
• Use attribute elimination and backtracking
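A greedy forward (step-wise) selection sketch, as mentioned in the list above; the scoring function is a hypothetical stand-in, and in practice it would be a significance test or a cross-validated model score.

```python
from typing import Callable, Iterable, List, Set

def forward_selection(attributes: Iterable[str],
                      score: Callable[[Set[str]], float],
                      k: int) -> List[str]:
    """Best step-wise feature selection: repeatedly add the attribute that most
    improves the score of the currently selected subset."""
    remaining = set(attributes)
    selected: List[str] = []
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda a: score(set(selected) | {a}))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: the hypothetical score simply prefers subsets containing "income" and "age".
toy_score = lambda subset: len(subset & {"income", "age"}) - 0.01 * len(subset)
print(forward_selection(["id", "age", "income", "zip"], toy_score, k=2))
```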
Attribute Creation (Feature Generation)
• Create new attributes (features) that can capture the important information in a
data set more effectively than the original ones
• Three general methodologies
• Attribute extraction
• Domain-specific
• Mapping data to new space (see: data reduction)
• E.g., Fourier transformation, wavelet transformation, manifold approaches (not covered)
• Attribute construction
• Combining features (see: discriminative frequent patterns in Chapter 7)
• Data discretization
Data Reduction 2: Numerosity Reduction
• Reduce data volume by choosing alternative, smaller forms of
data representation
• Parametric methods (e.g., regression)
• Assume the data fits some model, estimate model
parameters, store only the parameters, and discard the
data (except possible outliers)
• Ex.: Log-linear models: obtain the value at a point in m-D
space as the product over appropriate marginal subspaces
• Non-parametric methods
• Do not assume models
• Major families: histograms, clustering, sampling, …
Parametric Data Reduction: Regression and
Log-Linear Models
• Linear regression
• Data modeled to fit a straight line
• Often uses the least-squares method to fit the line (see the sketch after this list)
• Multiple regression
• Allows a response variable Y to be modeled as a linear
function of a multidimensional feature vector
• Log-linear model
• Approximates discrete multidimensional probability
distributions
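A least-squares sketch of the parametric idea above: instead of storing 200 noisy (x, y) points, only the fitted slope and intercept are kept (the data are synthetic and purely illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = 3.0 * x + 5.0 + rng.normal(scale=2.0, size=x.size)  # noisy synthetic data, y ~ w*x + b

# Fit the line by least squares and keep only the two parameters.
w, b = np.polyfit(x, y, deg=1)
y_hat = w * x + b                                       # values reconstructed from the model

print(round(w, 2), round(b, 2))                         # close to the true 3.0 and 5.0
```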
Regression Analysis
[Figure: data points with a fitted regression line in the x–y plane.]
Histogram Analysis
• Partitioning rules:
• Equal-width: equal bucket range
Clustering
• Partition data set into clusters based on similarity, and store
cluster representation (e.g., centroid and diameter) only
• Can be very effective if data is clustered but not if data is
“smeared”
• Can have hierarchical clustering and be stored in multi-
dimensional index tree structures
• There are many choices of clustering definitions and clustering
algorithms
• Cluster analysis will be studied in depth in Chapter 10
Sampling
• Sampling: obtaining a small sample s to represent the whole
data set N
• Allow a mining algorithm to run in complexity that is potentially
sub-linear to the size of the data
• Key principle: Choose a representative subset of the data
• Simple random sampling may have very poor performance in
the presence of skew
• Develop adaptive sampling methods, e.g., stratified
sampling:
• Note: Sampling may not reduce database I/Os (page at a time)
Types of Sampling
• Simple random sampling
• There is an equal probability of selecting any particular item
• Sampling without replacement
• Once an object is selected, it is removed from the population
• Sampling with replacement
• A selected object is not removed from the population
• Stratified sampling:
• Partition the data set, and draw samples from each partition
(proportionally, i.e., approximately the same percentage of
the data)
• Used in conjunction with skewed data
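A pandas sketch of the three variants above; the DataFrame and the "region" strata column are invented for the example.

```python
import pandas as pd

df = pd.DataFrame({
    "value": range(1000),
    "region": ["north"] * 700 + ["south"] * 250 + ["west"] * 50,  # skewed strata
})

# Simple random sampling without and with replacement.
srswor = df.sample(n=100, replace=False, random_state=0)
srswr  = df.sample(n=100, replace=True,  random_state=0)

# Stratified sampling: draw ~10% from every region so small strata stay represented.
stratified = df.groupby("region", group_keys=False).sample(frac=0.10, random_state=0)

print(len(srswor), len(srswr), stratified["region"].value_counts().to_dict())
```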
Sampling: With or without Replacement
[Figure: samples drawn from the raw data with and without replacement.]
Sampling: Cluster or Stratified Sampling
Data Cube Aggregation
Data Compression
[Figure: original data and its approximated (compressed) representation.]
Data Transformation
• A function that maps the entire set of values of a given attribute to a new
set of replacement values s.t. each old value can be identified with one of
the new values
• Methods
• Smoothing: Remove noise from data
• Attribute/feature construction
• New attributes constructed from the given ones
• Aggregation: Summarization, data cube construction
• Normalization: Scaled to fall within a smaller, specified range
• min-max normalization
• z-score normalization
• normalization by decimal scaling
• Discretization: Concept hierarchy climbing
Normalization
• Min-max normalization: to [new_min_A, new_max_A]
  v' = ((v − min_A) / (max_A − min_A)) × (new_max_A − new_min_A) + new_min_A
  • Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to
    (73,600 − 12,000) / (98,000 − 12,000) × (1.0 − 0) + 0 = 0.716
• Z-score normalization (μ_A: mean, σ_A: standard deviation of attribute A):
  v' = (v − μ_A) / σ_A
  • Ex. Let μ = 54,000 and σ = 16,000. Then $73,600 is mapped to
    (73,600 − 54,000) / 16,000 = 1.225
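A small sketch that reproduces both examples above (the income values and parameters are taken directly from the examples).

```python
def min_max(v, vmin, vmax, new_min=0.0, new_max=1.0):
    """Min-max normalization of v from [vmin, vmax] to [new_min, new_max]."""
    return (v - vmin) / (vmax - vmin) * (new_max - new_min) + new_min

def z_score(v, mu, sigma):
    """Z-score normalization of v given mean mu and standard deviation sigma."""
    return (v - mu) / sigma

print(round(min_max(73_600, 12_000, 98_000), 3))  # 0.716
print(round(z_score(73_600, 54_000, 16_000), 3))  # 1.225
```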
Data Discretization Methods
• Typical methods: All the methods can be applied recursively
• Binning
• Top-down split, unsupervised
• Histogram analysis
• Top-down split, unsupervised
• Clustering analysis (unsupervised, top-down split or bottom-
up merge)
• Decision-tree analysis (supervised, top-down split)
• Correlation (e.g., χ²) analysis (unsupervised, bottom-up merge)
Simple Discretization: Binning
Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
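A short sketch that reproduces the example above (equal-frequency bins of four values, then smoothing by bin means and by bin boundaries).

```python
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]  # already sorted

bin_size = 4
bins = [prices[i:i + bin_size] for i in range(0, len(prices), bin_size)]

# Smoothing by bin means (rounded, as in the example).
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: each value moves to the closer of its bin's min/max.
by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b] for b in bins]

print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```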
Discretization Without Using Class Labels
(Binning vs. Clustering)
Discretization by Classification &
Correlation Analysis
• Classification (e.g., decision tree analysis)
• Supervised: Given class labels, e.g., cancerous vs. benign
• Using entropy to determine split point (discretization point)
Concept Hierarchy Generation
• Concept hierarchy organizes concepts (i.e., attribute values) hierarchically and
is usually associated with each dimension in a data warehouse
• Concept hierarchies facilitate drilling and rolling in data warehouses to view
data in multiple granularity
• Concept hierarchy formation: Recursively reduce the data by collecting and
replacing low level concepts (such as numeric values for age) by higher level
concepts (such as youth, adult, or senior)
• Concept hierarchies can be explicitly specified by domain experts and/or data
warehouse designers
• Concept hierarchy can be automatically formed for both numeric and nominal
data. For numeric data, use discretization methods shown.
Concept Hierarchy Generation
for Nominal Data
• Specification of a partial/total ordering of attributes explicitly at
the schema level by users or experts
• street < city < state < country
• Specification of a hierarchy for a set of values by explicit data
grouping
• {Urbana, Champaign, Chicago} < Illinois
• Specification of only a partial set of attributes
• E.g., only street < city, not others
• Automatic generation of hierarchies (or attribute levels) by the
analysis of the number of distinct values
• E.g., for a set of attributes: {street, city, state, country}
Automatic Concept Hierarchy Generation
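The usual heuristic here is that the attribute with the fewest distinct values is placed at the top of the hierarchy and the one with the most at the bottom. A minimal sketch, using a hypothetical location table:

```python
import pandas as pd

# Hypothetical location data; in practice this would come from a warehouse dimension table.
df = pd.DataFrame({
    "street":  ["Main St", "Oak Ave", "Green St", "Pine Rd", "Elm St", "Lake Dr"],
    "city":    ["Urbana", "Champaign", "Urbana", "Chicago", "Chicago", "Madison"],
    "state":   ["Illinois", "Illinois", "Illinois", "Illinois", "Illinois", "Wisconsin"],
    "country": ["USA", "USA", "USA", "USA", "USA", "USA"],
})

# Count distinct values per attribute; fewer distinct values -> higher level in the hierarchy.
distinct = df.nunique().sort_values()
hierarchy = " < ".join(reversed(distinct.index.tolist()))

print(distinct.to_dict())  # {'country': 1, 'state': 2, 'city': 4, 'street': 6}
print(hierarchy)           # street < city < state < country (lowest level first)
```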