Data Mining: Concepts and Techniques
Classification and Prediction
What is classification? What is prediction?
Classification vs. Prediction
Classification:
predicts categorical class labels
constructs a model from the training set and the values (class labels) of a classifying attribute, and uses the model to classify new data
Prediction:
models continuous-valued functions, i.e., predicts unknown or missing values
Typical applications:
credit approval
target marketing
medical diagnosis
Classification—A Two-Step Process
Model construction: describing a set of predetermined classes
Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
The set of tuples used for model construction is the training set
The model is represented as classification rules, decision trees, or mathematical formulae
Model usage: classifying future or unknown objects
Estimate the accuracy of the model
The known label of each test sample is compared with the classified result from the model
The accuracy rate is the percentage of test set samples that are correctly classified by the model
The test set is independent of the training set; otherwise overfitting will occur
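As a hedged illustration of the two steps, the sketch below constructs a classifier on a training set and then estimates its accuracy on an independent test set. scikit-learn, the iris data, and the decision-tree learner are assumptions used only for illustration, not part of the slides.

```python
# Minimal sketch of the two-step process: model construction, then model usage
# with accuracy estimation on an independent test set.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Step 1: model construction on the training set only
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2: model usage -- classify the independent test set and estimate accuracy
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```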
Classification Process (1): Model Construction
The training data are fed to a classification algorithm, which constructs the classifier (model):

NAME   RANK            YEARS   TENURED
Mike   Assistant Prof  3       no
Mary   Assistant Prof  7       yes
Bill   Professor       2       yes
Jim    Associate Prof  7       yes
Dave   Assistant Prof  6       no
Anne   Associate Prof  3       no

Learned model: IF rank = 'professor' OR years > 6 THEN tenured = 'yes'
Classification Process (2): Use the Model in Prediction
The classifier is applied to the testing data (to estimate accuracy) and then to unseen data, e.g., (Jeff, Professor, 4) -> Tenured?

NAME     RANK            YEARS   TENURED
Tom      Assistant Prof  2       no
Merlisa  Associate Prof  7       no
George   Professor       5       yes
Joseph   Assistant Prof  7       yes
Supervised vs. Unsupervised Learning
Supervised learning (classification)
Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
New data are classified based on the training set
Unsupervised learning (clustering)
The class labels of the training data are unknown
Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data
Issues regarding classification and prediction
Issues regarding classification and prediction (1): Data Preparation
Data cleaning
Preprocess data in order to reduce noise and
handle missing values
Relevance analysis (feature selection)
Remove the irrelevant or redundant attributes
Data transformation
Generalize and/or normalize data
Issues regarding classification and prediction (2): Evaluating Classification Methods
Predictive accuracy (correctly predicting the class of new data)
Speed and scalability
time to construct the model
time to use the model
Robustness
handling noise and missing values
Scalability
efficiency for disk-resident databases
Interpretability
understanding and insight provided by the model
Goodness of rules
e.g., decision tree size
Classification by decision tree induction
Classification by Decision Tree Induction
Decision tree
A flow-chart-like tree structure
Internal node denotes a test on an attribute
Branch represents an outcome of the test
Leaf nodes represent class labels or class distribution
Decision tree generation consists of two phases
Tree construction
At start, all the training examples are at the root
Partition examples recursively based on selected attributes
Tree pruning
Identify and remove branches that reflect noise or outliers
Use of decision tree: classifying an unknown sample
Test the attribute values of the sample against the decision tree
Training Dataset

age     income  student credit_rating  buys_computer
<=30    high    no      fair           no
<=30    high    no      excellent      no
31..40  high    no      fair           yes
>40     medium  no      fair           yes
>40     low     yes     fair           yes
>40     low     yes     excellent      no
31..40  low     yes     excellent      yes
<=30    medium  no      fair           no
<=30    low     yes     fair           yes
>40     medium  yes     fair           yes
<=30    medium  yes     excellent      yes
31..40  medium  no      excellent      yes
31..40  high    yes     fair           yes
>40     medium  no      excellent      no
Output: A Decision Tree for "buys_computer"

age?
  <=30: student?
    no  -> buys_computer = no
    yes -> buys_computer = yes
  31..40 -> buys_computer = yes
  >40: credit_rating?
    excellent -> buys_computer = no
    fair      -> buys_computer = yes
Algorithm for Decision Tree Induction
Basic algorithm (a greedy algorithm)
Tree is constructed in a top-down, recursive, divide-and-conquer manner
At start, all the training examples are at the root
Attributes are categorical (if continuous-valued, they are discretized in advance)
Examples are partitioned recursively based on selected attributes
Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
Conditions for stopping partitioning
All samples for a given node belong to the same class
There are no remaining attributes for further partitioning – majority voting is employed for classifying the leaf
There are no samples left
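The following sketch illustrates this greedy, divide-and-conquer construction under the assumptions above (categorical attributes, information gain as the selection measure, no pruning). The helper names and the dict-based tree representation are illustrative choices, not from the slides.

```python
# ID3-style top-down decision-tree induction on categorical attributes.
from collections import Counter
from math import log2

def info(labels):
    """Expected information I(...) of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def build_tree(rows, labels, attributes):
    if len(set(labels)) == 1:                 # all samples belong to one class
        return labels[0]
    if not attributes:                        # no attributes left: majority vote
        return Counter(labels).most_common(1)[0][0]
    def gain(a):                              # information gain of attribute a
        e = 0.0
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            e += len(sub) / len(labels) * info(sub)
        return info(labels) - e
    best = max(attributes, key=gain)          # greedy attribute selection
    node = {}
    for v in set(r[best] for r in rows):      # partition recursively on best
        sub_rows = [r for r in rows if r[best] == v]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == v]
        node[(best, v)] = build_tree(sub_rows, sub_labels,
                                     [a for a in attributes if a != best])
    return node

rows = [{"age": "<=30", "student": "no"}, {"age": "<=30", "student": "yes"},
        {"age": "31..40", "student": "no"}]
print(build_tree(rows, ["no", "yes", "yes"], ["age", "student"]))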
Attribute Selection Measure
Information gain
All attributes are assumed to be categorical
Can be modified for continuous-valued attributes
Information Gain
Select the attribute with the highest information gain
Assume there are two classes, P and N
Let the set of examples S contain p elements of class P and n elements of class N
The amount of information needed to decide if an arbitrary example in S belongs to P or N is defined as
I(p, n) = -\frac{p}{p+n} \log_2 \frac{p}{p+n} - \frac{n}{p+n} \log_2 \frac{n}{p+n}
Information Gain in Decision Tree Induction
Assume that using attribute A, a set S will be partitioned into sets {S1, S2, ..., Sv}
If Si contains pi examples of P and ni examples of N, the entropy, or the expected information needed to classify objects in all subtrees Si, is
E(A) = \sum_{i=1}^{\nu} \frac{p_i + n_i}{p + n} I(p_i, n_i)
The encoding information that would be gained by branching on A is
Gain(A) = I(p, n) - E(A)
Attribute Selection by Information Gain Computation
Class P: buys_computer = "yes"
Class N: buys_computer = "no"
I(p, n) = I(9, 5) = 0.940
Compute the entropy for age:

age     pi  ni  I(pi, ni)
<=30    2   3   0.971
31..40  4   0   0
>40     3   2   0.971

E(age) = \frac{5}{14} I(2,3) + \frac{4}{14} I(4,0) + \frac{5}{14} I(3,2) = 0.694
Hence Gain(age) = I(p, n) - E(age) = 0.940 - 0.694 = 0.246
Similarly,
Gain(income) = 0.029
Gain(student) = 0.151
Gain(credit_rating) = 0.048
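A worked check of these numbers in plain Python, using the per-age counts from the table above (the function names are illustrative):

```python
# Reproduce I(9,5), E(age) and Gain(age) for the buys_computer training set.
from math import log2

def I(p, n):
    """Expected information for p positive and n negative examples."""
    if p == 0 or n == 0:
        return 0.0
    t = p + n
    return -(p / t) * log2(p / t) - (n / t) * log2(n / t)

p, n = 9, 5                                                      # yes / no counts
partitions = {"<=30": (2, 3), "31..40": (4, 0), ">40": (3, 2)}   # counts per age value

E_age = sum((pi + ni) / (p + n) * I(pi, ni) for pi, ni in partitions.values())
print(f"{I(p, n):.3f}")          # 0.940
print(f"{E_age:.3f}")            # 0.694
print(f"{I(p, n) - E_age:.3f}")  # 0.247 (the slides' 0.246 subtracts the rounded values)
```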
Extracting Classification Rules from Trees
Represent the knowledge in the form of IF-THEN rules
One rule is created for each path from the root to a leaf
Each attribute-value pair along a path forms a conjunction
The leaf node holds the class prediction
Rules are easier for humans to understand
Example
IF age = "<=30" AND student = "no"  THEN buys_computer = "no"
IF age = "<=30" AND student = "yes" THEN buys_computer = "yes"
IF age = "31..40"                   THEN buys_computer = "yes"
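A minimal sketch of this extraction, walking root-to-leaf paths of the (attribute, value)-keyed dict tree used in the induction sketch above; the tree literal here is an illustrative fragment, not the exact slide output.

```python
# Turn each root-to-leaf path of a dict-based tree into one IF-THEN rule.
def extract_rules(tree, conditions=()):
    if not isinstance(tree, dict):            # leaf reached: emit one rule per path
        conds = " AND ".join(f'{a} = "{v}"' for a, v in conditions)
        print(f'IF {conds} THEN buys_computer = "{tree}"')
        return
    for (attr, value), subtree in tree.items():
        extract_rules(subtree, conditions + ((attr, value),))

toy_tree = {
    ("age", "<=30"): {("student", "no"): "no", ("student", "yes"): "yes"},
    ("age", "31..40"): "yes",
}
extract_rules(toy_tree)
```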
Avoid Overfitting in Classification
The generated tree may overfit the training data
Too many branches, some of which may reflect anomalies due to noise or outliers
The result is poor accuracy for unseen samples
Two approaches to avoid overfitting
Prepruning: halt tree construction early—do not split a node if this would result in the goodness measure falling below a threshold
Difficult to choose an appropriate threshold
Postpruning: remove branches from a "fully grown" tree—get a sequence of progressively pruned trees
Use a set of data different from the training data to decide which is the "best pruned tree"
Approaches to Determine the Final Tree Size
Separate training (2/3) and testing (1/3) sets
Use cross validation, e.g., 10-fold cross validation (see the sketch below)
Use all the data for training
but apply a statistical test (e.g., chi-square) to estimate whether expanding or pruning a node may improve the entire distribution
Use minimum description length (MDL)
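As a hedged example of the cross-validation option, the sketch below scores trees of different sizes with 10-fold cross validation; scikit-learn, the iris data, and the use of max_depth as the size knob are illustrative assumptions.

```python
# 10-fold cross validation for choosing the final tree size (here: max_depth).
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
for depth in (1, 2, 3, 4, None):
    scores = cross_val_score(DecisionTreeClassifier(max_depth=depth), X, y, cv=10)
    print(depth, round(scores.mean(), 3))   # pick the depth with the best CV accuracy
```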
Enhancements to basic
decision tree induction
Allow for continuous-valued attributes
Dynamically define new discrete-valued
attributes that partition the continuous attribute
value into a discrete set of intervals
Handle missing attribute values
Assign the most common value of the attribute
Assign probability to each of the possible values
Attribute construction
Create new attributes based on existing ones
that are sparsely represented
This reduces fragmentation, repetition, and
replication
Classification in Large Databases
Classification—a classical problem extensively
studied by statisticians and machine learning
researchers
Scalability: Classifying data sets with millions of
examples and hundreds of attributes with
reasonable speed
Why decision tree induction in data mining?
relatively faster learning speed (than other
classification methods)
convertible to simple and easy to understand
classification rules
can use SQL queries for accessing databases
Scalable Decision Tree Induction Methods in Data Mining Studies
SLIQ
builds an index for each attribute and only class
list and the current attribute list reside in
memory
SPRINT
constructs an attribute list data structure
PUBLIC
integrates tree splitting and tree pruning: stop
growing the tree earlier
RainForest
separates the scalability aspects from the criteria
that determine the quality of the tree
Data Cube-Based Decision-Tree Induction
Integration of generalization with decision-tree
induction
Classification at primitive concept levels
E.g., precise temperature, humidity, outlook, etc.
Low-level concepts, scattered classes, bushy
classification-trees
Semantic interpretation problems.
Cube-based multi-level classification
Relevance analysis at multi-levels.
Information-gain analysis with dimension + level.
Presentation of Classification Results
Bayesian Classification
Bayesian Classification: Why?
Probabilistic learning: calculate explicit probabilities for hypotheses; among the most practical approaches to certain types of learning problems
Incremental: Each training example can
incrementally increase/decrease the probability that
a hypothesis is correct. Prior knowledge can be
combined with observed data.
Probabilistic prediction: Predict multiple hypotheses,
weighted by their probabilities
Standard: Even when Bayesian methods are
computationally intractable, they can provide a
standard of optimal decision making against which
other methods can be measured
Bayes' Theorem
Given training data D, the posterior probability of a hypothesis h, P(h|D), follows from Bayes' theorem:
P(h|D) = \frac{P(D|h)\, P(h)}{P(D)}
Practical difficulty: requires initial knowledge of many probabilities, and significant computational cost
Naive Bayesian Classifier (Simple Bayesian Classifier)
Each data sample is represented by an n-dimensional feature vector X = (x1, x2, ..., xn), depicting n measurements on the sample from n attributes A1, A2, ..., An, respectively.
Suppose there are m classes C1, C2, ..., Cm. Given a data sample X with no class label, the classifier predicts that X belongs to the class having the highest posterior probability conditioned on X, i.e., the naive Bayesian classifier assigns an unknown sample X to class Ci iff P(Ci|X) > P(Cj|X) for all j ≠ i, where by Bayes' theorem
P(Ci|X) = \frac{P(X|C_i)\, P(C_i)}{P(X)}
Naive Bayesian Classifier (continued)
Since P(X) is constant for all classes, only P(X|Ci)P(Ci) needs to be maximized. Class prior probabilities may be estimated by P(Ci) = si / s, where si is the number of training samples of class Ci and s is the total number of training samples.
For data sets with many attributes, it would be computationally expensive to compute P(X|Ci). To reduce the computation, the naive assumption of class conditional independence is made: the values of the attributes are assumed to be conditionally independent of one another given the class label of the sample, i.e., there are no dependence relationships among the attributes. Thus,
P(X|C_i) = \prod_{k=1}^{n} P(x_k|C_i)
Naive Bayesian Classifier (continued)
The probabilities P(x1|Ci), P(x2|Ci), ..., P(xn|Ci) can be estimated from the training samples:
- If Ak is categorical, then P(xk|Ci) = sik / si, where sik is the number of training samples of class Ci having the value xk for Ak, and si is the number of training samples belonging to Ci.
- If Ak is continuous-valued, then the attribute is assumed to have a Gaussian distribution, so that
P(x_k|C_i) = g(x_k, \mu_{C_i}, \sigma_{C_i}) = \frac{1}{\sqrt{2\pi}\,\sigma_{C_i}} e^{-\frac{(x_k - \mu_{C_i})^2}{2\sigma_{C_i}^2}}
where g is the Gaussian (normal) density function for attribute Ak, and \mu_{C_i} and \sigma_{C_i} are the mean and standard deviation, respectively, of attribute Ak for the training samples of class Ci.
Naive Bayesian Classifier (continued)
To classify an unknown sample X, P(X|Ci)P(Ci) is evaluated for each class Ci. Sample X is then assigned to class Ci iff
P(X|Ci)P(Ci) > P(X|Cj)P(Cj)   for 1 <= j <= m, j != i
In other words, X is assigned to the class Ci for which P(X|Ci)P(Ci) is the maximum.
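A minimal sketch of this procedure for categorical attributes: estimate P(Ci) = si/s and P(xk|Ci) = sik/si from counts, then pick the class maximizing P(X|Ci)P(Ci). The tiny data set and the helper names are illustrative assumptions.

```python
# Naive Bayesian classification with categorical attributes.
from collections import Counter, defaultdict

def train(samples, labels):
    priors = {c: cnt / len(labels) for c, cnt in Counter(labels).items()}  # P(Ci)
    cond = defaultdict(Counter)                     # (class, attribute index) -> value counts
    for x, c in zip(samples, labels):
        for k, v in enumerate(x):
            cond[(c, k)][v] += 1
    return priors, cond

def classify(x, priors, cond):
    def score(c):                                   # P(X|Ci) * P(Ci) under independence
        p = priors[c]
        for k, v in enumerate(x):
            counts = cond[(c, k)]
            p *= counts[v] / sum(counts.values())   # P(xk|Ci) = sik / si (zero if unseen)
        return p
    return max(priors, key=score)

samples = [("<=30", "no"), ("<=30", "yes"), ("31..40", "no"), (">40", "yes")]
labels  = ["no", "yes", "yes", "yes"]
priors, cond = train(samples, labels)
print(classify(("<=30", "no"), priors, cond))       # -> "no"
```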
Bayesian classification
The classification problem may be
formalized using a-posteriori probabilities:
P(C|X) = prob. that the sample tuple
X=<x1,…,xk> is of class C.
E.g. P(class=N |
outlook=sunny,windy=true,…)
Idea: assign to sample X the class label C
such that P(C|X) is maximal
Estimating A Posteriori Probabilities
Bayes theorem:
P(C|X) = P(X|C) · P(C) / P(X)
P(X) is constant for all classes
P(C) = relative frequency of class C samples
C such that P(C|X) is maximum = C such that P(X|C)·P(C) is maximum
Problem: computing P(X|C) is infeasible!
The independence hypothesis…
… makes computation possible
… yields optimal classifiers when satisfied
… but is seldom satisfied in practice, as
attributes (variables) are often correlated.
Attempts to overcome this limitation:
Bayesian networks, that combine Bayesian
reasoning with causal relationships between
attributes
Decision trees, which reason on one attribute at a time, considering the most important attributes first
Bayesian Belief Networks
[Figure: a belief network over the variables FamilyHistory, Smoker, LungCancer, Emphysema, PositiveXRay, and Dyspnea; FamilyHistory (FH) and Smoker (S) are the parents of LungCancer (LC).]

The conditional probability table for the variable LungCancer:

      (FH, S)  (FH, ~S)  (~FH, S)  (~FH, ~S)
LC    0.8      0.5       0.7       0.1
~LC   0.2      0.5       0.3       0.9
Bayesian Belief Networks
A Bayesian belief network allows a subset of the variables to be conditionally independent
A graphical model of causal relationships
Several cases of learning Bayesian belief networks
Given both network structure and all the
variables: easy
Given network structure but only some variables
When the network structure is not known in
advance
Classification by backpropagation
Neural Networks
Advantages
prediction accuracy is generally high
robust, works when training examples contain
errors
output may be discrete, real-valued, or a
vector of several discrete or real-valued
attributes
fast evaluation of the learned target function
Criticism
long training time
difficult to understand the learned function
(weights)
A Neuron
[Figure: a neuron with inputs x0 ... xn, weights w0 ... wn, bias -µk, a weighted sum Σ, and an activation function f producing output y, i.e., y = f(\sum_i w_i x_i - \mu_k).]
The n-dimensional input vector x is mapped into the variable y by means of the scalar product and a nonlinear function mapping.
Network Training
The ultimate objective of training
obtain a set of weights that makes almost all the tuples in the training data classified correctly
Steps
Initialize weights with random values
Feed the input tuples into the network one by one
For each unit
Compute the net input to the unit as a linear combination of all the inputs to the unit
Compute the output value using the activation function
Multi-Layer Perceptron
[Figure: a feed-forward network with an input vector x_i feeding input nodes, a layer of hidden nodes, and output nodes producing the output vector. The backpropagation quantities are:]
I_j = \sum_i w_{ij} O_i + \theta_j
O_j = \frac{1}{1 + e^{-I_j}}
Err_j = O_j (1 - O_j)(T_j - O_j)              (output node)
Err_j = O_j (1 - O_j) \sum_k Err_k w_{jk}     (hidden node)
w_{ij} = w_{ij} + (l)\, Err_j\, O_i
\theta_j = \theta_j + (l)\, Err_j
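The sketch below applies these formulas once, for a single training tuple, in numpy; the layer sizes, the random initialization, and the learning rate are illustrative assumptions rather than slide content.

```python
# One backpropagation update for a single training tuple (sigmoid units).
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.1, 0.9, 0.4])                 # input tuple O_i at the input layer
t = np.array([1.0])                           # target value T_j
W1, b1 = rng.normal(size=(3, 2)), np.zeros(2) # input -> hidden weights / biases
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1) # hidden -> output weights / biases
l = 0.5                                       # learning rate (l)

sigmoid = lambda I: 1.0 / (1.0 + np.exp(-I))

# forward pass: I_j = sum_i w_ij O_i + theta_j,  O_j = sigmoid(I_j)
O_h = sigmoid(x @ W1 + b1)
O_o = sigmoid(O_h @ W2 + b2)

# backward pass: output-node error, then hidden-node error propagated via w_jk
err_o = O_o * (1 - O_o) * (t - O_o)
err_h = O_h * (1 - O_h) * (W2 @ err_o)

# updates: w_ij += l * Err_j * O_i,  theta_j += l * Err_j
W2 += l * np.outer(O_h, err_o); b2 += l * err_o
W1 += l * np.outer(x, err_h);   b1 += l * err_h
```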
Classification based on concepts from
association rule mining
Association-Based Classification
Several methods for association-based
classification
ARCS: Quantitative association mining and
clustering of association rules
It beats C4.5 in (mainly) scalability and also accuracy
Associative classification:
It mines high support and high confidence rules in the
form of “cond_set => y”, where y is a class label
CAEP (Classification by aggregating emerging
patterns)
Emerging patterns (EPs): the itemsets whose support
increases significantly from one class to another
Mine EPs based on minimum support and growth rate
Other Classification Methods
Other Classification Methods
k-nearest neighbor classifier
Case-based reasoning
Genetic algorithms
Rough set approach
Fuzzy set approaches
Instance-Based Methods
Instance-based learning:
Store training examples and delay the
processing (“lazy evaluation”) until a new
instance must be classified
Typical approaches
k-nearest neighbor approach
Instances represented as points in a
Euclidean space.
Locally weighted regression
Constructs local approximation
Case-based reasoning
Uses symbolic representations and
knowledge-based inference
The k-Nearest Neighbor Algorithm
All instances correspond to points in the n-D space
The nearest neighbors are defined in terms of Euclidean distance
The target function can be discrete- or real-valued
For discrete-valued target functions, k-NN returns the most common value among the k training examples nearest to xq
Voronoi diagram: the decision surface induced by 1-NN for a typical set of training examples
[Figure: a 2-D scatter of + and - training examples around a query point xq, with the 1-NN decision surface.]
Discussion on the k-NN Algorithm
The k-NN algorithm for continuous-valued target functions
Calculate the mean value of the k nearest neighbors
Distance-weighted nearest neighbor algorithm
Weight the contribution of each of the k neighbors according to their distance to the query point xq, giving greater weight to closer neighbors:
w \equiv \frac{1}{d(x_q, x_i)^2}
Similarly, for real-valued target functions
Robust to noisy data by averaging the k nearest neighbors
Curse of dimensionality: the distance between neighbors can be dominated by irrelevant attributes
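A minimal sketch of distance-weighted k-NN for a discrete-valued target, where each of the k nearest neighbors votes with weight 1/d(xq, xi)^2; the toy points are illustrative assumptions.

```python
# Distance-weighted k-nearest neighbor classification.
from collections import defaultdict
from math import dist  # Euclidean distance (Python 3.8+)

def knn_predict(xq, examples, k=3):
    nearest = sorted(examples, key=lambda e: dist(xq, e[0]))[:k]
    votes = defaultdict(float)
    for xi, label in nearest:
        votes[label] += 1.0 / (dist(xq, xi) ** 2 + 1e-12)   # closer neighbors weigh more
    return max(votes, key=votes.get)

examples = [((1.0, 1.0), "+"), ((1.2, 0.8), "+"), ((3.0, 3.2), "-"), ((3.1, 2.9), "-")]
print(knn_predict((1.1, 1.0), examples))   # -> "+"
```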
Case-Based Reasoning
Also uses: lazy evaluation + analyze similar
instances
Difference: Instances are not “points in a Euclidean
space”
Example: Water faucet problem in CADET
Methodology
Instances represented by rich symbolic
descriptions (e.g., function graphs)
Multiple retrieved cases may be combined
Tight coupling between case retrieval,
knowledge-based reasoning, and problem solving
Research issues
Indexing based on a syntactic similarity measure and, on failure, backtracking and adapting to additional cases
Remarks on Lazy vs. Eager
Learning
Instance-based learning: lazy evaluation
Decision-tree and Bayesian classification: eager evaluation
Key differences
Lazy method may consider query instance xq when
deciding how to generalize beyond the training data D
Eager methods cannot, since they have already chosen a global approximation before seeing the query
Efficiency: Lazy - less time training but more time predicting
Accuracy
Lazy method effectively uses a richer hypothesis space
since it uses many local linear functions to form its implicit
global approximation to the target function
Eager: must commit to a single hypothesis that covers the
entire instance space
Genetic Algorithms
GA: based on an analogy to biological evolution
Each rule is represented by a string of bits
An initial population is created consisting of randomly generated rules
e.g., IF A1 AND NOT A2 THEN C2 can be encoded as 100
Based on the notion of survival of the fittest, a new population is formed consisting of the fittest rules and their offspring
The fitness of a rule is represented by its classification accuracy on a set of training examples
Offspring are generated by crossover and mutation
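A compact, hedged sketch of this GA loop: bit-string rules, fitness defined as training accuracy, selection of the fittest, one-point crossover, and bit-flip mutation. The rule encoding (bit i means "attribute Ai must be 1") and the toy data are illustrative assumptions.

```python
# A toy genetic algorithm over bit-string classification rules.
import random

random.seed(0)
data = [((1, 0, 1), 1), ((1, 1, 1), 0), ((0, 0, 1), 0), ((1, 0, 0), 1)]

def predict(rule, x):                 # rule fires -> class 1, otherwise class 0
    return int(all(x[i] for i, bit in enumerate(rule) if bit))

def fitness(rule):                    # classification accuracy on the training set
    return sum(predict(rule, x) == y for x, y in data) / len(data)

population = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(6)]
for _ in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:3]        # survival of the fittest
    children = []
    while len(children) < 3:
        a, b = random.sample(survivors, 2)
        cut = random.randint(1, 2)    # one-point crossover
        child = list(a[:cut] + b[cut:])
        i = random.randrange(3)       # occasional bit-flip mutation
        child[i] ^= random.random() < 0.1
        children.append(tuple(child))
    population = survivors + children

best = max(population, key=fitness)
print(best, fitness(best))
```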
Rough Set Approach
Rough sets are used to approximately or "roughly" define equivalence classes
A rough set for a given class C is approximated
by two sets: a lower approximation (certain to be
in C) and an upper approximation (cannot be
described as not belonging to C)
Finding the minimal subsets (reducts) of
attributes (for feature reduction) is NP-hard but a
discernibility matrix is used to reduce the
computation intensity
Fuzzy Set Approaches
Fuzzy logic uses truth values between 0.0 and 1.0
to represent the degree of membership (such as
using fuzzy membership graph)
Attribute values are converted to fuzzy values
e.g., income is mapped into the discrete
categories {low, medium, high} with fuzzy
values calculated
For a given new sample, more than one fuzzy
value may apply
Each applicable rule contributes a vote for
membership in the categories
Typically, the truth values for each predicted category are summed
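A small sketch of the fuzzification step described above, mapping a numeric income into the fuzzy categories {low, medium, high}; the membership breakpoints are illustrative assumptions, not values from the slides.

```python
# Fuzzy membership of a numeric attribute in discrete categories.
def fuzzy_income(income):
    low    = max(0.0, min(1.0, (30_000 - income) / 20_000))   # 1 below 10k, 0 above 30k
    high   = max(0.0, min(1.0, (income - 40_000) / 20_000))   # 0 below 40k, 1 above 60k
    medium = max(0.0, 1.0 - abs(income - 35_000) / 20_000)    # peaks at 35k
    return {"low": round(low, 2), "medium": round(medium, 2), "high": round(high, 2)}

# A sample may belong to more than one fuzzy category at once:
print(fuzzy_income(38_000))   # e.g. {'low': 0.0, 'medium': 0.85, 'high': 0.0}
```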
Prediction
What Is Prediction?
Prediction is similar to classification
First, construct a model
Second, use the model to predict unknown values
The major method for prediction is regression
Linear and multiple regression
Non-linear regression
Prediction is different from classification
Classification predicts a categorical class label
Prediction models continuous-valued functions
Predictive Modeling in
Databases
Predictive modeling: Predict data values or
construct generalized linear models based on the
database data.
One can only predict value ranges or category
distributions
Method outline:
Minimal generalization
Attribute relevance analysis
Generalized linear model construction
Prediction
Determine the major factors which influence the
prediction
Data relevance analysis: uncertainty
Regression Analysis and Log-Linear Models in Prediction
Linear regression: Y = \alpha + \beta X
The two parameters, \alpha and \beta, specify the line and are estimated from the data at hand, using the least squares criterion on the known values Y1, Y2, ..., X1, X2, ...
Multiple regression: Y = b_0 + b_1 X_1 + b_2 X_2
Many nonlinear functions can be transformed into the above
Log-linear models:
The multi-way table of joint probabilities is approximated by a product of lower-order tables
Probability: p(a, b, c, d) = \alpha_{ab}\, \beta_{ac}\, \chi_{ad}\, \delta_{bcd}
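A minimal sketch of estimating \alpha and \beta by least squares; numpy and the toy (x, y) values are assumptions used only for illustration.

```python
# Least-squares estimates for simple linear regression Y = alpha + beta * X.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 4.1, 6.2, 7.8, 10.1])

beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha = y.mean() - beta * x.mean()
print(alpha, beta)                 # fitted parameters
print(alpha + beta * 6.0)          # predict Y for a new X value
```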
Locally Weighted Regression
Construct an explicit approximation to f over a local region surrounding the query instance xq
Locally weighted linear regression:
The target function f is approximated near xq using the linear function
\hat{f}(x) = w_0 + w_1 a_1(x) + \dots + w_n a_n(x)
minimize the squared error with a distance-decreasing weight K:
E(x_q) \equiv \frac{1}{2} \sum_{x \in k\ \text{nearest neighbors of}\ x_q} (f(x) - \hat{f}(x))^2\, K(d(x_q, x))
the gradient descent training rule:
\Delta w_j \equiv \eta \sum_{x \in k\ \text{nearest neighbors of}\ x_q} K(d(x_q, x))\, (f(x) - \hat{f}(x))\, a_j(x)
In most cases, the target function is approximated by a constant, linear, or quadratic function
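A hedged sketch of the idea: at a query point xq, weight the k nearest training points with a distance-decreasing kernel K and solve the resulting weighted least-squares problem for a local linear fit. numpy, the Gaussian kernel, and the 1-D toy data are assumptions.

```python
# Locally weighted linear regression at a single query point xq.
import numpy as np

def lwr_predict(xq, X, y, k=5, tau=1.0):
    d = np.abs(X - xq)
    idx = np.argsort(d)[:k]                      # k nearest neighbors of xq
    w = np.exp(-(d[idx] ** 2) / (2 * tau ** 2))  # distance-decreasing kernel K
    A = np.column_stack([np.ones(k), X[idx]])    # local features a(x) = (1, x)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
    return coef[0] + coef[1] * xq                # local linear estimate f_hat(xq)

X = np.linspace(0, 6, 40)
y = np.sin(X)
print(lwr_predict(1.5, X, y), np.sin(1.5))       # local fit vs. the true value
```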
Prediction: Numerical Data
Prediction: Categorical Data