CLUSTER ANALYSIS
Clustering approaches
Partitioning Algorithms: Basic Concept
Partitioning method: construct a partition of a database D of
n objects into a set of k clusters.
Given k, find a partition into k clusters that optimizes the
chosen partitioning criterion
• Global optimum: exhaustively enumerate all partitions
• Heuristic methods: k-means and k-medoids algorithms
• k-means (MacQueen’67): Each cluster is represented by the
center of the cluster
• k-medoids or PAM (Partitioning Around Medoids) (Kaufman
& Rousseeuw’87): Each cluster is represented by one of the
objects in the cluster
The K-Means Clustering Method
Given k, the k-means algorithm is implemented in four steps:
• Partition objects into k nonempty subsets
• Compute seed points as the centroids of the clusters of the
current partition (the centroid is the center, i.e., mean point,
of the cluster)
• Assign each object to the cluster with the nearest seed point
• Go back to Step 2; stop when no new assignments are made (a minimal sketch follows)
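A minimal sketch of these steps in Python, assuming numeric objects represented as coordinate tuples (illustrative only; the helper name and the random-sampling initialization are not prescribed above):

import random

def kmeans(points, k, max_iter=100):
    # Pick k of the points as the initial seed points (one common initialization).
    centroids = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(max_iter):
        # Assign each object to the cluster with the nearest seed point.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Recompute each seed point as the centroid (mean point) of its cluster.
        new_centroids = []
        for i, c in enumerate(clusters):
            if c:
                new_centroids.append(tuple(sum(dim) / len(c) for dim in zip(*c)))
            else:
                new_centroids.append(centroids[i])  # keep the old seed if a cluster empties
        # Stop when the assignment no longer changes between iterations.
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters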
• Example: (figure illustrating the k-means iterations omitted)
Comments on the K-Means Method
Strength: Relatively efficient: O(tkn), where n is the number of
objects, k is the number of clusters, and t is the number of
iterations. Normally, k, t << n.
• For comparison: PAM: O(k(n-k)²), CLARA: O(ks² + k(n-k))
Comment: Often terminates at a local optimum. The global
optimum may be found using techniques such as deterministic
annealing and genetic algorithms
Weakness
• Applicable only when a mean is defined; what about
categorical data?
• Need to specify k, the number of clusters, in advance
• Unable to handle noisy data and outliers
• Not suitable to discover clusters with non-convex shapes
Variations of the K-Means Method
A few variants of k-means differ in
• Selection of the initial k means
• Dissimilarity calculations
• Strategies to calculate cluster means
Handling categorical data: k-modes (Huang’98); see the sketch after this list
• Replacing means of clusters with modes
• Using new dissimilarity measures to deal with categorical
objects
• Using a frequency-based method to update modes of
clusters
• A mixture of categorical and numerical data: k-prototype
method
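A brief sketch of the two k-modes building blocks mentioned above (simple-matching dissimilarity and the frequency-based mode update); the function names are illustrative, not taken from Huang's paper:

from collections import Counter

def matching_dissimilarity(x, y):
    # Simple matching: count the attributes on which two categorical objects differ.
    return sum(1 for a, b in zip(x, y) if a != b)

def cluster_mode(objects):
    # Frequency-based update: per attribute, the mode keeps the most frequent category.
    return tuple(Counter(column).most_common(1)[0][0] for column in zip(*objects))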
What Is the Problem of the K-Means
Method?
• The k-means algorithm is sensitive to outliers !
– Since an object with an extremely large value may
substantially distort the distribution of the data.
• K-Medoids: instead of taking the mean value of the objects in a
cluster as a reference point, a medoid can be used, which is the
most centrally located object in a cluster (a minimal sketch follows).
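A one-line sketch of what "most centrally located object" means, assuming a pairwise distance function dist (illustrative helper, not library code):

def medoid(cluster, dist):
    # The medoid minimizes the total distance to all other objects in the cluster.
    return min(cluster, key=lambda m: sum(dist(m, o) for o in cluster))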
The K-Medoids Clustering Method
Find representative objects, called medoids, in clusters
PAM (Partitioning Around Medoids, 1987)
• starts from an initial set of medoids and iteratively replaces
one of the medoids by one of the non-medoids if it improves
the total distance of the resulting clustering
• PAM works effectively for small data sets, but does not
scale well for large data sets
CLARA (Kaufmann & Rousseeuw, 1990)
CLARANS (Ng & Han, 1994): Randomized sampling
Focusing + spatial data structure (Ester et al., 1995)
A Typical K-Medoids Algorithm (PAM)
(Figure omitted: a worked example with K = 2 on ten 2-D points; total costs of 20 and 26 are shown for the two configurations compared.)
• Arbitrarily choose k objects as the initial medoids
• Assign each remaining object to the nearest medoid
• Randomly select a non-medoid object, O_random
• Compute the total cost of swapping a medoid O with O_random
• If the quality is improved, perform the swap
• Repeat the loop until no change
PAM (Partitioning Around Medoids)
(1987)
PAM (Kaufman and Rousseeuw, 1987); built into S-Plus
Uses real objects to represent the clusters
1. Select k representative objects arbitrarily
2. For each pair of a non-selected object h and a selected object i,
calculate the total swapping cost TC_ih
3. For each pair of i and h:
• If TC_ih < 0, i is replaced by h
• Then assign each non-selected object to the most similar
representative object
4. Repeat steps 2-3 until there is no change (see the swap-cost sketch below)
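A hedged sketch of the swap test: TC_ih is the change in total clustering cost when medoid i is replaced by non-medoid h. For simplicity the cost is recomputed from scratch rather than incrementally, so this is illustrative rather than PAM's efficient bookkeeping:

def total_cost(medoids, objects, dist):
    # Each object contributes its distance to the closest medoid.
    return sum(min(dist(o, m) for m in medoids) for o in objects)

def best_swap(medoids, objects, dist):
    base = total_cost(medoids, objects, dist)
    best = (0, None, None)                      # (TC_ih, i, h)
    for i in medoids:
        for h in objects:
            if h in medoids:
                continue
            candidate = [h if m == i else m for m in medoids]
            tc_ih = total_cost(candidate, objects, dist) - base
            if tc_ih < best[0]:                 # keep the most negative swap cost
                best = (tc_ih, i, h)
    return best                                 # perform the swap only if TC_ih < 0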
What Is the Problem with PAM?
PAM is more robust than k-means in the presence of noise and
outliers, because a medoid is less influenced by outliers or
other extreme values than a mean.
PAM works efficiently for small data sets but does not scale
well for large data sets:
– O(k(n-k)²) for each iteration,
where n is the number of data points and k the number of clusters
• Sampling-based method:
CLARA (Clustering LARge Applications)
CLARA (Clustering Large Applications)
CLARA (Kaufman and Rousseeuw, 1990)
• Built into statistical analysis packages, such as S-Plus
It draws multiple samples of the data set, applies PAM on each
sample, and gives the best clustering as the output (sketched below)
Strength: deals with larger data sets than PAM
Weakness:
• Efficiency depends on the sample size
• A good clustering based on samples will not necessarily
represent a good clustering of the whole data set if the
sample is biased
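A sketch of CLARA's sampling loop, assuming a pam(sample, k, dist) routine that returns k medoids (a hypothetical helper, e.g. built around the swap test sketched earlier); the default of 5 samples of size 40 + 2k follows the commonly quoted recommendation:

import random

def clara(objects, k, dist, n_samples=5, sample_size=None):
    if sample_size is None:
        sample_size = 40 + 2 * k                 # commonly quoted default sample size
    best_medoids, best_cost = None, float("inf")
    for _ in range(n_samples):
        sample = random.sample(objects, min(sample_size, len(objects)))
        medoids = pam(sample, k, dist)           # PAM is run on the sample only
        # The quality of the candidate medoids is judged on the whole data set.
        cost = sum(min(dist(o, m) for m in medoids) for o in objects)
        if cost < best_cost:
            best_medoids, best_cost = medoids, cost
    return best_medoids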
CLARANS (“Randomized” CLARA)
(1994)
CLARANS (A Clustering Algorithm based on Randomized
Search) (Ng and Han’94)
CLARANS draws a sample of neighbors dynamically
The clustering process can be presented as searching a graph
where every node is a potential solution, that is, a set of k
medoids
If a local optimum is found, CLARANS restarts from a new
randomly selected node in search of a new local optimum (see the sketch below)
It is more efficient and scalable than both PAM and CLARA
Focusing techniques and spatial access structures may further
improve its performance.
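A sketch of the randomized search described above: each node is a set of k medoids, a neighbor differs in exactly one medoid, and the search restarts after a local optimum. The parameter names numlocal and maxneighbor follow the usual description of the algorithm; the code itself is illustrative:

import random

def clarans(objects, k, dist, numlocal=2, maxneighbor=50):
    def cost(medoids):
        return sum(min(dist(o, m) for m in medoids) for o in objects)

    best, best_cost = None, float("inf")
    for _ in range(numlocal):                    # restart from a new random node
        current = random.sample(objects, k)
        current_cost = cost(current)
        tries = 0
        while tries < maxneighbor:
            # Draw a random neighbor: swap one medoid for one non-medoid.
            i = random.randrange(k)
            h = random.choice([o for o in objects if o not in current])
            neighbor = current[:i] + [h] + current[i + 1:]
            neighbor_cost = cost(neighbor)
            if neighbor_cost < current_cost:     # move to the better neighbor
                current, current_cost, tries = neighbor, neighbor_cost, 0
            else:
                tries += 1
        if current_cost < best_cost:             # keep the best local optimum
            best, best_cost = current, current_cost
    return best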
Hierarchical Clustering
• Uses a distance matrix as the clustering criterion. This method
does not require the number of clusters k as an input, but
needs a termination condition
(Figure omitted: agglomerative clustering (AGNES) proceeds bottom-up from Step 0 to Step 4, merging objects a, b, c, d, e into ab, de, cde, and finally abcde; divisive clustering (DIANA) traverses the same hierarchy top-down, from Step 0 to Step 4 in the reverse direction.)
AGNES (Agglomerative Nesting)
• Implemented in statistical analysis packages, e.g., S-Plus
• Use the Single-Link method and the dissimilarity matrix.
• Merge nodes that have the least dissimilarity
• Continue merging in non-descending order of dissimilarity
• Eventually all nodes belong to the same cluster (a single-link sketch follows)
(Figure omitted: three scatter plots of example 2-D data illustrating successive AGNES merge steps.)
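A minimal single-link agglomerative sketch in the spirit of AGNES, assuming a pairwise dissimilarity function dist; it merges the least-dissimilar pair of clusters until only one cluster remains (naive bookkeeping, purely illustrative):

def agnes_single_link(objects, dist):
    clusters = [[o] for o in objects]
    merges = []                                   # record of merges, i.e. the dendrogram
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single-link dissimilarity: smallest pairwise distance between clusters.
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((list(clusters[i]), list(clusters[j]), d))
        clusters[i] = clusters[i] + clusters[j]   # merge the least-dissimilar pair
        del clusters[j]
    return merges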
Dendrogram: Shows How the Clusters
are Merged
Decomposes the data objects into several levels of nested
partitionings (a tree of clusters), called a dendrogram.
A clustering of the data objects is obtained by cutting the
dendrogram at the desired level; each connected
component then forms a cluster.
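For illustration, cutting a dendrogram at a desired level can be done with SciPy (an assumed Python environment; AGNES/DIANA themselves ship with statistical packages such as S-Plus rather than Python):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(20, 2)                         # toy 2-D data
Z = linkage(X, method="single")                   # single-link agglomerative clustering
labels = fcluster(Z, t=3, criterion="maxclust")   # cut so that 3 clusters remain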
DIANA (Divisive Analysis)
• Introduced in Kaufman and Rousseeuw (1990)
• Implemented in statistical analysis packages, e.g., S-Plus
• Inverse order of AGNES
• Eventually each node forms a cluster on its own
(Figure omitted: scatter plots of example 2-D data illustrating successive DIANA splits.)
Recent Hierarchical Clustering Methods
Major weakness of agglomerative clustering methods
• do not scale well: time complexity of at least O(n²),
where n is the total number of objects
• can never undo what was done previously
Integration of hierarchical with distance-based clustering
• BIRCH (1996): uses CF-tree and incrementally adjusts
the quality of sub-clusters
• ROCK (1999): clustering categorical data by neighbor
and link analysis
• CHAMELEON (1999): hierarchical clustering using
dynamic modeling
BIRCH (1996)
Birch: Balanced Iterative Reducing and Clustering using
Hierarchies (Zhang, Ramakrishnan & Livny, SIGMOD’96)
Incrementally construct a CF (Clustering Feature) tree, a
hierarchical data structure for multiphase clustering
• Phase 1: scan DB to build an initial in-memory CF tree (a multi-
level compression of the data that tries to preserve the inherent
clustering structure of the data)
• Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes
of the CF-tree
Scales linearly: finds a good clustering with a single scan
and improves the quality with a few additional scans
Weakness: handles only numeric data and is sensitive to the
order of the data records
Clustering Feature Vector in BIRCH
Clustering Feature: CF = (N, LS, SS)
N: number of data points in the subcluster
LS = \sum_{i=1}^{N} X_i : linear sum of the data points
SS = \sum_{i=1}^{N} X_i^2 : square sum of the data points
Example: for the subcluster {(3,4), (2,6), (4,5), (4,7), (3,8)},
CF = (5, (16,30), (54,190))
(Figure omitted: the five points plotted in the plane.)
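A quick check of the example above in Python, taking LS and SS per dimension so the numbers match the slide:

points = [(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]

N = len(points)                                               # 5
LS = tuple(sum(p[d] for p in points) for d in range(2))       # (16, 30)
SS = tuple(sum(p[d] ** 2 for p in points) for d in range(2))  # (54, 190)
print((N, LS, SS))                                            # (5, (16, 30), (54, 190))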
CF-Tree in BIRCH
Clustering feature:
• summary of the statistics for a given subcluster: the 0-th, 1st and 2nd
moments of the subcluster from the statistical point of view.
• registers crucial measurements for computing clusters and utilizes
storage efficiently
A CF tree is a height-balanced tree that stores the clustering features for
a hierarchical clustering
• A nonleaf node in a tree has descendants or “children”
• The nonleaf nodes store sums of the CFs of their children (see the merge sketch below)
A CF tree has two parameters
• Branching factor: specifies the maximum number of children
• Threshold: maximum diameter of sub-clusters stored at the leaf nodes
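A sketch of the additivity that nonleaf entries rely on: the CF stored for a parent entry is the component-wise sum of its children's CFs (same tuple layout as the example above; illustrative helper):

def merge_cf(cf1, cf2):
    (n1, ls1, ss1), (n2, ls2, ss2) = cf1, cf2
    return (n1 + n2,
            tuple(a + b for a, b in zip(ls1, ls2)),
            tuple(a + b for a, b in zip(ss1, ss2)))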
The CF Tree Structure
(Figure omitted: a CF tree with branching factor B = 7 and leaf capacity L = 6. The root holds entries CF1 ... CF6, each with a pointer to a child; a non-leaf node holds entries CF1 ... CF5 with child pointers; leaf nodes hold CF entries and are chained together with prev/next pointers.)
Clustering Categorical Data: The
ROCK Algorithm
ROCK: RObust Clustering using linKs
Major ideas
Use links to measure similarity/proximity
Not distance-based
Computational complexity: O(n² + n·m_m·m_a + n² log n), where m_a and m_m are the average and maximum numbers of neighbors
Algorithm: sampling-based clustering
Draw random sample
Cluster with links
Label the data on disk
Similarity Measure in ROCK
Traditional measures for categorical data may not work well, e.g.,
Jaccard coefficient
Example: Two groups (clusters) of transactions
• C1. <a, b, c, d, e>: {a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e},
{a, d, e}, {b, c, d}, {b, c, e}, {b, d, e}, {c, d, e}
• C2. <a, b, f, g>: {a, b, f}, {a, b, g}, {a, f, g}, {b, f, g}
The Jaccard coefficient may lead to a wrong clustering result
• Within C1: ranges from 0.2 ({a, b, c}, {b, d, e}) to 0.5 ({a, b, c}, {a, b, d})
• Across C1 & C2: could be as high as 0.5 ({a, b, c}, {a, b, f})
Jaccard-coefficient-based similarity function: Sim(T1, T2) = |T1 ∩ T2| / |T1 ∪ T2|
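A worked check of the values quoted above (transactions written as strings of items for brevity):

def jaccard(t1, t2):
    t1, t2 = set(t1), set(t2)
    return len(t1 & t2) / len(t1 | t2)

print(jaccard("abc", "bde"))   # 0.2  (within C1, low)
print(jaccard("abc", "abd"))   # 0.5  (within C1, high)
print(jaccard("abc", "abf"))   # 0.5  (across C1 and C2, just as high)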
Link Measure in ROCK
Links: number of common neighbors
C1 <a, b, c, d, e>: {a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e},
{a, d, e}, {b, c, d}, {b, c, e}, {b, d, e}, {c, d, e}
C2 <a, b, f, g>: {a, b, f}, {a, b, g}, {a, f, g}, {b, f, g}
Let T1 = {a, b, c}, T2 = {c, d, e}, T3 = {a, b, f}
link(T1, T2) = 4, since they have 4 common neighbors
{a, c, d}, {a, c, e}, {b, c, d}, {b, c, e}
link(T1, T3) = 3, since they have 3 common neighbors
{a, b, d}, {a, b, e}, {a, b, g}
Thus link is a better measure than Jaccard coefficient
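A sketch that reproduces the link counts above; it assumes two transactions are neighbors when their Jaccard similarity is at least 0.5 (the threshold is an assumption chosen to match the example):

C1 = [frozenset(t) for t in ("abc", "abd", "abe", "acd", "ace",
                             "ade", "bcd", "bce", "bde", "cde")]
C2 = [frozenset(t) for t in ("abf", "abg", "afg", "bfg")]
data = C1 + C2

def jaccard(t1, t2):
    return len(t1 & t2) / len(t1 | t2)

def neighbors(t, theta=0.5):
    # Neighbors of t: all other transactions with similarity >= theta.
    return {s for s in data if s != t and jaccard(t, s) >= theta}

def link(t1, t2):
    # link(t1, t2) = number of common neighbors.
    return len(neighbors(t1) & neighbors(t2))

print(link(frozenset("abc"), frozenset("cde")))   # 4
print(link(frozenset("abc"), frozenset("abf")))   # 3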
CHAMELEON: Hierarchical Clustering
Using Dynamic Modeling (1999)
Measures the similarity based on a dynamic model
• Two clusters are merged only if the interconnectivity and
closeness (proximity) between the two clusters are high relative to
the internal interconnectivity of the clusters and the closeness of
items within the clusters (see the formulas below)
• CURE ignores information about the interconnectivity of the
objects; ROCK ignores information about the closeness of two
clusters
A two-phase algorithm
1. Use a graph partitioning algorithm: cluster objects into a large
number of relatively small sub-clusters
2. Use an agglomerative hierarchical clustering algorithm: find the
genuine clusters by repeatedly combining these sub-clusters
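For reference, a sketch of the two quantities the merge decision compares, as defined in the CHAMELEON paper (relative interconnectivity RI and relative closeness RC; EC denotes edge-cut weights in the sparse graph and \bar{S} average edge weights):

RI(C_i, C_j) = \frac{|EC_{\{C_i, C_j\}}|}{\tfrac{1}{2}\left(|EC_{C_i}| + |EC_{C_j}|\right)}

RC(C_i, C_j) = \frac{\bar{S}_{EC_{\{C_i, C_j\}}}}{\frac{|C_i|}{|C_i| + |C_j|}\,\bar{S}_{EC_{C_i}} + \frac{|C_j|}{|C_i| + |C_j|}\,\bar{S}_{EC_{C_j}}}

Here EC_{C_i} is the weight of the min-cut bisector of cluster C_i and EC_{\{C_i, C_j\}} the weight of the edges connecting the two clusters; two sub-clusters are merged when both RI and RC are high.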
Overall Framework of
CHAMELEON
(Figure omitted: data set → construct a sparse graph → partition the graph → merge partitions → final clusters.)
CHAMELEON (Clustering Complex Objects): example figures omitted