Clustering Methods: A Critical Step Toward Analyzing a Model
Dr. Sukhvinder Singh Deora
Department of Computer Science and Applications
Maharshi Dayanand University, Rohtak
High Dimensional Data
[Course map: where clustering fits among related topics]
• High-dim. data: Locality sensitive hashing; Clustering; Dimensionality reduction
• Graph data: PageRank, SimRank; Community Detection; Spam Detection
• Infinite data: Filtering data streams; Web advertising; Queries on streams
• Machine learning: SVM; Decision Trees; Perceptron, kNN
• Apps: Recommender systems; Association Rules; Duplicate document detection
High Dimensional Data
• Given a cloud of data points we want to understand its structure
Some useful Definitions and Axioms
• Space: a set of points.
• Distance measure: a function d(x, y) that takes two points in the
space as arguments and produces a real number, and satisfies the
following axioms:
1. d(x, y) ≥ 0 (no negative distances).
2. d(x, y) = 0 if and only if x = y (distances are positive, except for the
distance from a point to itself).
3. d(x, y) = d(y, x) (distance is symmetric).
4. d(x, y) ≤ d(x, z) + d(z, y) (the triangle inequality).
The triangle inequality is the most complex condition.
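A minimal sketch (in Python) that spot-checks these four axioms for the ordinary Euclidean distance on a few sample points; the points chosen are illustrative only:

import itertools
import math

def d(x, y):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

points = [(0, 0), (2, 7), (6, 4)]
for x, y in itertools.product(points, repeat=2):
    assert d(x, y) >= 0                   # axiom 1: no negative distances
    assert (d(x, y) == 0) == (x == y)     # axiom 2: zero iff same point
    assert d(x, y) == d(y, x)             # axiom 3: symmetry
    for z in points:
        # axiom 4: triangle inequality (small epsilon for floating point)
        assert d(x, y) <= d(x, z) + d(z, y) + 1e-12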
The Problem of Clustering
• Given a set of points, with a notion of distance between points, group the
points into some number of clusters, so that
• Members of a cluster are close/similar to each other
• Members of different clusters are dissimilar
• Usually:
• Points are in a high-dimensional space
• Similarity is defined using a distance measure
• Euclidean, Cosine, Jaccard, edit distance, …
Example: Clusters & Outliers
[Figure: a scatter of points (x) forming a few dense clusters, plus one isolated point, an outlier.]
Cosine, Jaccard, and Euclidean
• As with CDs, we have a choice when we think of documents as sets of words or shingles:
• Sets as vectors: Measure similarity by the cosine distance
In Euclidean spaces whose points have integer or boolean (0 or 1) components, points may be thought of as directions. The cosine distance between two points is the angle between the vectors to those points. This angle ranges from 0 to 180 degrees, regardless of the number of dimensions of the space.
• Sets as sets: Measure similarity by the Jaccard distance
The Jaccard similarity of sets S and T is |S ∩ T| / |S ∪ T|, that is, the ratio of the size of the intersection of S and T to the size of their union. The Jaccard distance is 1 minus the Jaccard similarity.
• Sets as points: Measure similarity by Euclidean distance
A common alternative is the L1-norm, or Manhattan distance: the distance one would have to travel between points if constrained to travel along grid lines, as on the streets of a city such as Manhattan.
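A minimal sketch of these measures in Python (the helper names are illustrative, not from any particular library):

import math

def cosine_distance(u, v):
    """Angle in degrees between vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (norm_u * norm_v)))

def jaccard_distance(s, t):
    """1 minus the Jaccard similarity |S ∩ T| / |S ∪ T|."""
    return 1 - len(s & t) / len(s | t)

def euclidean_distance(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def manhattan_distance(x, y):
    """L1-norm: travel along grid lines only."""
    return sum(abs(a - b) for a, b in zip(x, y))

print(cosine_distance((1, 0), (1, 1)))         # 45.0 degrees
print(jaccard_distance({1, 2, 3}, {2, 3, 4}))  # 0.5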
Answer a Simple Question
• Consider the two-dimensional Euclidean space (the customary plane) and the points (2, 7) and (6, 4).
1. Compute the L2-norm distance
2. Compute the L1-norm distance
3. Compute the L∞-norm distance, max(|x1 − x2|, |y1 − y2|)
Answers:
• The L2-norm gives a distance of √((2 − 6)² + (7 − 4)²) = √(4² + 3²) = 5.
• The L1-norm gives a distance of |2 − 6| + |7 − 4| = 4 + 3 = 7.
• The L∞-norm gives a distance of max(|2 − 6|, |7 − 4|) = max(4, 3) = 4.
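A quick sketch that verifies all three answers:

x, y = (2, 7), (6, 4)
diffs = [abs(a - b) for a, b in zip(x, y)]  # [4, 3]
print(sum(d * d for d in diffs) ** 0.5)     # L2: 5.0
print(sum(diffs))                           # L1: 7
print(max(diffs))                           # L∞: 4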
Clustering is a hard problem!
Why is it hard?
• Clustering in two dimensions looks easy
• Clustering small amounts of data looks easy
• And in most cases, looks are not deceiving
• Many applications involve not 2, but 10 or 10,000 dimensions
• High-dimensional spaces look different: Almost all pairs of points are
at about the same distance
Clustering Problem: Galaxies
• A catalog of 2 billion “sky objects” represents objects by their
radiation in 7 dimensions (frequency bands)
• Problem: Cluster into similar objects, e.g., galaxies, nearby stars,
quasars, etc.
• Sloan Digital Sky Survey
Twitter & Facebook
• Discovering social circles, circles of trust:
[McAuley, Leskovec: Discovering social circles in ego networks, 2012]
Clustering Problem: Music CDs
• Intuitively: Music divides into categories, and customers prefer a few
categories
• But what are categories really?
• Represent a CD by a set of customers who bought it:
• Similar CDs have similar sets of customers, and vice-versa
Clustering Problem: Music CDs
Space of all CDs:
• Think of a space with one dim. for each customer
• Values in a dimension may be 0 or 1 only
• A CD is a point in this space (x1, x2,…, xk),
where xi = 1 iff the i th customer bought the CD
• For Amazon, the number of dimensions is tens of millions
• Task: Find clusters of similar CDs
Clustering Problem: Documents
Finding topics:
• Represent a document by a vector
(x1, x2,…, xk), where xi = 1 iff the i th word
(in some order) appears in the document
• It actually doesn’t matter if k is infinite; i.e., we don’t limit the set of words
• Documents with similar sets of words may be about the same topic
Overview: Methods of Clustering
• Hierarchical:
• Agglomerative (bottom up):
• Initially, each point is a cluster
• Repeatedly combine the two
“nearest” clusters into one
• Divisive (top down):
• Start with one cluster and recursively split it
• Point assignment:
• Maintain a set of clusters
• Points belong to “nearest” cluster
Hierarchical Clustering
• Key operation:
Repeatedly combine
two nearest clusters
• Three important questions:
• 1) How do you represent a cluster of more
than one point?
• 2) How do you determine the “nearness” of clusters?
• 3) When to stop combining clusters?
Hierarchical Clustering
• Key operation: Repeatedly combine two nearest clusters
• (1) How to represent a cluster of many points?
• Key problem: As you merge clusters, how do you represent the “location” of each
cluster, to tell which pair of clusters is closest?
• Euclidean case: each cluster has a
centroid = average of its (data)points
• (2) How to determine “nearness” of clusters?
• Measure cluster distances by distances of centroids
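A minimal sketch of this bottom-up process in the Euclidean case: a naive version that repeatedly merges the two clusters with the nearest centroids until k remain (the target k and sample data are illustrative):

def centroid(cluster):
    """Average of the cluster's points, dimension by dimension."""
    n = len(cluster)
    return tuple(sum(p[i] for p in cluster) / n for i in range(len(cluster[0])))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def agglomerate(points, target_k):
    clusters = [[p] for p in points]       # initially, each point is a cluster
    while len(clusters) > target_k:
        # find the pair of clusters with the nearest centroids
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist2(centroid(clusters[ij[0]]),
                                        centroid(clusters[ij[1]])))
        clusters[i] += clusters.pop(j)     # merge the two nearest clusters
    return clusters

data = [(0, 0), (1, 2), (2, 1), (4, 1), (5, 0), (5, 3)]
print(agglomerate(data, 2))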
Example: Hierarchical clustering
[Figure: data points (o) at (0,0), (1,2), (2,1), (4,1), (5,0), (5,3). Merging (1,2) with (2,1) gives centroid x = (1.5,1.5); adding (0,0) gives x = (1,1). Merging (4,1) with (5,0) gives x = (4.5,0.5); adding (5,3) gives x = (4.7,1.3). The sequence of merges is recorded in a dendrogram.]
And in the Non-Euclidean Case?
What about the Non-Euclidean case?
• The only “locations” we can talk about are the points themselves
• i.e., there is no “average” of two points
• Approach 1:
• (1) How to represent a cluster of many points?
clustroid = (data)point “closest” to other points
• (2) How do you determine the “nearness” of clusters? Treat clustroid as if it were
centroid, when computing inter-cluster distances
“Closest” Point?
• (1) How to represent a cluster of many points?
clustroid = point “closest” to other points
• Possible meanings of “closest”:
• Smallest maximum distance to other points
• Smallest average distance to other points
• Smallest sum of squares of distances to other points
• For distance metric d, the clustroid c of cluster C minimizes the sum of squared distances: c = argmin over c in C of Σ_{x∈C} d(x, c)²
[Figure: a cluster on 3 datapoints, with its centroid (X) and clustroid marked. The centroid is the avg. of all (data)points in the cluster, so it is an “artificial” point. The clustroid is an existing (data)point that is “closest” to all other points in the cluster.]
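A minimal sketch of the sum-of-squares choice; any distance function can be plugged in, and Euclidean is shown here as an assumption:

def clustroid(cluster, d):
    """The member point minimizing the sum of squared distances to the others."""
    return min(cluster, key=lambda c: sum(d(x, c) ** 2 for x in cluster))

euclid = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(clustroid([(0, 0), (1, 2), (2, 1)], euclid))  # (1, 2): smallest sum of squares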
Defining “Nearness” of Clusters
• (2) How do you determine the “nearness” of clusters?
• Approach 2:
Intercluster distance = minimum of the distances between any two points,
one from each cluster
• Approach 3:
Pick a notion of “cohesion” of clusters,
e.g., maximum distance from the clustroid
• Merge clusters whose union is most cohesive
Cohesion
• Approach 3.1: Use the diameter of the merged cluster = maximum
distance between points in the cluster
• Approach 3.2: Use the average distance between points in the
cluster
• Approach 3.3: Use a density-based approach
• Take the diameter or avg. distance, e.g., and divide by the number of points
in the cluster
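A minimal sketch of these three cohesion measures; the helper names are illustrative, and d is any distance function:

from itertools import combinations

def diameter(cluster, d):
    """Approach 3.1: maximum distance between any two points in the cluster."""
    return max(d(a, b) for a, b in combinations(cluster, 2))

def average_distance(cluster, d):
    """Approach 3.2: average distance over all pairs of points in the cluster."""
    pairs = list(combinations(cluster, 2))
    return sum(d(a, b) for a, b in pairs) / len(pairs)

def density_cohesion(cluster, d):
    """Approach 3.3: diameter (or avg. distance) divided by the number of points."""
    return diameter(cluster, d) / len(cluster)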
Implementation
• Naïve implementation of hierarchical clustering:
• At each step, compute pairwise distances between all pairs of clusters, then merge
• O(N³)
• A careful implementation using a priority queue can reduce the time to O(N² log N)
• Still too expensive for really big datasets that do not fit in memory
k-means clustering
k–means Algorithm(s)
• Assumes Euclidean space/distance
• Start by picking k, the number of clusters
• Initialize clusters by picking one point per cluster
• Example: Pick one point at random, then k−1 other points, each as far away as possible from the previous points
Populating Clusters
• 1) For each point, place it in the cluster whose current centroid it is
nearest
• 2) After all points are assigned, update the locations of centroids of the k
clusters
• 3) Reassign all points to their closest centroid
• Sometimes moves points between clusters
• Repeat 2 and 3 until convergence
• Convergence: Points don’t move between clusters and centroids stabilize
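A minimal sketch of this loop (Lloyd's algorithm) for Euclidean points, assuming the initial centroids have already been picked:

def kmeans(points, centroids, max_iters=100):
    """Lloyd's algorithm: repeat assignment and centroid update until stable."""
    dim = len(points[0])
    for _ in range(max_iters):
        # steps 1/3: place each point in the cluster of its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # step 2: move each centroid to the average of its assigned points
        new_centroids = [
            tuple(sum(p[d] for p in c) / len(c) for d in range(dim)) if c
            else centroids[i]              # keep an empty cluster's centroid
            for i, c in enumerate(clusters)
        ]
        if new_centroids == centroids:     # convergence: centroids stabilized
            break
        centroids = new_centroids
    return clusters, centroids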
Example: Assigning Clusters
[Figure: data points (x) and centroids; clusters after round 1.]
Example: Assigning Clusters
[Figure: the same data points; centroids have moved and some points are reassigned; clusters after round 2.]
Example: Assigning Clusters
[Figure: final centroids and assignments; clusters at the end.]
Getting the k right
How to select k?
• Try different k, looking at the change in the average distance to
centroid as k increases
• Average falls rapidly until right k, then changes little
[Plot: average distance to centroid (y-axis) vs. k (x-axis); the curve falls rapidly until the best value of k, then changes little.]
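A minimal sketch of this heuristic, reusing the kmeans helper and the sample data sketched earlier; the crude initialization from the first k points is illustrative only:

def avg_distance_to_centroid(points, k):
    # crude initialization: the first k points as starting centroids
    clusters, centroids = kmeans(points, list(points[:k]))
    total = sum(sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5
                for c, cluster in zip(centroids, clusters) for p in cluster)
    return total / len(points)

for k in (1, 2, 3, 4):
    print(k, avg_distance_to_centroid(data, k))  # look for the "elbow"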
Example: Picking k
Too few clusters: many long distances to centroid.
[Figure: the earlier scatter grouped into too few clusters.]
Example: Picking k
Just right: distances to centroid are rather short.
[Figure: the scatter grouped into its natural clusters.]
Example: Picking k
Too many clusters: little improvement in average distance.
[Figure: the scatter split into more clusters than needed.]
K-means Clustering
Iteration-by-iteration animated views (use the first one):
• https://www.dominodatalab.com/hs-fs/hubfs/Imported_Blog_Media/K_Means_Explained_JRS.gif?width=715&name=K_Means_Explained_JRS.gif
• https://files.realpython.com/media/centroids_iterations.247379590275.gif
The BFR Algorithm
Extension of k-means to large data
BFR Algorithm
• BFR [Bradley-Fayyad-Reina] is a variant of k-means designed to handle very large (disk-resident) data sets
• Assumes that clusters are normally distributed around a centroid in a Euclidean space
• Standard deviations in different dimensions may vary
• Clusters are axis-aligned ellipses
• Efficient way to summarize clusters (want memory required O(clusters) and not O(data))
BFR Algorithm
• Points are read from disk one main-memory-full at a time
• Most points from previous memory loads are summarized by simple
statistics
• To begin, from the initial load we select the initial k centroids by some
sensible approach:
• Take k random points
• Take a small random sample and cluster optimally
• Take a sample; pick a random point, and then k−1 more points, each as far from the previously selected points as possible
Three Classes of Points
3 sets of points which we keep track of:
• Discard set (DS):
• Points close enough to a centroid to be summarized
• Compression set (CS):
• Groups of points that are close together but not close to any existing
centroid
• These points are summarized, but not assigned to a cluster
• Retained set (RS):
• Isolated points waiting to be assigned to a compression set
BFR: “Galaxies” Picture
[Figure: isolated points in the RS; compressed sets whose points are in the CS; a cluster whose points are in the DS, with its centroid marked.]
Discard set (DS): Close enough to a centroid to be summarized
Compression set (CS): Summarized, but not assigned to a cluster
Retained set (RS): Isolated points
Summarizing Sets of Points
For each cluster, the discard set (DS) is summarized by:
• The number of points, N
• The vector SUM, whose ith component is the sum of the coordinates
of the points in the ith dimension
• The vector SUMSQ: ith component = sum of squares of coordinates in
ith dimension
[Figure: a cluster whose points are all in the DS, with its centroid marked.]
Summarizing Points: Comments
• 2d + 1 values represent a cluster of any size
• d = number of dimensions
• The average in each dimension (the centroid) can be calculated as SUMi / N
• SUMi = ith component of SUM
• The variance of a cluster’s discard set in dimension i is: (SUMSQi / N) − (SUMi / N)²
• And the standard deviation is the square root of that
• Next step: Actual clustering
Note: Dropping the “axis-aligned” clusters assumption would require storing a full covariance matrix to summarize the cluster. Instead of SUMSQ being a d-dimensional vector, it would be a d × d matrix, which is too big!
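A minimal sketch of such a summary; the class name DSSummary is illustrative, but the statistics follow the N, SUM, SUMSQ definitions above:

class DSSummary:
    """N, SUM, SUMSQ summary of one cluster's discard set (2d + 1 values)."""
    def __init__(self, d):
        self.n = 0
        self.sum = [0.0] * d
        self.sumsq = [0.0] * d

    def add(self, point):
        self.n += 1
        for i, x in enumerate(point):
            self.sum[i] += x
            self.sumsq[i] += x * x

    def centroid(self):
        return [s / self.n for s in self.sum]

    def variance(self):
        # per-dimension variance: SUMSQ_i / N − (SUM_i / N)²
        return [sq / self.n - (s / self.n) ** 2
                for s, sq in zip(self.sum, self.sumsq)]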
The “Memory-Load” of Points
Processing the “Memory-Load” of points (1):
• 1) Find those points that are “sufficiently close” to a cluster centroid
and add those points to that cluster and the DS
• These points are so close to the centroid that
they can be summarized and then discarded
• 2) Use any main-memory clustering algorithm to cluster the
remaining points and the old RS
• Clusters go to the CS; outlying points to the RS
The “Memory-Load” of Points
Processing the “Memory-Load” of points (2):
• 3) DS set: Adjust statistics of the clusters to account for the new points
• Add Ns, SUMs, SUMSQs
• 4) Consider merging compressed sets in the CS
• 5) If this is the last round, merge all compressed sets in the CS and all RS
points into their nearest cluster
A Few Details…
• Q1) How do we decide if a point is “close enough” to a cluster that
we will add the point to that cluster?
• Q2) How do we decide whether two compressed sets (CS) deserve
to be combined into one?
How Close is Close Enough?
• Q1) We need a way to decide whether to put a new point into a
cluster (and discard)
• BFR suggests two ways:
• The Mahalanobis distance is less than a threshold
• High likelihood of the point belonging to currently nearest centroid
Mahalanobis Distance
• Normalized Euclidean distance from centroid
• For point (x1, …, xd) and centroid (c1, …, cd)
1. Normalize in each dimension: yi = (xi − ci) / σi
2. Take the sum of the squares of the yi
3. Take the square root
σi … standard deviation of points in the cluster in the ith dimension
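A minimal sketch of the computation, assuming the per-dimension standard deviations of the cluster are already known:

import math

def mahalanobis(point, centroid, sigmas):
    """Normalized Euclidean distance from the centroid, per the steps above."""
    return math.sqrt(sum(((x - c) / s) ** 2
                         for x, c, s in zip(point, centroid, sigmas)))

print(mahalanobis((3.0, 5.0), (1.0, 4.0), (1.0, 0.5)))  # sqrt(2² + 2²) ≈ 2.83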
Mahalanobis Distance
• If clusters are normally distributed in d dimensions, then after transformation, one standard deviation = √d
• i.e., 68% of the points of the cluster will have a Mahalanobis distance < √d
• Accept a point for a cluster if its M.D. is < some threshold, e.g., 2 standard deviations
Picture: Equal M.D. Regions
• Euclidean vs. Mahalanobis distance
[Figure: contours of equidistant points from the origin, in three panels: uniformly distributed points with Euclidean distance; normally distributed points with Euclidean distance; and normally distributed points with Mahalanobis distance, where the contours stretch along the high-variance dimension.]
Should 2 CS clusters be combined?
Q2) Should 2 CS subclusters be combined?
• Compute the variance of the combined subcluster
• N, SUM, and SUMSQ allow us to make that calculation quickly
• Combine if the combined variance is
below some threshold
• Many alternatives: Treat dimensions differently, consider density
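A minimal sketch of the test, reusing the N, SUM, SUMSQ summaries (the DSSummary objects sketched earlier); the threshold is an assumed parameter:

def combined_variance(a, b):
    """Per-dimension variance of the union of two summarized subclusters."""
    n = a.n + b.n
    return [(sqa + sqb) / n - ((sa + sb) / n) ** 2
            for sa, sqa, sb, sqb in zip(a.sum, a.sumsq, b.sum, b.sumsq)]

def should_merge(a, b, threshold):
    # combine if the combined variance is below the threshold in every dimension
    return all(v < threshold for v in combined_variance(a, b))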
The CURE Algorithm
Extension of k-means to clusters of arbitrary shapes
The CURE Algorithm
• Problem with BFR/k-means:
• Assumes clusters are normally distributed in each dimension
• And axes are fixed: ellipses at an angle are not OK
• CURE (Clustering Using REpresentatives):
• Assumes a Euclidean distance
• Allows clusters to assume any shape
• Uses a collection of representative points to represent clusters
Example: Stanford Salaries
[Figure: scatter of salary (y-axis) vs. age (x-axis); points labeled e and h form two elongated, non-elliptical groups.]
Starting CURE
2-pass algorithm. Pass 1:
• 0) Pick a random sample of points that fit in main memory
• 1) Initial clusters:
• Cluster these points hierarchically: group nearest points/clusters
• 2) Pick representative points:
• For each cluster, pick a sample of points, as dispersed as possible
• From the sample, pick representatives by moving them (say) 20% toward the centroid of the cluster
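A minimal sketch of step 2: greedy farthest-point selection followed by shrinking toward the centroid (num_reps and shrink are assumed parameters):

def pick_representatives(cluster, num_reps=4, shrink=0.2):
    """Pick dispersed points, then move each 20% toward the centroid."""
    d = len(cluster[0])
    cent = [sum(p[i] for p in cluster) / len(cluster) for i in range(d)]
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    reps = [max(cluster, key=lambda p: dist(p, cent))]  # start far from centroid
    while len(reps) < min(num_reps, len(cluster)):
        # greedily add the point farthest from all representatives so far
        reps.append(max(cluster, key=lambda p: min(dist(p, r) for r in reps)))
    return [tuple(x + shrink * (c - x) for x, c in zip(p, cent)) for p in reps]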
Example: Initial Clusters
[Figure: the salary-vs-age scatter grouped into initial hierarchical clusters.]
Example: Pick Dispersed Points
[Figure: the same scatter; (say) 4 remote points are picked for each cluster.]
Example: Pick Dispersed Points
[Figure: the picked points are moved (say) 20% toward the centroid of their cluster.]
Finishing CURE
Pass 2:
• Now, rescan the whole dataset and visit each point p in the data set
• Place p in the “closest cluster”
• Normal definition of “closest”: find the representative nearest to p and assign p to that representative’s cluster
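A minimal sketch of this assignment, assuming each cluster is given by its list of (shrunken) representative points:

def closest_cluster(p, cluster_reps, dist):
    """Index of the cluster whose nearest representative is closest to p."""
    return min(range(len(cluster_reps)),
               key=lambda i: min(dist(p, r) for r in cluster_reps[i]))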
Summary
• Clustering: Given a set of points, with a notion of distance between points,
group the points into some number of clusters
• Algorithms:
• Agglomerative hierarchical clustering:
• Centroid and clustroid
• k-means:
• Initialization, picking k
• BFR
• CURE
References
• Stanford University, Mining of Massive Datasets: http://www.mmds.org
• www.dominodatalab.com
• Books