A Simple Introduction to Support Vector
Machines
Martin Law
Lecture for CSE 802
Department of Computer Science and Engineering
Michigan State University
Outline
A brief history of SVM
Large-margin linear classifier
Linearly separable case
Non-linearly separable case
Creating nonlinear classifiers: kernel trick
A simple example
Discussion on SVM
Conclusion
History of SVM
SVM is related to statistical learning theory [3]
SVM was first introduced in 1992 [1]
SVM became popular because of its success in
handwritten digit recognition
1.1% test error rate for SVM; this is the same as the error
rate of a carefully constructed neural network, LeNet 4
See Section 5.11 in [2] or the discussion in [3] for details
SVM is now regarded as an important example of kernel
methods, one of the key areas in machine learning
Note: the meaning of kernel is different from the kernel
function for Parzen windows
[1] B. E. Boser et al. A Training Algorithm for Optimal Margin Classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144-152, Pittsburgh, 1992.
[2] L. Bottou et al. Comparison of Classifier Methods: A Case Study in Handwritten Digit Recognition. In Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 2, pp. 77-82, 1994.
[3] V. Vapnik. The Nature of Statistical Learning Theory. 2nd edition, Springer, 1999.
What is a good Decision Boundary?
Consider a two-class, linearly separable classification problem
Many decision boundaries!
The Perceptron algorithm can be used to find such a boundary
Different algorithms have been proposed (DHS ch. 5)
Are all decision boundaries equally good?
[Figure: many possible decision boundaries separating Class 1 and Class 2]
Examples of Bad Decision Boundaries
[Figure: two examples of decision boundaries that separate Class 1 and Class 2 but pass very close to the training points]
Large-margin Decision Boundary
The decision boundary should be as far away from the
data of both classes as possible
We should maximize the margin, m
The distance between the origin and the line wᵀx = k is k/||w||
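A reconstruction of the margin expression implied here, assuming the usual canonical scaling in which the closest points of the two classes satisfy wᵀx + b = ±1:

```latex
% Margin between the hyperplanes w^T x + b = +1 and w^T x + b = -1,
% i.e. w^T x = 1 - b and w^T x = -1 - b
m = \frac{(1 - b) - (-1 - b)}{\|\mathbf{w}\|} = \frac{2}{\|\mathbf{w}\|}
```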
[Figure: Class 1 and Class 2 separated by a decision boundary, with the margin m between the two classes marked]
Finding the Decision Boundary
Let {x1, ..., xn} be our data set and let yi ∈ {1, −1} be
the class label of xi
The decision boundary should classify all points correctly
The decision boundary can be found by solving the
following constrained optimization problem
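A reconstruction of the standard hard-margin formulation that this slide refers to, assuming the canonical scaling yi(wᵀxi + b) ≥ 1 on the training points:

```latex
\min_{\mathbf{w}, b} \;\; \tfrac{1}{2}\|\mathbf{w}\|^{2}
\quad \text{subject to} \quad
y_i(\mathbf{w}^{T}\mathbf{x}_i + b) \ge 1, \quad i = 1, \dots, n
```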
This is a constrained optimization problem. Solving it
requires some new tools
Feel free to ignore the following several slides; what is
important is the constrained optimization problem above
Recap of Constrained Optimization
Suppose we want to: minimize f(x) subject to g(x) = 0
A necessary condition for x0 to be a solution is shown below; the scalar λ is the Lagrange multiplier
For multiple constraints gi(x) = 0, i = 1, …, m, we need a Lagrange multiplier λi for each of the constraints
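A reconstruction of the condition in the notation used here:

```latex
% Equality-constrained case: minimize f(x) subject to g(x) = 0
\nabla f(\mathbf{x}_0) + \lambda \,\nabla g(\mathbf{x}_0) = \mathbf{0}

% With multiple constraints g_i(x) = 0, i = 1, ..., m
\nabla f(\mathbf{x}_0) + \sum_{i=1}^{m} \lambda_i \,\nabla g_i(\mathbf{x}_0) = \mathbf{0}
```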
Recap of Constrained Optimization
The case for inequality constraints gi(x) ≤ 0 is similar, except that the Lagrange multipliers αi should be non-negative
If x0 is a solution to the constrained optimization problem, there must exist αi ≥ 0 for i = 1, …, m such that x0 satisfies the condition below
The function L(x, α) shown below is also known as the Lagrangian; we want to set its gradient to 0
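A reconstruction of the Lagrangian and the stationarity condition referred to above:

```latex
% Minimize f(x) subject to g_i(x) <= 0, i = 1, ..., m
L(\mathbf{x}, \boldsymbol{\alpha}) = f(\mathbf{x}) + \sum_{i=1}^{m} \alpha_i\, g_i(\mathbf{x}),
\qquad \alpha_i \ge 0

\nabla_{\mathbf{x}} L(\mathbf{x}_0, \boldsymbol{\alpha})
 = \nabla f(\mathbf{x}_0) + \sum_{i=1}^{m} \alpha_i \,\nabla g_i(\mathbf{x}_0) = \mathbf{0}
```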
Back to the Original Problem
The Lagrangian is
Note that ||w||² = wᵀw
Setting the gradient of the Lagrangian w.r.t. w and b to zero, we have the conditions below
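A reconstruction of the missing expressions, obtained from the primal problem above with the constraints written as 1 − yi(wᵀxi + b) ≤ 0:

```latex
% Lagrangian of the hard-margin problem
L(\mathbf{w}, b, \boldsymbol{\alpha})
  = \tfrac{1}{2}\mathbf{w}^{T}\mathbf{w}
  + \sum_{i=1}^{n} \alpha_i \left( 1 - y_i(\mathbf{w}^{T}\mathbf{x}_i + b) \right),
  \qquad \alpha_i \ge 0

% Setting the gradient w.r.t. w and b to zero
\frac{\partial L}{\partial \mathbf{w}} = \mathbf{0}
  \;\Rightarrow\; \mathbf{w} = \sum_{i=1}^{n} \alpha_i y_i \mathbf{x}_i,
\qquad
\frac{\partial L}{\partial b} = 0
  \;\Rightarrow\; \sum_{i=1}^{n} \alpha_i y_i = 0
```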
The Dual Problem
If we substitute w = Σi αi yi xi into the Lagrangian, we have the dual objective shown below
Note that the terms involving b vanish because Σi αi yi = 0
This is a function of αi only
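A reconstruction of the resulting expression, consistent with the substitution above:

```latex
W(\boldsymbol{\alpha})
  = \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
      \alpha_i \alpha_j\, y_i y_j\, \mathbf{x}_i^{T}\mathbf{x}_j
```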
The Dual Problem
The new objective function is in terms of αi only
It is known as the dual problem: if we know w, we know all αi; if we know all αi, we know w
The original problem is known as the primal problem
The objective function of the dual problem needs to be maximized!
The dual problem is therefore given below; the constraints αi ≥ 0 are the properties of αi when we introduce the Lagrange multipliers, and the constraint Σi αi yi = 0 is the result when we differentiate the original Lagrangian w.r.t. b
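A reconstruction of the dual problem stated on this slide:

```latex
\max_{\boldsymbol{\alpha}} \;\;
  \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
      \alpha_i \alpha_j\, y_i y_j\, \mathbf{x}_i^{T}\mathbf{x}_j
\quad \text{subject to} \quad
  \alpha_i \ge 0, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0
```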
The Dual Problem
This is a quadratic programming (QP) problem
A global maximum of the αi can always be found
w can be recovered by w = Σi αi yi xi
Characteristics of the Solution
Many of the αi are zero
w is a linear combination of a small number of data points
This sparse representation can be viewed as data compression, as in the construction of a kNN classifier
xi with non-zero αi are called support vectors (SV)
The decision boundary is determined only by the SVs
Let tj (j = 1, ..., s) be the indices of the s support vectors. We can write w in terms of the support vectors only (see below)
For testing with a new data point z, compute wᵀz + b as a sum over the support vectors (see below), and classify z as class 1 if the sum is positive, and class 2 otherwise
Note: w need not be formed explicitly
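A reconstruction of the two missing expressions, written in terms of the support vectors:

```latex
% w as a combination of the support vectors only
\mathbf{w} = \sum_{j=1}^{s} \alpha_{t_j}\, y_{t_j}\, \mathbf{x}_{t_j}

% discriminant value for a new point z (w never formed explicitly)
f(\mathbf{z}) = \mathbf{w}^{T}\mathbf{z} + b
  = \sum_{j=1}^{s} \alpha_{t_j}\, y_{t_j}\, \big(\mathbf{x}_{t_j}^{T}\mathbf{z}\big) + b
```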
The Quadratic Programming Problem
Many approaches have been proposed
LOQO, CPLEX, etc. (see http://www.numerical.rl.ac.uk/qp/qp.html)
Most are interior-point methods
Start with an initial solution that can violate the constraints
Improve this solution by optimizing the objective function
and/or reducing the amount of constraint violation
For SVM, sequential minimal optimization (SMO) seems
to be the most popular
A QP with two variables is trivial to solve
Each iteration of SMO picks a pair (αi, αj) and solves the QP with these two variables; repeat until convergence
In practice, we can just regard the QP solver as a black box without bothering about how it works (see the sketch below)
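A minimal sketch of treating the solver as a black box, assuming scikit-learn is available; its SVC class solves the dual QP internally (via an SMO-style algorithm) and exposes the support vectors and the products αi·yi:

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data: two classes in 2D
X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])
y = np.array([1, 1, 1, -1, -1, -1])

# Hard-margin behaviour is approximated with a large C
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)                      # the QP is solved inside fit()

print(clf.support_vectors_)        # support vectors x_i with alpha_i > 0
print(clf.dual_coef_)              # the values alpha_i * y_i
print(clf.coef_, clf.intercept_)   # w and b recovered from the alphas
```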
A Geometrical Interpretation
[Figure: geometrical interpretation of the solution for Class 1 and Class 2; the support vectors lie on the margin with α6 = 1.4, α1 = 0.8, and α8 = 0.6, while all other points have αi = 0 (α2 = α3 = α4 = α5 = α7 = α9 = α10 = 0)]
Non-linearly Separable Problems
We allow errors ξi in classification; they are based on the output of the discriminant function wᵀx + b
Σi ξi approximates the number of misclassified samples
[Figure: Class 1 and Class 2 points that are not linearly separable; points on the wrong side of the decision boundary receive non-zero slack ξi]
Soft Margin Hyperplane
If we minimize Σi ξi, ξi can be computed by ξi = max(0, 1 − yi(wᵀxi + b))
ξi are slack variables in the optimization
Note that ξi = 0 if there is no error for xi
Σi ξi is an upper bound on the number of errors
We want to minimize ½||w||² + C Σi ξi
C: tradeoff parameter between error and margin
The optimization problem becomes the soft-margin problem shown below
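A reconstruction of the soft-margin optimization problem:

```latex
\min_{\mathbf{w}, b, \boldsymbol{\xi}} \;\;
  \tfrac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{i=1}^{n} \xi_i
\quad \text{subject to} \quad
  y_i(\mathbf{w}^{T}\mathbf{x}_i + b) \ge 1 - \xi_i, \qquad \xi_i \ge 0
```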
The Optimization Problem
The dual of this new constrained optimization problem is given below
w is recovered as w = Σi αi yi xi
This is very similar to the optimization problem in the linearly separable case, except that there is an upper bound C on αi now
Once again, a QP solver can be used to find the αi
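A reconstruction of the soft-margin dual:

```latex
\max_{\boldsymbol{\alpha}} \;\;
  \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
      \alpha_i \alpha_j\, y_i y_j\, \mathbf{x}_i^{T}\mathbf{x}_j
\quad \text{subject to} \quad
  0 \le \alpha_i \le C, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0
```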
Extension to Non-linear Decision Boundary
So far, we have only considered large-margin classifiers
with a linear decision boundary
How to generalize it to become nonlinear?
Key idea: transform xi to a higher dimensional space to
make life easier
Input space: the space where the points xi are located
Feature space: the space of φ(xi) after transformation
Why transform?
A linear operation in the feature space is equivalent to a non-linear operation in the input space
Classification can become easier with a proper transformation. In the XOR problem, for example, adding a new feature x1x2 makes the problem linearly separable (see the sketch below)
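A small worked illustration of the XOR remark, assuming a ±1 encoding of the inputs (this particular encoding is an assumption, not taken from the slide):

```latex
% XOR with inputs x_1, x_2 \in \{-1, +1\}: the label is y = -\,x_1 x_2.
% No line separates the classes in the original 2D input space,
% but with the extra feature x_3 = x_1 x_2 the rule sign(-x_3)
% classifies every point correctly, i.e. the data become linearly separable.
\begin{array}{cc|c|c}
x_1 & x_2 & x_3 = x_1 x_2 & y \\ \hline
+1 & +1 & +1 & -1 \\
+1 & -1 & -1 & +1 \\
-1 & +1 & -1 & +1 \\
-1 & -1 & +1 & -1
\end{array}
```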
Transforming the Data (c.f. DHS Ch. 5)
[Figure: the mapping φ(·) sends each point from the input space to a point in the feature space]
Note: feature space is of higher dimension
than the input space in practice
Computation in the feature space can be costly because it is
high dimensional
The feature space is typically infinite-dimensional!
The kernel trick comes to the rescue
The Kernel Trick
Recall the SVM optimization problem
The data points only appear as inner products xiᵀxj
As long as we can calculate the inner product in the
feature space, we do not need the mapping explicitly
Many common geometric operations (angles, distances)
can be expressed by inner products
Define the kernel function K by
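A reconstruction of the definition:

```latex
K(\mathbf{x}_i, \mathbf{x}_j) = \phi(\mathbf{x}_i)^{T} \phi(\mathbf{x}_j)
```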
An Example for φ(·) and K(·,·)
Suppose φ(·) is given as in the reconstruction below
An inner product in the feature space can then be computed in closed form
So, if we define the kernel function accordingly, there is no need to carry out φ(·) explicitly
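A reconstruction of the three missing expressions, using a standard 2D example that matches the degree-2 polynomial kernel (this particular φ is an assumption consistent with the slide, not a verbatim copy):

```latex
% A mapping from 2D input space to a 6D feature space
\phi\!\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right)
 = \left(1,\; \sqrt{2}\,x_1,\; \sqrt{2}\,x_2,\; x_1^2,\; x_2^2,\; \sqrt{2}\,x_1 x_2\right)

% The inner product in the feature space
\phi(\mathbf{x})^{T}\phi(\mathbf{y})
 = \left(1 + x_1 y_1 + x_2 y_2\right)^{2}

% So the kernel can be evaluated directly in the input space
K(\mathbf{x}, \mathbf{y}) = \left(1 + \mathbf{x}^{T}\mathbf{y}\right)^{2}
```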
This use of the kernel function to avoid carrying out φ(·) explicitly is known as the kernel trick
Kernel Functions
In practical use of SVM, the user specifies the kernel
function; the transformation φ(·) is not explicitly stated
Given a kernel function K(xi, xj), the transformation φ(·)
is given by its eigenfunctions (a concept in functional
analysis)
Eigenfunctions can be difficult to construct explicitly
This is why people only specify the kernel function without
worrying about the exact transformation
Another view: the kernel function, being an inner product, is
really a similarity measure between the objects
Examples of Kernel Functions
Polynomial kernel with degree d (formulas for all three kernels are reconstructed below)
Radial basis function (RBF) kernel with width σ
Closely related to radial basis function neural networks
The feature space is infinite-dimensional
Sigmoid kernel with parameters κ and θ
It does not satisfy the Mercer condition for all κ and θ
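Reconstructions of the three kernel formulas, in a standard form consistent with the parameter names used here (the exact parameterization on the original slide may differ slightly):

```latex
% Polynomial kernel of degree d
K(\mathbf{x}, \mathbf{y}) = \left(\mathbf{x}^{T}\mathbf{y} + 1\right)^{d}

% Radial basis function (RBF) kernel with width \sigma
K(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\frac{\|\mathbf{x} - \mathbf{y}\|^{2}}{2\sigma^{2}}\right)

% Sigmoid kernel with parameters \kappa and \theta
K(\mathbf{x}, \mathbf{y}) = \tanh\!\left(\kappa\, \mathbf{x}^{T}\mathbf{y} + \theta\right)
```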
Modification Due to Kernel Function
Change all inner products to kernel functions
For training,
Original, and with the kernel function (both shown below)
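A reconstruction of the two dual problems being contrasted:

```latex
% Original (inner products in the input space)
\max_{\boldsymbol{\alpha}} \;
  \sum_{i} \alpha_i
  - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, \mathbf{x}_i^{T}\mathbf{x}_j
\quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; \sum_{i} \alpha_i y_i = 0

% With the kernel function
\max_{\boldsymbol{\alpha}} \;
  \sum_{i} \alpha_i
  - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, K(\mathbf{x}_i, \mathbf{x}_j)
\quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; \sum_{i} \alpha_i y_i = 0
```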
Modification Due to Kernel Function
For testing, the new data point z is classified as class 1 if f ≥ 0, and as class 2 if f < 0
Original, and with the kernel function (both shown below)
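A reconstruction of the two discriminant functions being contrasted:

```latex
% Original (inner products in the input space)
f(\mathbf{z}) = \sum_{j=1}^{s} \alpha_{t_j}\, y_{t_j}\, \big(\mathbf{x}_{t_j}^{T}\mathbf{z}\big) + b

% With the kernel function
f(\mathbf{z}) = \sum_{j=1}^{s} \alpha_{t_j}\, y_{t_j}\, K(\mathbf{x}_{t_j}, \mathbf{z}) + b
```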
More on Kernel Functions
Since the training of SVM only requires the value of K(xi,
xj), there is no restriction on the form of xi and xj
xi can be a sequence or a tree, instead of a feature vector
K(xi, xj) is just a similarity measure comparing xi and xj
For a test object z, the discriminant function is essentially a weighted sum of the similarities between z and a pre-selected set of objects (the support vectors)
More on Kernel Functions
Not all similarity measures can be used as kernel functions, however
The kernel function needs to satisfy the Mercer condition, i.e., the function must be positive semi-definite
This implies that the n by n kernel matrix, in which the (i,j)-th entry is K(xi, xj), is always positive semi-definite
This also means that the QP is convex and can be solved in
polynomial time
Example
Suppose we have 5 1D data points
x1=1, x2=2, x3=4, x4=5, x5=6, with 1, 2, 6 as class 1 and 4, 5 as class 2, i.e., y1=1, y2=1, y3=−1, y4=−1, y5=1
We use the polynomial kernel of degree 2
K(x, y) = (xy + 1)²
C is set to 100
We first find αi (i = 1, …, 5) by solving the dual problem below
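A reconstruction of the dual problem for this example, using the kernel above:

```latex
\max_{\boldsymbol{\alpha}} \;
  \sum_{i=1}^{5} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{5} \sum_{j=1}^{5}
      \alpha_i \alpha_j\, y_i y_j\, (x_i x_j + 1)^{2}
\quad \text{subject to} \quad
  0 \le \alpha_i \le 100, \qquad \sum_{i=1}^{5} \alpha_i y_i = 0
```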
Example
By using a QP solver, we get
α1=0, α2=2.5, α3=0, α4=7.333, α5=4.833
Note that the constraints are indeed satisfied
The support vectors are {x2=2, x4=5, x5=6}
The discriminant function is shown in the reconstruction below
b is recovered by solving f(2)=1, or f(5)=−1, or f(6)=1, as x2 and x5 lie on the margin line wᵀφ(x) + b = 1 and x4 lies on the margin line wᵀφ(x) + b = −1
All three give b=9
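A reconstruction of the discriminant function, obtained by substituting the α values and the kernel into f(z) = Σi αi yi K(xi, z) + b:

```latex
f(z) = 2.5\,(1)\,(2z+1)^{2} + 7.333\,(-1)\,(5z+1)^{2} + 4.833\,(1)\,(6z+1)^{2} + b
     = 0.6667\,z^{2} - 5.333\,z + b,
\qquad \text{and with } b = 9:\quad
f(z) = 0.6667\,z^{2} - 5.333\,z + 9
```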
Example
[Figure: value of the discriminant function f(z) over the 1D input; f(z) is positive (class 1) to the left and right, and negative (class 2) in the middle region containing the class 2 points]
Why Does SVM Work?
The feature space is often very high dimensional. Why
don't we have the curse of dimensionality?
A classifier in a high-dimensional space has many
parameters and is hard to estimate
Vapnik argues that the fundamental problem is not the
number of parameters to be estimated. Rather, the
problem is about the flexibility of a classifier
Typically, a classifier with many parameters is very
flexible, but there are also exceptions
For example (Vapnik's classic construction): let xi = 10⁻ⁱ, where i ranges from 1 to n. The one-parameter classifier f(x, a) = sign(sin(ax)) can classify all xi correctly for every possible combination of class labels on the xi
This 1-parameter classifier is very flexible
Why Does SVM Work?
Vapnik argues that the flexibility of a classifier should not be characterized by the number of parameters, but by the capacity of the classifier
This is formalized by the VC-dimension of a classifier
Consider a linear classifier in two-dimensional space
If we have three training data points, no matter how
those points are labeled, we can classify them perfectly
VC-dimension
However, if we have four points, we can find a labeling
such that the linear classifier fails to be perfect
We can see that 3 is the critical number
The VC-dimension of a linear classifier in a 2D space is 3 because, for 3 points in general position in the training set, perfect classification is always possible irrespective of the labeling, whereas for 4 points, perfect classification can be impossible
VC-dimension
The VC-dimension of the nearest neighbor classifier is infinite, because no matter how many points you have, you get perfect classification on the training data
The higher the VC-dimension, the more flexible a
classifier is
VC-dimension, however, is a theoretical concept; the VC-dimension of most classifiers, in practice, is difficult to compute exactly
Qualitatively, if we think a classifier is flexible, it probably
has a high VC-dimension
Structural Risk Minimization (SRM)
A fancy term, but it simply means: we should find a
classifier that minimizes the sum of training error
(empirical risk) and a term that is a function of the
flexibility of the classifier (model complexity)
Recall the concept of confidence interval (CI)
For example, we are 99% confident that the population
mean lies in the 99% CI estimated from a sample
We can also construct a CI for the generalization error
(error on the test set)
Structural Risk Minimization (SRM)
[Figure: error rate for two classifiers, each shown as its training error plus the confidence interval (CI) of its test error; classifier 1 has a lower training error but a wider CI than classifier 2]
SRM prefers classifier 2 although it has a higher training error, because the upper limit of its CI is smaller
Structural Risk Minimization (SRM)
It can be proved that the more flexible a classifier, the
wider the CI is
The width can be upper-bounded by a function of the
VC-dimension of the classifier
In practice, the confidence interval of the testing error
contains [0,1] and hence is trivial
Empirically, minimizing the upper bound is still useful
The two classifiers are often nested, i.e., one classifier
is a special case of the other
SVM can be viewed as implementing SRM because Σi ξi approximates the training error, while ||w||² is related to the VC-dimension of the resulting classifier
See http://www.svms.org/srm/ for more details
Justification of SVM
Large margin classifier
SRM
Ridge regression: the term ||w||2 shrinks the
parameters towards zero to avoid overfitting
The term ||w||² can also be viewed as imposing a weight-decay prior on the weight vector, with the SVM solution as the MAP estimate
Choosing the Kernel Function
Probably the most tricky part of using SVM.
The kernel function is important because it creates the
kernel matrix, which summarizes all the data
Many principles have been proposed (diffusion kernel, Fisher kernel, string kernel, …)
There is even research to estimate the kernel matrix
from available information
In practice, a low-degree polynomial kernel or an RBF kernel with a reasonable width is a good initial try
Note that SVM with RBF kernel is closely related to RBF
neural networks, with the centers of the radial basis
functions automatically chosen for SVM
Other Aspects of SVM
How to use SVM for multi-class classification?
One can change the QP formulation to become multi-class
More often, multiple binary classifiers are combined
See DHS 5.2.2 for some discussion
One can train multiple one-versus-all classifiers, or combine
multiple pairwise classifiers intelligently
How to interpret the SVM discriminant function value as
probability?
By performing logistic regression on the SVM output of a
set of data (validation set) that is not used for training
Some SVM software (like LIBSVM) has these features built in (see the sketch below)
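A minimal sketch of both features, assuming scikit-learn (whose SVC wraps LIBSVM) is available; multi-class classification is handled by combining pairwise binary classifiers, and probability=True fits a logistic (Platt-style) mapping on held-out predictions:

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)   # 3 classes, so multiple binary SVMs are combined

# probability=True adds a Platt-style sigmoid fit via internal cross-validation
clf = SVC(kernel="rbf", C=10.0, probability=True)
clf.fit(X, y)

print(clf.predict(X[:3]))           # predicted class labels
print(clf.predict_proba(X[:3]))     # calibrated class probabilities
```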
Software
A list of SVM implementations can be found at http://www.kernel-machines.org/software.html
Some implementations (such as LIBSVM) can handle multi-class classification
SVMlight is one of the earliest implementations of SVM
Several Matlab toolboxes for SVM are also available
Summary: Steps for Classification
Prepare the pattern matrix
Select the kernel function to use
Select the parameters of the kernel function and the value of C
You can use the values suggested by the SVM software, or you can set apart a validation set to determine the values of the parameters
Execute the training algorithm and obtain the αi
Unseen data can be classified using the αi and the support vectors (a code sketch of these steps follows below)
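A minimal end-to-end sketch of the steps above, assuming scikit-learn is available; the dataset, kernel choice, and parameter grid are illustrative only:

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# 1. Prepare the pattern matrix (rows = samples, columns = features)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)   # a non-linearly separable toy problem

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2-3. Select the kernel, then tune its parameter and C on validation folds
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)

# 4. Execute the training algorithm (the alphas are obtained internally)
search.fit(X_train, y_train)

# 5. Classify unseen data using the alphas and the support vectors
print(search.best_params_)
print(search.score(X_test, y_test))
```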
Strengths and Weaknesses of SVM
Strengths
Training is relatively easy
No local optima, unlike in neural networks
It scales relatively well to high dimensional data
Tradeoff between classifier complexity and error can be
controlled explicitly
Non-traditional data like strings and trees can be used as
input to SVM, instead of feature vectors
Weaknesses
Need to choose a good kernel function.
Other Types of Kernel Methods
A lesson learnt in SVM: a linear algorithm in the feature
space is equivalent to a non-linear algorithm in the input
space
Standard linear algorithms can be generalized to their non-linear versions by going to the feature space
Kernel principal component analysis, kernel independent
component analysis, kernel canonical correlation analysis,
kernel k-means, and 1-class SVM are some examples
Conclusion
SVM is a useful alternative to neural networks
Two key concepts of SVM: maximize the margin and the
kernel trick
Many SVM implementations are available on the web for
you to try on your data set!
Resources
http://www.kernel-machines.org/
http://www.support-vector.net/
http://www.support-vector.net/icml-tutorial.pdf
http://www.kernel-machines.org/papers/tutorialnips.ps.gz
http://www.clopinet.com/isabelle/Projects/SVM/applist.html
Demonstration
Iris data set
Class 1 and class 3 are merged in this demo
Example of SVM Applications: Handwriting
Recognition
Multi-class Classification
SVM is basically a two-class classifier
One can change the QP formulation to allow multi-class
classification
More commonly, the data set is divided into two parts
intelligently in different ways and a separate SVM is
trained for each way of division
Multi-class classification is done by combining the output
of all the SVM classifiers
Majority rule
Error correcting code
Directed acyclic graph
Epsilon Support Vector Regression
(ε-SVR)
Linear regression in the feature space
Unlike in least-squares regression, the error function is the ε-insensitive loss function
Intuitively, mistakes smaller than ε are ignored
This leads to sparsity similar to SVM
[Figure: the ε-insensitive loss function (no penalty when the value is within ±ε of the target, linearly growing penalty outside) compared with the squared loss function (penalty grows quadratically with the value off target)]
Epsilon Support Vector Regression
(ε-SVR)
Given a data set {x1, ..., xn} with target values {u1, ..., un}, we want to do ε-SVR
The optimization problem is given below
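A reconstruction of the standard ε-SVR primal problem in the notation used here:

```latex
\min_{\mathbf{w}, b, \boldsymbol{\xi}, \boldsymbol{\xi}^{*}} \;\;
  \tfrac{1}{2}\|\mathbf{w}\|^{2}
  + C \sum_{i=1}^{n} \left( \xi_i + \xi_i^{*} \right)
\quad \text{subject to} \quad
\begin{cases}
  u_i - \mathbf{w}^{T}\phi(\mathbf{x}_i) - b \le \varepsilon + \xi_i \\
  \mathbf{w}^{T}\phi(\mathbf{x}_i) + b - u_i \le \varepsilon + \xi_i^{*} \\
  \xi_i \ge 0, \quad \xi_i^{*} \ge 0
\end{cases}
```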
Similar to SVM, this can be solved as a quadratic
programming problem
Epsilon Support Vector Regression
(ε-SVR)
C is a parameter to control the amount of influence of
the error
The ||w||² term controls the complexity of the regression function
This is similar to ridge regression
After training (solving the QP), we get values of αi and αi*, which are both zero if xi does not contribute to the error function
For a new data point z, the predicted value is given below
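A reconstruction of the prediction function, consistent with the dual variables αi and αi* above:

```latex
f(\mathbf{z}) = \sum_{i=1}^{n} \left( \alpha_i - \alpha_i^{*} \right) K(\mathbf{x}_i, \mathbf{z}) + b
```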