Image-Based Face Recognition
using Global Features
Xiaoyin Xu
Research Centre for Integrated Microsystems
Electrical and Computer Engineering
University of Windsor
Supervisor: Dr. Ahmadi
May 13, 2005
Outline
Face recognition
Preprocessing
Recognition technology:
Feature-based vs. Holistic methods
Feature-based matching
Holistic matching
Eigenfaces
Fisher’s Linear Discriminant (FLD)
Laplacianfaces
Hybrid method
Future work
Summary
Face recognition
A formal method was first proposed by Francis Galton in 1888
Research interest has grown rapidly since the 1990s, driven by:
Increasing commercial opportunities
Availability of better hardware, allowing real-time applications
The increasing importance of surveillance-related applications
Great improvements have been made in the
design of classifiers
Face recognition
Why face recognition?
Verification of credit cards, personal IDs, passports
Bank or store security
Crowd surveillance
Access control
Human-computer interaction
Face recognition
Evaluation of performance:
Precision of matching (recognition rate)
Robustness to adverse factors (noise, facial expression, …)
Computational complexity
Cost of the equipment
Face recognition: Procedure
Input face image → face feature extraction → feature matching (against the face database) → decision maker → output result
Preprocessing
Several preprocessing steps may be needed:
Segmentation:
Eliminate the background
Scaling:
Performance decreases quickly if the scale is
misjudged
Rotation:
Symmetry operator to estimate head orientation
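A minimal sketch of these preprocessing steps, assuming OpenCV (cv2) is available and that a face bounding box and head angle have already been estimated elsewhere; the function name and inputs below are illustrative, not from the system described here:

```python
import cv2

def preprocess(image, box, angle_deg, out_size=(64, 64)):
    """Crop away the background, correct rotation, and normalize scale.

    image:     grayscale face image as a NumPy array
    box:       (x, y, w, h) face bounding box from a detector (assumed given)
    angle_deg: in-plane head rotation, e.g. estimated by a symmetry operator
    """
    x, y, w, h = box
    face = image[y:y + h, x:x + w]                 # segmentation: drop background

    # Rotation correction about the crop centre.
    centre = (face.shape[1] / 2, face.shape[0] / 2)
    M = cv2.getRotationMatrix2D(centre, angle_deg, 1.0)
    face = cv2.warpAffine(face, M, (face.shape[1], face.shape[0]))

    # Scaling to a fixed size so every face vector has the same length.
    return cv2.resize(face, out_size)
```

Fixing the output size up front keeps all face vectors the same length, which the holistic methods discussed later rely on.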
Recognition technology
Three matching methods:
Feature-based (structural) matching: Local features
such as the eyes, nose, and mouth
------> easily affected by irrelevant information
Holistic matching: Use the whole face region as the
raw input (PCA, LDA, ICA…)
Each face image is transformed into a vector
Hybrid method: Use both
Recognition technology
Feature-based vs. holistic methods

Feature-based methods            | Holistic methods
Local features                   | Global properties
More practical value; simpler    | Complex algorithms; long training or special conditions
Accuracy problem                 | Storage problem
Allow perspective variation      | Also allow perspective variation, with better performance
Need accurate feature location   | Accurate feature location improves the performance
Recognition technology:
Feature-based matching
Find the locations of eyes, nose and mouth, extract
the feature points
Use the width of the head, the distances between eye corners, the angles between eye corners, etc.
Try to find invariant features
Recognition technology:
Feature-based matching
Algorithm:
Extract feature points
----> affected by head orientation
Define the cross ratio of any four points on a line
----> invariant distances
Correct the locations of the feature points
----> apply symmetry and the cross ratio
The normalized feature vector: N = F / ||F||
Similarity measure: Euclidean distance
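A minimal NumPy sketch of this matching step, assuming the feature points have already been located; the cross ratio and nearest-neighbour search follow their standard definitions and are not taken from the presented implementation:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points given by their positions along the line.

    (AC * BD) / (BC * AD) is invariant under perspective projection, which is
    why it helps against head-orientation changes.
    """
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def normalized_feature_vector(F):
    """N = F / ||F||, the normalized feature vector used for matching."""
    F = np.asarray(F, dtype=float)
    return F / np.linalg.norm(F)

def match(query_features, gallery_features, labels):
    """Nearest neighbour under the Euclidean distance similarity measure."""
    q = normalized_feature_vector(query_features)
    G = np.array([normalized_feature_vector(f) for f in gallery_features])
    distances = np.linalg.norm(G - q, axis=1)
    return labels[int(np.argmin(distances))]
```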
Recognition technology:
Holistic matching
One of the most successful and well-studied techniques
-------> holistic matching
Represent an image x_i of N pixels by an N×1 vector in an N-dimensional space
-------> too large for robust and fast face recognition
Use dimensionality reduction techniques
Recognition technology:
Holistic matching
Find a set of transformation vectors (displayed as feature images) and put them into W of size N×d
------> W defines the face subspace
Project the face images onto the "face subspace"
------> y_i = W^T x_i, where y_i is of size d×1
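A one-line NumPy sketch of this projection step, assuming the flattened face images are stored as the columns of X and the transformation matrix W has already been learned (by PCA, LDA, etc.):

```python
import numpy as np

# X: N x n matrix, one flattened face image per column (N pixels, n images).
# W: N x d matrix of transformation vectors ("feature images") spanning the face subspace.
def project_faces(X, W):
    """Project all face images onto the face subspace: y_i = W^T x_i (each y_i is d x 1)."""
    return W.T @ X          # result is d x n, one reduced vector per column
```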
Holistic matching: Eigenfaces
One of the best global representations
Central idea:
Find a weighted combination of a small number of transformation vectors that can approximate any face in the face database → eigenfaces
An image can be reduced to a lower dimension → projection
Objective function: maximize the variation
max Σ_{i=1}^{n} (y_i − ȳ)²
Holistic matching: Eigenfaces
Algorithm:
The covariance matrix: Ω = XX^T
The principal components are the eigenvectors E of Ω:
ΩE = EΔ, where Δ is the diagonal matrix of eigenvalues
Truncate E → projection matrix E_d
The projection of an image: y′ = E_d^T (x − x_μ), where x_μ is the mean face
A new image is recognized using a nearest-neighbor classifier in the eigenface subspace.
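A compact NumPy sketch of these steps, assuming the training images are already flattened into the columns of X; the SVD below is a standard shortcut for the eigen-decomposition of XXᵀ, not necessarily what the presented system uses:

```python
import numpy as np

def train_eigenfaces(X, d):
    """Return the mean face and the top-d eigenfaces (columns of E_d).

    X: N x n matrix of flattened training images (N pixels, n images).
    """
    mean_face = X.mean(axis=1, keepdims=True)
    Xc = X - mean_face                         # centre the data
    # SVD of the centred data gives the eigenvectors of X X^T directly,
    # avoiding the huge N x N covariance matrix.
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    E_d = U[:, :d]                             # projection matrix, size N x d
    return mean_face, E_d

def project(x, mean_face, E_d):
    """Project one flattened image onto the eigenface subspace (d x 1)."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    return E_d.T @ (x - mean_face)

def recognize(x, mean_face, E_d, gallery_proj, labels):
    """Nearest-neighbour classification in the eigenface subspace.

    gallery_proj: d x m matrix of projected gallery faces, labels: their identities.
    """
    y = project(x, mean_face, E_d)
    distances = np.linalg.norm(gallery_proj - y, axis=0)
    return labels[int(np.argmin(distances))]
```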
Holistic matching: Eigenfaces
Classify a new face as the person with the closest
distance
Recognition accuracy increases with the number of eigenfaces up to about 25
Additional eigenfaces do not help much with recognition
Best recognition rate: 90% on the test set
Holistic matching: Eigenfaces
Run-time performance is very good
Construction is computationally intense, but needs to be done only infrequently
Fair robustness to facial distortions, pose, and lighting conditions
The eigenspace must be rebuilt when a new person is added
Starts to break down when there are too many classes
Retains unwanted variations due to lighting and facial expression
Holistic matching:
Fisher’s Linear Discriminant (FLD)
Eigenfaces maximizes the total variance; FLD achieves greater between-class variance and, consequently, simplifies classification.
FLD tries to project away variations in lighting and facial expression while maintaining discriminability.
It maximizes the ratio of between-class variance to within-class variance.
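A NumPy sketch of this Fisher criterion, following the standard LDA formulation of [4] rather than the presenter's code; in practice the face vectors are first reduced with PCA so that the within-class scatter S_W is invertible:

```python
import numpy as np

def fisher_directions(X, labels, d):
    """Return d directions maximizing between-class over within-class scatter.

    X:      n_samples x n_features data matrix (e.g. PCA-reduced face vectors)
    labels: class label for each row of X
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    overall_mean = X.mean(axis=0)
    n_features = X.shape[1]
    S_W = np.zeros((n_features, n_features))   # within-class scatter
    S_B = np.zeros((n_features, n_features))   # between-class scatter
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        S_B += Xc.shape[0] * (diff @ diff.T)
    # Solve S_W^{-1} S_B w = lambda w and keep the top-d eigenvectors
    # (at most C-1 of them are meaningful for C classes).
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S_W, S_B))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:d]].real
```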
Holistic matching:
Fisher’s linear discriminant
Fisherfaces seek directions that are efficient for discrimination between the classes.
(Illustration: two classes, Class A and Class B, separated by the projection direction)
Holistic matching:
Laplacianfaces
The Laplacianfaces method aims to preserve the local information.
Unwanted variations can be eliminated or reduced.
(Illustration: comparison of Eigenfaces, Fisherfaces, and Laplacianfaces)
Holistic matching:
Laplacianfaces
Takes advantage of more training samples, which is important for real-world face recognition systems
More discriminating information in the low-
dimensional face subspace
Better and more sophisticated distance metric:
variance-normalized distance
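For reference, a high-level sketch of the Locality Preserving Projections step behind Laplacianfaces, following the formulation in [7] rather than an implementation from these slides; it assumes the face vectors in the columns of X are already PCA-reduced (so XDXᵀ is positive definite) and uses SciPy's generalized symmetric eigensolver:

```python
import numpy as np
from scipy.linalg import eigh

def laplacianfaces(X, d, k=5, t=1.0):
    """Locality Preserving Projections on face vectors (one sample per column of X)."""
    n = X.shape[1]
    # Adjacency graph: connect each sample to its k nearest neighbours,
    # weighted with a heat kernel exp(-||xi - xj||^2 / t).
    dist2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)
    W = np.zeros((n, n))
    for i in range(n):
        neighbours = np.argsort(dist2[i])[1:k + 1]   # skip the point itself
        W[i, neighbours] = np.exp(-dist2[i, neighbours] / t)
    W = np.maximum(W, W.T)                  # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                               # graph Laplacian
    # Generalized eigenproblem X L X^T a = lambda X D X^T a; the eigenvectors
    # with the smallest eigenvalues are the Laplacianfaces.
    eigvals, eigvecs = eigh(X @ L @ X.T, X @ D @ X.T)
    return eigvecs[:, :d]
```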
Recognition technology:
Hybrid method
The human perception system uses both local features and the whole face region to recognize a face
The modular eigenfaces approach:
Global eigenfaces
Local eigenfeatures: eigeneyes, eigenmouth, etc.
Useful when gross variations are present
Arbitrates the use of holistic and local features
Future work
Implementation and detailed study of the novel algorithm → Laplacianfaces
Provide the system with an accurate feature-
localization mechanism
Try to combine global features with local features
Compare the performance of different classifiers,
besides the nearest-neighbor classifier
Evaluate the performance of the three systems on
different face databases
Summary
Face recognition:
How to model face variation under realistic settings
Without accurate localization of important features, good performance cannot be achieved
Shortcomings of current algorithms:
Large amounts of storage needed
Good quality images needed
Sensitive to uneven illumination
Affected by pose and head orientation
References
[1] M. Turk and A.P. Pentland, "Face Recognition Using Eigenfaces," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1991.
[2] R. Duda, P. Hart, and D. Stork, Pattern Classification, ISBN 0-471-05669-3.
[3] M.S. Kamel, H.C. Shen, A.K.C. Wong, and R.I. Campeanu, "System for the recognition of human faces," IBM Systems Journal, vol. 32, no. 2, 1993.
[4] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[5] I.J. Cox, J. Ghosn, and P.N. Yianilos, "Feature-based face recognition using mixture distance," Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 209-216, 1996.
[6] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, 1990.
[7] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, "Face Recognition Using Laplacianfaces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 3, March 2005.
[8] A.M. Martinez and A.C. Kak, "PCA versus LDA," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228-233, Feb. 2001.
[9] M.S. Bartlett, J.R. Movellan, and T.J. Sejnowski, "Face Recognition by Independent Component Analysis," IEEE Trans. Neural Networks, vol. 13, no. 6, Nov. 2002.