Circuits and Systems: An International Journal (CSIJ), Vol. 1, No.2, April 2014
LITERATURE SURVEY ON SPARSE
REPRESENTATION FOR NEURAL
NETWORK BASED FACE DETECTION AND
RECOGNITION
Raviraj Mane, Poorva Agrawal, Nisha Auti
CS Department, SIT, Pune
ABSTRACT
Face detection and recognition is a challenging problem in the field of image processing. In this paper, we review some of the recent research on face recognition. The main issues with previous face recognition techniques are the time required for recognition, the recognition rate, and the size of the database required to store the data. To overcome these problems, a sparse representation based classifier can be used.
KEYWORDS
Sparse Representation, Neural Networks, Feature Extraction.
1. INTRODUCTION
Face recognition has attracted broad interest in the area of pattern recognition over the past 20 years. It remains a critical problem because, while recognizing a face is quite easy for a human being, it is a difficult task for a computer due to the high variability among faces.
The process of face recognition involves comparing an image with a database of stored faces in order to identify the individual in that input image. Numerous face representation and classification methods based on neural networks have been developed for this purpose [2].
A neural network is an interconnected group of artificial neurons that uses a mathematical model for information processing. To reduce the storage requirements and improve the performance [4] of a neural network system, the Sparse Representation Classification (SRC) method is used. The basic idea of SRC is to extract the minimum set of features of a face needed for recognition, which increases performance and reduces the size of the database needed to store the captured faces.
We are motivated to a great extent by the large body of research in the area of face recognition that addresses these issues using neural networks. The aim of this paper is to present an extended review of face recognition techniques.
This paper is organized as follows. Section 2 presents the sparse representation approach for neural-network-based face recognition. Section 3 reviews related research in the area of face recognition and describes the various methodologies, which fall into four broad categories: knowledge based methods, which capture knowledge of faces; feature invariant methods; template matching methods, based on the creation of templates; and appearance based methods. Section 4 concludes and discusses the future scope of face recognition.
2. SPARSE REPRESENTATION FOR NEURAL NETWORK
APPROACH
2.1 Sparse Representation Classifier Algorithm
Let there be c pattern classes. The training samples of class i form the matrix A_i = [y_i1, y_i2, . . . , y_iMi] ∈ R^(d×Mi), where Mi is the number of training samples of class i. Stacking all classes gives the dictionary A = [A_1, A_2, . . . , A_c] ∈ R^(d×M), with M the total number of training samples. A given test sample y ∈ R^d is modelled as y = Aw.
The sparsest solution can be obtained from the following problem:

(L0)  ŵ_0 = argmin ||w||_0, subject to Aw = y        (1)

The problem in (1) is NP-hard, but if the solution ŵ_0 is sufficiently sparse it can be recovered by solving the l1 relaxation

(L1)  ŵ_1 = argmin ||w||_1, subject to Aw = y        (2)
In the input space, let the training matrix be B = [B_1, B_2, . . . , B_c] ∈ R^(N×M), where the training samples of class i form the matrix B_i = [x_i1, x_i2, . . . , x_iMi] ∈ R^(N×Mi). Each data point x_ij is mapped to y_ij = P^T x_ij under a linear transformation P, so that the training matrix in the reduced space R^d is A = P^T B.
The representation coefficient vector w_ij of y_ij is obtained by solving the optimization problem in (2). With respect to class s, let δ_s(w_ij) denote the coefficient vector that keeps the entries of w_ij associated with class s and sets all others to zero. Then

v^s_ij = A δ_s(w_ij),  s = 1, . . . , c,

gives the class-s prototype of y_ij, and the distance between y_ij and class s is defined as

d_s(y_ij) = ||y_ij − v^s_ij||_2^2.

For good discrimination, the between-class distances d_s(y_ij), s ≠ i, should be large and the within-class distance d_i(y_ij) should be small.
Within-class scatter can be defined as follows:

(1/M) ∑_ij d_i(y_ij) = (1/M) ∑_ij ||y_ij − v^i_ij||_2^2
                     = (1/M) ∑_ij (y_ij − v^i_ij)^T (y_ij − v^i_ij)
                     = tr(S_w).
Between-class scatter can be defined as follows:

(1/(M(c−1))) ∑_ij ∑_{s≠i} d_s(y_ij) = (1/(M(c−1))) ∑_ij ∑_{s≠i} ||y_ij − v^s_ij||_2^2 = tr(S_b),

where tr(·) denotes the trace operator. A discriminative projection is obtained by maximizing the criterion

J(P) = tr(S_b) / tr(S_w).
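To make the classification rule above concrete, the following Python sketch reconstructs a test sample from the coefficients of each class and assigns it to the class with the smallest residual. It uses scikit-learn's Lasso as an l1 solver in place of the equality-constrained problem (2); the function name, the labels layout, and the penalty weight alpha are illustrative assumptions, not part of the surveyed method.

```python
import numpy as np
from sklearn.linear_model import Lasso  # l1-regularized least squares, used as a proxy for problem (2)

def src_classify(A, labels, y, alpha=0.01):
    """Sparse-representation-based classification (sketch).

    A      : d x M array whose columns are training samples (all classes stacked).
    labels : length-M array; labels[j] is the class of column j.
    y      : length-d test sample.
    alpha  : l1 penalty weight (illustrative choice).
    """
    # Approximately solve min ||w||_1 s.t. Aw = y via the relaxed form min ||Aw - y||^2 + alpha*||w||_1.
    solver = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    solver.fit(A, y)
    w = solver.coef_

    residuals = {}
    for s in np.unique(labels):
        delta_s = np.where(labels == s, w, 0.0)  # keep only the class-s coefficients
        v_s = A @ delta_s                        # class-s prototype of y
        residuals[s] = np.linalg.norm(y - v_s)   # distance d_s(y)
    return min(residuals, key=residuals.get)     # assign y to the class with the smallest residual
```

Any basis pursuit or matching pursuit solver could replace the Lasso step; the class-wise residual test is the part that characterizes SRC.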
2.2 Steps of the Algorithm
1. Choose an initial projection matrix P = P_0 and set k = 1.
2. Compute the sparse representation coefficient vector w_ij [1] for each training sample in the transformed space.
3. Construct the within-class scatter matrix S_w and the between-class scatter matrix S_b [2]. Compute the generalized eigenvectors of (S_b, S_w) corresponding to the largest eigenvalues to form P_k.
4. Increment k by 1 and check the convergence condition [J(P_k) − J(P_{k−1})] / J(P_k) < ε.
5. Repeat steps 2–4 until the condition in step 4 is satisfied.
6. When the condition holds, set P* = P_k.
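A minimal numpy/scipy sketch of the eigen-decomposition in step 3, assuming the class prototypes have already been computed from the sparse codes of step 2 (the sparse-coding step and the convergence test are omitted, and all variable names are illustrative rather than taken from [1]):

```python
import numpy as np
from scipy.linalg import eigh  # symmetric generalized eigensolver

def update_projection(samples, proto_within, proto_between, d):
    """One projection update (step 3, sketch).

    samples       : list of transformed training samples y_ij (vectors).
    proto_within  : matching list of within-class prototypes v^i_ij.
    proto_between : list of lists; proto_between[k] holds the prototypes v^s_ij for s != i.
    d             : number of generalized eigenvectors kept to form P_k.
    """
    M = len(samples)
    c_minus_1 = len(proto_between[0])
    Sw = sum(np.outer(y - v, y - v) for y, v in zip(samples, proto_within)) / M
    Sb = sum(np.outer(y - v, y - v)
             for y, vs in zip(samples, proto_between) for v in vs) / (M * c_minus_1)

    # Generalized eigenvectors of (S_b, S_w); in practice S_w may need a small
    # ridge term added to its diagonal to stay positive definite.
    eigvals, eigvecs = eigh(Sb, Sw)
    order = np.argsort(eigvals)[::-1]      # largest generalized eigenvalues first
    return eigvecs[:, order[:d]]           # columns form the new projection matrix P_k
```

scipy.linalg.eigh solves the generalized problem S_b v = λ S_w v directly, which is why the two scatter matrices are passed as a pair instead of explicitly inverting S_w.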
The system authenticates a person by comparing the captured image with the stored face images. The sparse representation classifier [1] is used to reduce the dimensionality of the data, and the theory of sparse representation helps choose the training images that improve the performance of a neural network system. A neural network consists of interconnected processing elements, called nodes or neurons, that work together to produce an output function; here the output function is the recognition of a face from the database.
Many algorithms have been implemented for face recognition, but several issues remain that the sparse representation concept explained in the algorithm above addresses. Among these issues are the performance of face recognition, the recognition rate, and the size of the database required to store faces.
3. RELATED RESEARCH WORK
Face detection methods can be classified into the following four broad categories:
(i) Knowledge based method
(ii) Feature Invariant method
(iii) Template matching method
(iv) Appearance based method.
3.1 KNOWLEDGE BASED METHODS
These are rule-based methods. They try to capture our knowledge of faces, such as the symmetry of the eyes, and translate it into a set of rules. A typical rule states that a face has two symmetric eyes and that the eye region is usually darker than the cheeks. Such rules capture the relationships between the facial regions selected for face recognition.
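As a toy illustration of how such a rule can be coded (purely hypothetical; the regions, the margin, and the function are not taken from any of the surveyed systems):

```python
import numpy as np

def eyes_darker_than_cheeks(gray, eye_boxes, cheek_boxes, margin=10):
    """Toy knowledge-based rule: each eye region should be darker than the cheek below it.

    gray        : 2-D array of grayscale intensities (a candidate face window).
    eye_boxes   : list of (top, bottom, left, right) boxes for the eye regions.
    cheek_boxes : matching list of boxes for the cheek regions.
    margin      : required intensity gap (illustrative threshold).
    """
    for (et, eb, el, er), (ct, cb, cl, cr) in zip(eye_boxes, cheek_boxes):
        eye_mean = gray[et:eb, el:er].mean()
        cheek_mean = gray[ct:cb, cl:cr].mean()
        if eye_mean >= cheek_mean - margin:  # the eye region is not darker enough: rule fails
            return False
    return True
```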
Yang and Huang developed a system [17] that consists of three levels of rules. At the highest level, all candidate regions are scanned and a general set of rules is applied. At the second level, edges are detected. At the third level, rules that respond to facial features such as the eyes and mouth are applied and faces are detected. Kotropoulos and Pitas [18] also presented a rule-based method in which facial features are used to locate the boundary of a face; eye, nose, and mouth detection rules are subsequently used to detect the faces.
3.2 FEATURE INVARIANT METHODS
Some facial features are invariant to variations such as pose and lighting conditions. Facial features such as the eyes, nose, mouth, and hairline are extracted using edge detectors, and a statistical model is built to describe their relationships and to verify the presence of a face.
3.2.1 FACIAL FEATURES
Leung et al. developed a method based on local feature detectors and random graph matching [19]. The goal of this method is to find a face pattern consisting of two eyes, two nostrils, and the nose/lip junction. Facial features of the same type, such as the eyes, are detected and their relative distances are computed. The facial template is defined by Gaussian derivative filters, and faces are matched using the Gaussian derivative filter responses.
3.2.2 MULTIPLE FEATURES
Most of these methods utilize features such as skin colour, hair colour, shape, and the nose for face recognition. Yachida et al. proposed a fuzzy-theory approach for colour images [20]; fuzzy theory extends classical logic to handle the concept of partial truth, and here fuzzy models describe the distributions of skin and hair colour. The face-shape model contains m×n square cells treated as pixels. Each pixel is classified as hair or skin, skin-like and hair-like regions are generated, and these regions are compared with the face model for recognition [21].
3.3 TEMPLATE MATCHING METHODS
Template matching methods try to define a face by a standard pattern, i.e., to find a standard template that describes all faces. For example, a face can be divided into eyes, face contour, nose, and mouth; a face model can also be built from edges.
3.3.1 PREDEFINED TEMPLATES
Sakai et al. [22] used several subtemplates for the eyes, nose, mouth, and face contour to model a face. Each subtemplate is described by line segments. The location of the face is found using the contour template, and recognition is performed by matching the remaining subtemplates.

Tsukamoto et al. presented a qualitative model for the face pattern [23] in which the image is divided into a number of blocks and qualitative features, described as lightness and edgeness, are computed for each block. A faceness measure [24] is then calculated at every position of the image using the template of blocks; a threshold is defined, and if the faceness measure exceeds the threshold, a face is detected.
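The thresholding idea can be illustrated with a simple normalised cross-correlation matcher. The sketch below uses OpenCV's matchTemplate rather than the block-based lightness/edgeness features of [23], [24], and the 0.7 threshold is an arbitrary illustrative stand-in for the faceness threshold.

```python
import cv2
import numpy as np

def detect_by_template(gray_image, face_template, threshold=0.7):
    """Template-matching detection sketch using normalised cross-correlation.

    gray_image    : 2-D uint8 grayscale image.
    face_template : 2-D uint8 grayscale face template, smaller than the image.
    threshold     : illustrative score threshold playing the role of the faceness threshold.
    """
    scores = cv2.matchTemplate(gray_image, face_template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= threshold)           # positions whose score exceeds the threshold
    h, w = face_template.shape
    return [(x, y, w, h) for x, y in zip(xs, ys)]    # candidate face boxes (x, y, width, height)
```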
3.3.2 DEFORMABLE TEMPLATES
Yuille et al. [25] used deformable templates, described by parameterized curves, to model facial features such as the eyes. An energy function links edges in the input image to the corresponding template parameters, and a face is detected by minimizing this energy function.
Lanitis et al. [26] described a method that uses both shape and intensity information. Sampled contours such as the eye boundary, nose, chin, and cheek are manually labelled, and a vector of sample points is used to represent the shape. The face is deformed to the average shape, intensity parameters are extracted, and recognition is performed on the basis of the shape and intensity parameters.
3.4 APPEARANCE BASED METHODS
The templates in appearance-based methods are learned from example images [4]. In general, appearance-based methods rely on techniques from statistical analysis and machine learning to find the relevant characteristics of face images.
3.4.1 FEATURE EXTRACTION
In [1], Jian Yang et al. proposed the sparse representation classifier steered discriminative projection method, which has great potential for face recognition. The paper presents a dimensionality reduction technique in which the projection is learned by maximizing the ratio of the between-class reconstruction residual to the within-class reconstruction residual. The method achieves good results in face recognition.
3.4.2 EIGEN ANALYSIS
This approach represents faces using Principal Component Analysis (PCA). The goal of eigen analysis is to represent each face in terms of eigenvectors; these eigenvectors are stored as one-dimensional arrays and then used to detect faces. In [3], the authors propose a robust approach to feature extraction for face detection and recognition.
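A minimal eigenfaces-style sketch of this idea, using scikit-learn's PCA and a nearest-neighbour match in the projected space (the component count and function names are illustrative choices, not those of [3]):

```python
import numpy as np
from sklearn.decomposition import PCA

def build_eigenface_model(face_vectors, n_components=50):
    """face_vectors: M x N array with one flattened face image per row."""
    pca = PCA(n_components=n_components)
    projections = pca.fit_transform(face_vectors)  # each face as a 1-D vector of eigenface coefficients
    return pca, projections

def recognize(pca, projections, labels, probe_vector):
    """Project a probe face and return the label of the nearest stored projection."""
    p = pca.transform(probe_vector.reshape(1, -1))[0]
    distances = np.linalg.norm(projections - p, axis=1)
    return labels[int(np.argmin(distances))]
```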
3.4.3 SUPERVISED LEARNING CONCEPT
Supervised learning is the machine learning task of inferring a function from labelled training data. It analyzes the training data and produces an inferred function, which can be used to map new examples [6].
In [4], the authors proposed a two-phase method for face recognition. The first phase represents the test face as a linear combination of all the training samples. The second phase represents it as a linear combination of the nearest neighbours found among the training samples, and this representation is matched against the database.
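A rough sketch of the two-phase idea follows. It uses ordinary least squares for both phases and scores each training face by how well its weighted column alone approximates the test face; this is only loosely modelled on [4], and K and all names are illustrative.

```python
import numpy as np

def two_phase_classify(A, labels, y, K=10):
    """Two-phase representation sketch.

    A      : d x M array of training faces (one face per column).
    labels : length-M array of class labels.
    y      : length-d test face.
    K      : number of nearest training faces kept for phase 2.
    """
    # Phase 1: express y as a linear combination of all training faces.
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    # Score each training face by how well its weighted column alone approximates y.
    deviations = np.array([np.linalg.norm(y - w[j] * A[:, j]) for j in range(A.shape[1])])
    nearest = np.argsort(deviations)[:K]  # the K "nearest neighbours" of the test face

    # Phase 2: express y using only the selected neighbours and score each candidate class.
    A_K = A[:, nearest]
    w2, *_ = np.linalg.lstsq(A_K, y, rcond=None)
    best_class, best_residual = None, np.inf
    for s in np.unique(labels[nearest]):
        mask = labels[nearest] == s
        residual = np.linalg.norm(y - A_K[:, mask] @ w2[mask])
        if residual < best_residual:
            best_class, best_residual = s, residual
    return best_class
```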
3.4.4 KERNEL METHODS
Kernel methods [9] are a class of algorithms for pattern analysis; here the analysis aims to extract the features of faces. Kernel methods include the support vector machine, a supervised learning model that analyzes and recognizes patterns for classification.
Paper [10] examines the theory of Kernel Fisher Discriminant analysis (KFD) and develops a
two-phase KFD framework, i.e., kernel principal component analysis (KPCA) plus Fisher Linear
Discriminant Analysis (LDA).
Linear discriminant analysis (LDA) is a pattern recognition method that finds a linear combination of features; the resulting combination reduces the number of features used for face recognition. In [10], the theory of kernel Fisher discriminant analysis is examined in a Hilbert space, a vector space whose structure allows lengths and angles to be measured for the features used in face recognition.
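A compact sketch of the two-phase KPCA-plus-LDA pipeline using scikit-learn; the RBF kernel, its width, and the component count are illustrative assumptions rather than the settings used in [10].

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def build_kpca_lda(n_kpca_components=100, gamma=1e-4):
    """KPCA followed by LDA: the two-phase kernel Fisher discriminant idea (sketch)."""
    return make_pipeline(
        KernelPCA(n_components=n_kpca_components, kernel="rbf", gamma=gamma),
        LinearDiscriminantAnalysis(),
    )

# Usage sketch: X_train is an M x N array of flattened faces, y_train holds the class labels.
# model = build_kpca_lda().fit(X_train, y_train)
# predictions = model.predict(X_test)
```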
3.4.5 CREATION OF TEMPLATE BASED ON SELECTED FEATURES
In [5], a face recognition system using Principal Component Analysis (PCA) with Back Propagation Neural Networks (BPNN) is proposed, which provides efficient and robust face recognition [16]. It also addresses face variations, especially pose, expression, and lighting conditions. The dimensionality of the face image is reduced by PCA, and recognition is performed by the BPNN.
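A minimal sketch of this pipeline with scikit-learn, where a multilayer perceptron trained by backpropagation stands in for the BPNN of [5]; the component count and hidden-layer size are illustrative.

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def build_pca_bpnn(n_components=60, hidden=(128,)):
    """PCA for dimensionality reduction, then a backpropagation-trained network (sketch)."""
    return make_pipeline(
        PCA(n_components=n_components),
        MLPClassifier(hidden_layer_sizes=hidden, max_iter=1000),
    )

# Usage sketch: model = build_pca_bpnn().fit(train_faces, train_labels)
#               predicted = model.predict(test_faces)
```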
4. CONCLUSION AND FUTURE SCOPE
Face recognition has received substantial attention from researchers in the biometrics, pattern recognition, and computer vision communities. It can be applied to security screening at airports, passport verification, criminal watch-list checks in police departments, visa processing, verification of electoral identification, and card security at ATMs. In this paper, we reviewed some of the recent research on face recognition and classified the approaches into knowledge based methods, feature invariant methods, template matching methods, and appearance based methods.
Our literature review indicates that face recognition remains a challenging problem, with the main open issues being the size of the database, the recognition time, and the recognition rate. The important research papers studied are summarized in the tabular overview in Table 1.
Table 1. Overview
REFERENCES
[1] Jian Yang, Lei Zhang, Yong Xu, and Jingyu Yang, “Sparse Representation Classifier Steered Discriminative Projection With Application to Face Recognition,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 7, July 2013.
[2] Ran He, Wei-Shi Zheng, Bao-Gang Hu, and Xiang-Wei Kong, “Two-Stage Nonnegative Sparse Representation for Large-Scale Face Recognition,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 1, January 2013.
[3] S. Zafeiriou, G. Tzimiropoulos, M. Petrou, and T. Stathaki, “Regularized kernel discriminant analysis
with a robust kernel for face recognition and verification,” IEEE Trans. Neural Netw. Learn. Syst.,
vol. 23, no. 3, pp. 526–534, Mar. 2012.
[4] Y. Xu, D. Zhang, J. Yang, and J.-Y. Yang, “A two-phase test sample sparse representation method for
use with face recognition,” IEEE Trans. Circuits Syst. Video Technol., vol. 21, no. 9, pp. 1255–1262,
Sep. 2011.
[5] Mohammod Abul Kashem, Md. Nasim Akhter, Shamim Ahmed, and Md. Mahbub Alam, “Face Recognition System Based on Principal Component Analysis (PCA) with Back Propagation Neural Networks (BPNN),” Canadian Journal on Image
[6] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Supervised dictionary learning,” in Proc.
Adv. NIPS, vol. 21. 2009.
[7] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 210–227, Feb. 2009.
[9] J. Yang, A. F. Frangi, Z. Jin, and J.-Y. Yang, “Essence of kernel Fisher discriminant: KPCA plus
LDA,” Pattern Recognit., vol. 37, pp.2097–2100, Oct. 2004.
[10] J. Yang, A. F. Frangi, J.-Y. Yang, D. Zhang, and Z. Jin, “KPCA plus LDA: A complete kernel fisher
discriminant framework for feature extraction and recognition,” IEEE Trans. Pattern Anal. Mach.
Intell., vol. 27, no. 2, pp. 230–244, Feb. 2005.
[11] Terence Sim, Simon Baker, and Maan Bsat, “The CMU Pose, Illumination, and Expression Database,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, December 2003.
[12] Tae-Kyun Kim and Josef Kittler, “Locally Linear Discriminant Analysis for Multimodally Distributed Classes for Face Recognition with a Single Model Image,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, March 2005.
[13] R. He, B.-G. Hu, W.-S. Zheng, and Y. Guo, “Two-stage sparse representation for robust recognition
on large-scale database,” in Proc. 24th AAAI Conf. Artif. Intell., 2010, pp. 475–480.
[14] K. C. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 5, pp. 684–698, May 2005.
[15] P. Belhumeur and D. Kriegman, “What Is the Set of Images of an Object under All Possible Lighting
Conditions,” Int’l J. Computer Vision, vol. 28, pp. 245-260, 1998.
[16] T. Yahagi and H. Takano, “Face recognition using neural networks with multiple combinations of categories,” International Journal of Electronics Information and Communication Engineering, vol. J77-D-II, no. 11, pp. 2151–2159, 1994.
[17] G. Yang and T. S. Huang, “Human Face Detection in Complex Background,” Pattern Recognition,
vol. 27, no. 1, pp. 53-63, 1994.
[18] C. Kotropoulos and I. Pitas, “Rule-Based Face Detection in Frontal Views,” Proc. Int’l Conf.
Acoustics, Speech and Signal Processing vol. 4, pp. 2537-2540, 1997
[19] T.K. Leung, M.C. Burl, and P. Perona, “Finding Faces in Cluttered Scenes Using Random Labeled
Graph Matching,” Proc. Fifth IEEE Int’l Conf. Computer Vision, pp. 637-644, 1995
[20] H. Wu, Q. Chen, and M. Yachida, “Face Detection from Color Images Using a Fuzzy Pattern
Matching Method,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 6, pp. 557-
563,June 1999.
[21] H. Wu, T. Yokoyama, D. Pramadihanto, and M. Yachida, “Face and Facial Feature Extraction from
Color Image,” Proc. Second Int’l Conf. Automatic Face and Gesture Recognition, pp. 345-350, 1996.
[22] T. Sakai, M. Nagao, and S. Fujibayashi, “Line Extraction and Pattern Detection in a Photograph,”
Pattern Recognition, vol. 1, pp. 233-248, 1969.
[23] A. Tsukamoto, C.-W. Lee, and S. Tsuji, “Detection and Tracking of Human Face with Synthesized
Templates,” Proc. First Asian Conf. Computer Vision, pp. 183-186, 1993
[24] A. Tsukamoto, C.-W. Lee, and S. Tsuji, “Detection and Pose Estimation of Human Face with
Synthesized Image Models,” Proc. Int’l Conf. Pattern Recognition, pp. 754-757, 1994.
[25] A. Yuille, P. Hallinan, and D. Cohen, “Feature Extraction from Faces Using Deformable Templates,”
Int’l J. Computer Vision, vol. 8, no. 2, pp. 99-111, 1992.
[26] A. Lanitis, C.J. Taylor, and T.F. Cootes, “An Automatic Face Identification System Using Flexible
Appearance Models,” Image and Vision Computing, vol. 13, no. 5, pp. 393-401, 1995.
[27] D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
Authors
Raviraj V. Mane received a BE in Computer Science from Pune University and is pursuing an MTech at Symbiosis University. His current research interests are neural networks, image processing, and artificial intelligence.
Prof. Poorva Agrawal received an ME in Computer Science and is pursuing a PhD at Symbiosis University. Her research interests are databases, soft computing, and discrete mathematics. She has 2 years of teaching experience.
Prof. Nisha Auti received an ME in Computer Science. Her research interests are artificial intelligence, neural networks, and machine learning. She has 8 years of teaching experience.