Karbala International Journal of Modern Science
Recognition of 3D Face using Image Decomposition based on Patterns of Oriented
Edge Magnitude Descriptor
--Manuscript Draft--
Manuscript Number:
Full Title: Recognition of 3D Face using Image Decomposition based on Patterns of Oriented
Edge Magnitude Descriptor
Short Title:
Article Type: Original Study
Keywords:
Corresponding Author: Thontadari C, Ph.D
REVA University
INDIA
Corresponding Author Secondary Information:
Corresponding Author's Institution: REVA University
Corresponding Author's Secondary Institution:
First Author: Thontadari C, Ph.D
First Author Secondary Information:
Order of Authors: Thontadari C, Ph.D
Order of Authors Secondary Information:
Manuscript Region of Origin: INDIA
Abstract: Recognition of 3D faces has become a booming research domain in industry and
academia. In this article, we present a novel algorithm for automatic 3D face
recognition, which is robust to facial expression alterations, missing data, and outliers.
The proposed approach consists of three major phases. First, the 3D face scan is
decomposed into structure and texture components via the Osher and Vese method, in
order to isolate the preserved structural component. Then, feature vectors are extracted
from each component. Finally, a postprocessing step is applied to deal with the outliers
embedded in the features. The proposed model was tested on two public datasets,
GavabDB and Bosphorus. Experimental results show that our proposed method greatly
increases facial recognition performance compared to relevant state-of-the-art methods.
Suggested Reviewers: Dr. Hemanth K S, Ph.D
Associate Professor, REVA University
[email protected]
Dr. Rajeev Ranjan R, Ph.D
Associate Professor, REVA University
[email protected]
Dr. Vijalyalakshmi Lepaksi Lepakshi
Associate Professor, REVA University
[email protected]
Dr. Ambili P S
Associate Professor, REVA University
[email protected]
Author Agreement
Author Agreement Statement
We declare that the manuscript entitled "Recognition of 3D Face using Image
Decomposition based on Patterns of Oriented Edge Magnitudes Descriptor"
contains original, unpublished results that are not currently being considered for
publication elsewhere; for all reused data, the agreement of the publisher is attached.
We confirm that the manuscript has been read and approved by all the authors and that
there are no other persons who satisfy the criteria for authorship but are not listed. We
further confirm that the order of authors listed in the manuscript has been approved by all of
us.
We certify that we have read the Journal's "Publication Ethics" shown on the link:
https://kijoms.uokerbala.edu.iq/home/publication_ethics.html
and we confirm that we strictly follow these "Publication Ethics".
We understand that the corresponding author is the sole contact for the Editorial
process. The corresponding author is responsible for communicating with the other authors
about progress, submissions of revisions and final approval of proofs.
Yours Sincerely
1- Dr. Thontadari C signature
Karbala International Journal of Modern Science, 2022, *, **-**
Recognition of 3D Face using Image Decomposition based
on Patterns of Oriented Edge Magnitudes Descriptor
Thontadari C1
1Associate Professor, School of Computer Science and Applications, Reva University, Bangalore, India.
Email: [email protected]
ABSTRACT
Recognition of 3D faces has become a booming research domain in industry and academia. In this article, we present a
novel algorithm for automatic 3D face recognition, which is robust to facial expression alterations, missing data, and
outliers. The proposed approach consists of three major phases. First, the 3D face scan is decomposed into structure and
texture components via the Osher and Vese method, in order to isolate the preserved structural component. Then, feature
vectors are extracted from each component. Finally, a postprocessing step is applied to deal with the outliers embedded in
the features. The proposed model was tested on two public datasets, GavabDB and Bosphorus. Experimental results show
that our proposed method greatly increases facial recognition performance compared to relevant state-of-the-art methods.
Keywords: 3D face recognition; Biometrics; Local features; Structure–texture decomposition; Expression
1. Introduction
Recognition has become a relevant topic as the needs and funding for security applications keep growing. That is why biometrics is attracting more attention today. 2D facial recognition approaches use 2D grayscale or color images to achieve facial recognition. Unfortunately, these 2D methods have a few issues, such as sensitivity to head orientation and lighting. Varying positions and lighting conditions alter the texture information of a face and, therefore, lead to inferior performance of 2D face recognition techniques. A 3D face scan carries much more discriminating facial information than the 2D representation because it provides the actual shape of the human face. In 3D, rotations can be rectified, and illumination effects can be eliminated. With the development of 3D scanners and capture techniques, many 3D approaches have appeared. These approaches can be classified into 4 categories: methods based on 2D approaches, methods based on 3D face geometry, methods based on facial segmentation, and methods based on detecting points of interest. The method proposed in this article belongs to the category of methods based on 2D approaches, so we examine the state of the art in this category by explicitly targeting the most representative methods.
Three-dimensional face recognition techniques based on depth images are similar to 2D techniques [1]. The only difference is that they use depth images rather than intensity images. Mostly, these are direct extensions of successful techniques for 2D facial images. The methods most applied to face depth images include dimensionality reduction methods and local methods.
1.1. Dimensionality reduction methods
Dimensionality reduction techniques are vastly used in 3D face recognition [2-7]. The most popular and widely used is principal component analysis (PCA) [8]. Hesher et al. [3] extended the PCA approach using different eigenvectors and different image sizes. In their experiment, they used 37 subjects, each with 6 facial expressions. This method gives the test image a better chance of matching correctly. Chang et al. [9] applied a method based on PCA using 2D and 3D images (2.5D images) and a weighted summation of the distances of the two modalities, 2D and 3D. Yuan [10] applies PCA to normalize both 3D shape images and 2D texture images. Then, fuzzy classification and parallel neural networks are employed to recognize a face.
LDA-based techniques [11] have also been studied for 3D face recognition. Kin-Chung et al. [12] proposed a 3D face recognition system based on a combination of linear discriminant analysis (LDA) and a linear support vector machine (LSVM), extracting local features from multiple regions as sum invariants. Ten sub-regions and subsequent feature vectors are extracted from a single frontal facial image. Linear optimal merge rules based on LDA and LSVM offer improvements, but this approach's performance decreases with the increasing severity of expressions. In [13], Lu and Jain combine 2D and 3D with a weighted vote. The 2D projection image of the 3D model is used to construct the LDA subspace for facial recognition.
1.2. Local methods
In recent years, several works have been carried out to apply 2D descriptors initially used in 2D recognition, such as local binary patterns (LBP) [14], Gabor [15], or SIFT [16] filters, to depth images [17-20].
(i) LBP: Due to the LBP descriptor's efficiency and simplicity for representing 2D faces, researchers have used it on depth images. As with texture, LBP can describe depth information, since the depth of a point on the face is strongly related to its neighborhood. Many recent works have adopted the LBP descriptor for the representation of depth images [17, 21-27]. Huang et al. [21] proposed a bimodal face recognition method. The LBP descriptor was used on 2D texture images and depth images to get facial features. From the tests performed, the authors concluded that LBP enhanced the recognition rate compared to the original depth images. Xiong et al. [25] used the same strategy, but in another way: they proposed a bimodal face recognition approach where the intensity image of the face is combined with its depth image. They then applied LBP to the combined image to extract the facial features, followed by dimension reduction using LDA. Tang et al. [22, 23] developed a 3D facial recognition algorithm using LBP under expression variations. First, to describe the human face more accurately and minimize the effect of its local distortions, a 3D facial division scheme is proposed. Then the LBP representation is applied to describe the face. In their approach, facial depth and normal information are obtained and encoded by LBP. The fusion of the two vectors gives the best recognition rates. An extension to several scales (Multi-Scale LBP, MS-LBP) was proposed in [26] by Huang et al., where different combinations of radius and neighborhood parameters were used.
(ii) Gabor: Similar to LBP, Gabor filters have been widely used recently for 3D face recognition [19, 23, 28-30]. Wang et al. [28] proposed a method based on feature fusion, where point signatures and Gabor coefficients are extracted from depth images and grayscale images, respectively. These features are projected into subspaces using PCA and then merged to construct a feature vector describing the image. In [19], the authors utilize Gabor filters to extract the characteristics of both intensity and depth images. These characteristics are selected and reduced by a hierarchical scheme integrated with LDA and the AdaBoost learning algorithm [31]. Based on the selected depth and intensity characteristics, a classifier is built in the AdaBoost learning procedure for face recognition. Hiremath and Manjunatha [29] used the Radon transform [32, 33] on the texture and depth images to obtain binary maps to crop the facial region. Gabor characteristics are taken from both types of images. After PCA is employed to reduce the dimension, the feature vectors are input into an AdaBoost classifier that selects the most discriminating features.
(iii) SIFT: The SIFT descriptor [16] describes an area around a point of interest. It is invariant to affine transformations, rotations, scale, viewpoints, and changes in luminosity. In Huang et al. [20], the face depth images are first represented by a group of eLBP (Extended Local Binary Pattern) maps; then the SIFT matching algorithm is performed on these images for the detection of points of interest, local feature extraction, and matching. This method is highly discriminative for frontal faces, but its discriminative capability decreases under large variations of pose. Mian et al. [34] describe a multimodal 2D + 3D face recognition system. First, the 3D face pose is adjusted and standardized together with its texture image. For 2D images, the SIFT algorithm is applied to find local features in faces. The similarity between 2D face images is computed by the Euclidean distance of the descriptors. For 3D face images, the face is first divided into two parts: the first part is the eye and forehead area, and the second part is the nose area. Then, these regions are compared to their equivalents using the ICP algorithm [35]. The final score value is calculated by merging the scores from each system.
2. Proposed approach
2.1. Preprocessing
Figure 1 demonstrates the overview of the proposed method. 3D face scans obtained from laser scanners usually contain spikes, holes, and other facial accessories like clothes and hair. Therefore, proper data preprocessing should be carried out before facial recognition. The pipeline of our preprocessing algorithm includes these procedures: noise smoothing, hole filling, nose tip localization, and face cropping. In this paper, to improve the scans' quality, we apply a 3D face preprocessing tool developed by Szeptycki et al. [36]. Holes are detected and filled by square areas depending on the number of neighbors of each vertex. Then, a smoothing filter is used to remove the white noise of the 3D face. The tip of the nose is detected by calculating the curvature of the scans. Based on these curvature values, we label each vertex into basic geometric shape classes (elliptical concave, hyperbolic convex, hyperbolic concave, elliptical convex). Next, the tip of the nose is taken as the maximum curvature value in the convex regions (see Figure 2 for more details). The region of interest (ROI) on each scan is extracted automatically using a sphere centered at the tip of the nose, and the pose variations are corrected using a variant of the ICP algorithm [35]. Finally, the point clouds are converted to a range image representation for the next steps (a minimal sketch of the cropping and range-image conversion is given below).
Figure 1. Overview of the proposed method: both training and testing scans pass through preprocessing, structure–texture image decomposition, feature extraction, dimensionality reduction, and postprocessing before classification.
Figure 2. The pipeline of nose tip localization.
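For concreteness, the ROI cropping and range-image conversion can be sketched as follows. This is a minimal NumPy illustration, not the tool of [36]: the function names, the 90 mm sphere radius, and the 96 × 96 grid are illustrative assumptions of ours, and it presumes an already smoothed, pose-corrected N × 3 point cloud with a detected nose tip.

```python
import numpy as np

def crop_face_roi(points, nose_tip, radius=90.0):
    """Keep the vertices inside a sphere centered at the nose tip.
    The radius (in mm) is illustrative, not the paper's value."""
    dist = np.linalg.norm(points - nose_tip, axis=1)
    return points[dist <= radius]

def to_range_image(points, size=96):
    """Project a cropped, pose-corrected cloud onto a size x size XY grid,
    keeping the largest Z value (the visible surface) per cell."""
    xs = np.clip(((points[:, 0] - points[:, 0].min())
                  / np.ptp(points[:, 0]) * (size - 1)).astype(int), 0, size - 1)
    ys = np.clip(((points[:, 1] - points[:, 1].min())
                  / np.ptp(points[:, 1]) * (size - 1)).astype(int), 0, size - 1)
    img = np.full((size, size), -np.inf)
    np.maximum.at(img, (ys, xs), points[:, 2])   # max-Z per grid cell
    img[np.isinf(img)] = 0.0                     # empty cells become background
    return img
```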
2.2. Structure–texture image decomposition
A lot of information can be separated from an image: the structure, the texture, and the noise. The goal of this step is to divide an original face image f into two components: the geometrical information and the textural information. To separate this information, the structure–texture image decomposition approach has been proposed. The idea is to split a given image f into S + T (f = S + T), where T contains the textural information and S contains the geometrical information. In the literature, a very large list of advanced image decomposition models has been proposed; among the most relevant and useful are Starck et al. [37], Aujol et al. [38], Aujol and Chambolle [39], and Vese and Osher [40-42]. This paper uses the Vese and Osher [41] method to decompose the image into a structure component and a texture/noise component. Their technique is inspired by Meyer's decomposition model [43]. Figure 3 shows an intermediate result of decomposition with the Osher and Vese method. The Osher–Vese algorithm is as follows:
Algorithm 1. The Osher–Vese decomposition
Input: the image f (of size n × m, with grid step h), the iteration number nb, and a tuning parameter λ
Output: structure component S and texture/noise component T
BEGIN
  S ← f,  g1 ← −(1/(2λ)) · fx/|∇f|,  g2 ← −(1/(2λ)) · fy/|∇f|
  For k = 1 to nb
    For i = 2 to n − 1
      For j = 2 to m − 1
        c1 = 1 / sqrt( ((S(i+1, j) − S(i, j)) / h)^2 + ((S(i, j+1) − S(i, j−1)) / (2h))^2 )
        c2 = 1 / sqrt( ((S(i, j) − S(i−1, j)) / h)^2 + ((S(i−1, j+1) − S(i−1, j−1)) / (2h))^2 )
        c3 = 1 / sqrt( ((S(i+1, j) − S(i−1, j)) / (2h))^2 + ((S(i, j+1) − S(i, j)) / h)^2 )
        c4 = 1 / sqrt( ((S(i+1, j−1) − S(i−1, j−1)) / (2h))^2 + ((S(i, j) − S(i, j−1)) / h)^2 )
        S(i, j) ← [ f(i, j) + (1/(2λh^2)) · (c1·S(i+1, j) + c2·S(i−1, j) + c3·S(i, j+1) + c4·S(i, j−1)) ]
                  / [ 1 + (1/(2λh^2)) · (c1 + c2 + c3 + c4) ]
      End
    End
    T = f − S
  End
END
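A minimal NumPy sketch of this fixed-point iteration is given below. The small eps guard against division by zero and the default values of nb, λ, and h are our additions, not part of the original algorithm; for large images, the nested loops would be vectorized in practice.

```python
import numpy as np

def osher_vese(f, nb=20, lam=0.1, h=1.0, eps=1e-8):
    """Run the Algorithm 1 iteration on a 2-D array f;
    returns the structure S and the texture/noise T = f - S."""
    f = f.astype(np.float64)
    S = f.copy()
    n, m = f.shape
    w = 1.0 / (2.0 * lam * h * h)
    for _ in range(nb):
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                c1 = 1.0 / np.sqrt(eps + ((S[i+1, j] - S[i, j]) / h) ** 2
                                       + ((S[i, j+1] - S[i, j-1]) / (2 * h)) ** 2)
                c2 = 1.0 / np.sqrt(eps + ((S[i, j] - S[i-1, j]) / h) ** 2
                                       + ((S[i-1, j+1] - S[i-1, j-1]) / (2 * h)) ** 2)
                c3 = 1.0 / np.sqrt(eps + ((S[i+1, j] - S[i-1, j]) / (2 * h)) ** 2
                                       + ((S[i, j+1] - S[i, j]) / h) ** 2)
                c4 = 1.0 / np.sqrt(eps + ((S[i+1, j-1] - S[i-1, j-1]) / (2 * h)) ** 2
                                       + ((S[i, j] - S[i, j-1]) / h) ** 2)
                # Gauss-Seidel update: weighted average of f and the neighbors of S
                S[i, j] = (f[i, j] + w * (c1 * S[i+1, j] + c2 * S[i-1, j]
                                          + c3 * S[i, j+1] + c4 * S[i, j-1])) \
                          / (1.0 + w * (c1 + c2 + c3 + c4))
    return S, f - S
```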
Figure 3. Decomposition with the Osher and Vese method: (a) face image, (b) structure component, (c) texture/noise component.
2.3. Feature extraction
A good representation of patterns is one of the principal problems of all pattern recognition systems. The Patterns of Oriented Edge Magnitudes (POEM) operator [44], combining Local Binary Pattern and Histogram of Oriented Gradient, has obtained excellent results for representing a face. The key idea of the Pattern of Oriented Edge Magnitude algorithm is to characterize the appearance of objects through the relation between the local gradient distributions of adjacent patches. It first extracts the objects' details on a small scale, then encodes the information on a larger region using the LBP operator. The Pattern of Oriented Edge Magnitude algorithm is as follows:
Algorithm 2. Pattern of Oriented Edge Magnitude
Input: the image I
Output: Pattern of Oriented Edge Magnitude feature
BEGIN
Step 1: Split the image I into n × n non-overlapping cells of c1 × c2 pixels.
Step 2: Compute the gradient magnitude m(x, y) and orientation θ(x, y) for each pixel I(x, y):
  m(x, y) = sqrt(dx^2 + dy^2)   (1)
  θ(x, y) = tan⁻¹( (I(x, y+1) − I(x, y−1)) / (I(x+1, y) − I(x−1, y)) )   (2)
Step 3: Divide the interval (0°–180°) into k sub-intervals. Then, the characteristic of each pixel I(x, y) in Cell_i is accumulated as:
  H(k)_i = H(k)_i + m(x, y)  if I(x, y) ∈ Cell_i and θ(x, y) ∈ bin(k)   (3)
Step 4: For the image I, the accumulated magnitude image E_i is computed by
  E(x, y)_i = H(k)_i   (4)
  where E(x, y)_i is the value of the pixel (x, y).
Step 5: On the basis of the magnitude image E_i, the feature POEM(x, y)_i is computed by
  POEM(x, y)_i = Σ_{j=0}^{N−1} 2^j · f( abs( E^j(x, y)_i − E(x, y)_i ), δ )   (5)
  where E^j(x, y)_i is the value at the j-th sampling point around (x, y), and f(·) is the function defined by formula (6):
  f(φ, δ) = 1 if φ ≥ δ, and 0 otherwise   (6)
  where δ is a constant used as a threshold.
Step 6: The final Pattern of Oriented Edge Magnitude feature at a pixel (x, y) is the concatenation of the k orientation features:
  POEM(x, y) = {POEM(x, y)_1, …, POEM(x, y)_k}   (7)
END
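To make the six steps concrete, the following NumPy/SciPy sketch computes one POEM code image per orientation bin. It takes two liberties that should be noted: the per-cell accumulation of Steps 3–4 is approximated by a sliding box filter, and the circular sampling uses np.roll, which wraps at the image borders; all default parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def poem_codes(img, k=3, cell=7, radius=3, neighbors=8, delta=1.0):
    """Per-orientation accumulated gradient magnitudes, then an
    LBP-style thresholded comparison (Algorithm 2, Eqs. (1)-(7))."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)                        # Step 2: gradients
    mag = np.hypot(gx, gy)
    theta = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)  # unsigned orientation
    bins = np.minimum((theta * k / 180.0).astype(int), k - 1)

    codes = []
    for b in range(k):                               # Step 3: one orientation bin at a time
        E = convolve(np.where(bins == b, mag, 0.0),
                     np.ones((cell, cell)), mode='nearest')  # Step 4: local accumulation
        code = np.zeros(E.shape, dtype=np.int32)
        for j in range(neighbors):                   # Step 5: LBP-style encoding on E
            ang = 2.0 * np.pi * j / neighbors
            dx = int(round(radius * np.cos(ang)))
            dy = int(round(radius * np.sin(ang)))
            Ej = np.roll(np.roll(E, dy, axis=0), dx, axis=1)
            code |= (np.abs(Ej - E) >= delta).astype(np.int32) << j  # Eq. (6) threshold
        codes.append(code)
    return codes                                     # Step 6: k orientation code images
```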
2.4. Postprocessing
Considering that the local features extracted by the POEM local descriptor are large, we use Independent Component Analysis (ICA) [45] to reduce the dimension of the features. ICA seeks a linear transformation of the input data onto a basis that is as statistically independent as possible. This means that the value of one component does not give any information about the values of the other components. ICA can be seen as a generalization of PCA. In tasks such as facial recognition, much of the valuable information is contained in the images' high-order statistics. From another viewpoint, PCA decorrelates the inputs under the assumption of a Gaussian distribution of the data, while ICA requires the distribution of the data to be non-Gaussian and maximizes the inputs' independence. A brief usage sketch is given below.
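Since [45] is the FastICA algorithm, this reduction can be illustrated with scikit-learn's implementation of it. The matrix sizes and the number of components below are placeholders of ours, as the paper does not state the reduced dimension.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder matrices: one POEM feature vector per scan (sizes are illustrative).
gallery_features = np.random.rand(61, 4096)
probe_features = np.random.rand(488, 4096)

ica = FastICA(n_components=60, random_state=0)
gallery_reduced = ica.fit_transform(gallery_features)  # learn the unmixing on the gallery
probe_reduced = ica.transform(probe_features)          # project probes into the same space
```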
We used our previous postprocessing framework [46] to address the outliers embedded in a feature. The pipeline of the postprocessing contains 3 main parts, sketched below. 1) First, we decompose the features based on multi-dimensional ensemble empirical mode decomposition (MEEMD) [47] to bring out the possible variations. 2) Then, we filter each scale in the spatial and frequency domains to extract the discriminative and salient information. 3) Finally, we combine all scales to reconstruct the filtered features.
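The three parts can be organized as follows. Note that decompose is a stand-in for an MEEMD implementation [47] (assumed to return a list of same-length scales that sum back to the feature), and the Gaussian smoothing is only a placeholder for the spatial/frequency filtering of [46], not the actual filters used there.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def postprocess_feature(feature, decompose, sigma=1.0):
    """Three-part postprocessing: decompose -> filter each scale -> recombine."""
    scales = decompose(feature)                               # 1) multi-scale decomposition
    filtered = [gaussian_filter1d(s, sigma) for s in scales]  # 2) per-scale filtering
    return np.sum(filtered, axis=0)                           # 3) reconstruct the feature
```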
The results in this database are reported in Table 1. The
3. Experiments details proposed approach achieves higher overall (all versus
neutral) compared to other approaches with 97.05%. The
This section will present various experiments conducted
recognition rate for the expression scans and neutral+ex-
to demonstrate our algorithm's effectiveness in face recog-
pression is close to the approach proposed in Abbad et al.
nition. we carry out the experimental tests on two data-
but scans with poses is slightly less than the approach pro-
bases publicly available, namely, GavabDB[48] and
posed in Drira et al. The Cumulative Matching Character-
Bosphorus [49]. In these subsections, we present these
istic (CMC) curve of each category is drawn in Figure 4.
two databases in detail and the results of our experiments.
Table 1. Rank-1 recognition rates (%) on GavabDB.

| Approaches | Neutral | Expression | Neutral + Expression | Looking up | Looking down | Overall scans |
| --- | --- | --- | --- | --- | --- | --- |
| Moreno et al., 2005 [52] | 90.16 | 77.90 | - | - | - | - |
| Mousavi et al., 2008 [50] | - | - | 91.00 | - | - | 81.67 |
| Zhang et al., 2014 [51] | - | - | - | - | - | 92.26 |
| Li et al., 2009 [53] | 96.67 | 93.33 | 94.68 | - | - | - |
| Mahoor et al., 2009 [54] | - | 72.00 | 78.00 | 85.30 | 88.60 | - |
| Huang et al., 2012 [20] | 100 | 93.99 | 95.49 | 96.70 | 96.70 | - |
| Drira et al., 2013 [55] | 100 | 94.54 | 95.90 | 100 | 98.36 | 96.99 |
| Berretti et al., 2013 [56] | 100 | 94.00 | 95.10 | 96.70 | 95.10 | - |
| Lei et al., 2016 [57] | 100 | 95.08 | 96.31 | 98.36 | 98.36 | 96.99 |
| Abbad et al., 2018 [58] | 100 | 98.90 | 99.18 | - | - | - |
| Our method | 100 | 96.72 | 97.54 | 98.36 | 96.72 | 97.05 |
Figure 4. The CMC curves of the proposed method using the GavabDB database.
3.2. Bosphorus 3D face database
To further evaluate the proposed algorithm under facial expression variations, pose, and occlusion, in this section we apply the Bosphorus database [49]. This dataset includes 4666 facial scans of 105 individuals aged between 25 and 35 years old (60 men and 45 women). The subjects can be classified as follows:
- 34 subjects with 31 scans (4 neutral, 10 expressions, 13 poses, and 4 occlusions);
- 71 subjects with up to 54 face scans (one or two neutral faces, 34 expressions, 13 poses, and 4 occlusions).
Following the same experimental protocol as [10], the first neutral scan of each subject was selected to form the gallery (105 in total), while the other scans were organized according to their type into different classes, which form the probe set (the number of probes per class is stated in Table 2). Table 2 presents the rank-1 recognition rates of the proposed approach for each subset of the Bosphorus database.
From Table 2 and Figure 5, we can notice that the proposed approach outperforms the other approaches on most subsets. Our approach achieves the highest performance on the largest facial expression subsets, LFAU and UFAU, with 99.03% and 100%, respectively. Our proposed method significantly outperforms all the other methods on the subsets of pose variations, especially on the YR and PR subsets, with 89.52% and 100%, respectively. Regarding the occlusion subsets, the proposed approach achieves competitive performance compared to the best results reported in [60]. The proposed approach achieves 95.95%, the best rank-one recognition rate on the entire database compared to the other methods.
To summarize, from Table 2 we can deduce that, owing to the capture of significant information, the extraction of descriptors, and the postprocessing of features, the proposed method is more efficient and robust than the state-of-the-art.
Table 2. Rank-1 recognition rates (%) on Bosphorus.

| Categories | Our method | Smeets et al. [62] | Berretti et al., 2013 [56] (meshDOG) | Li et al., 2011 [60] (HoG+HoS+HoGS) | Deng et al., 2020 [61] |
| --- | --- | --- | --- | --- | --- |
| Neutral (105) | 100.00 | 100.00 | - | 97.90 | 100.00 |
| Anger (71) | 95.77 | 97.20 | - | 85.90 | 88.70 |
| Disgust (69) | 79.71 | 94.20 | - | 81.20 | 76.80 |
| Fear (70) | 91.43 | 97.10 | - | 90.00 | 92.90 |
| Happy (106) | 72.64 | 96.20 | - | 92.50 | 95.30 |
| Sadness (66) | 100.00 | 98.50 | - | 93.90 | 95.50 |
| Surprise (71) | 97.18 | 98.60 | - | 91.50 | 98.60 |
| Other (18) | 100.00 | - | - | 100.00 | - |
| CAU (169) | 98.22 | - | - | 95.50 | 98.80 |
| UFAU (432) | 100.00 | - | - | 98.40 | 99.10 |
| LFAU (1549) | 99.03 | - | - | 96.50 | 97.20 |
| YR (735) | 89.52 | 79.00 | - | 81.60 | 78.00 |
| PR (419) | 100.00 | 97.90 | - | 98.30 | 98.80 |
| CR (211) | 86.26 | 95.30 | - | 93.40 | 94.30 |
| O (381) | 97.11 | 97.90 | - | 93.20 | 99.20 |
| Neutral vs. All (4561) | 95.95 | - | 93.70 | 93.40 | 94.10 |
Figure 5. The CMC curves of the proposed method using the Bosphorus database.
4. Conclusions
In this paper, we have proposed a robust 3D face recognition method based on structure–texture image decomposition and the Pattern of Oriented Edge Magnitude descriptor. This method has several properties that make it appropriate for 3D face recognition scenarios. The approach first decomposes the face mesh's depth image into structure and texture. Then, local descriptors are extracted through the Pattern of Oriented Edge Magnitude descriptor. Finally, a postprocessing step is applied to improve the accuracy during matching. Experiments have been conducted on the GavabDB and Bosphorus datasets. Promising results have been achieved, and comparisons with other 3D facial recognition methods have demonstrated our method's effectiveness.

Acknowledgment
The author wants to thank everyone who supported the completion of this work.

Conflict of interest
There is no conflict of interest.

References
[1] M. Li, B. Huang, G. Tian, A comprehensive survey on 3D face recognition methods, Engineering Applications of Artificial Intelligence. (2022) vol. 110, p. 104669. https://doi.org/10.1016/j.engappai.2022.104669
[2] D. Marvadi, C. Paunwala, M. Joshi, A. Vora, Comparative analysis of 3D face recognition using 2D-PCA and 2D-LDA approaches, in Engineering (NUiCONE), 5th Nirma University International Conference. (2015) pp. 1-5. https://doi.org/10.1109/NUICONE.2015.7449603
[3] C. Hesher, A. Srivastava, G. Erlebacher, A novel technique for face recognition using range imaging, in Signal Processing and Its Applications. (2003) pp. 201-204. https://doi.org/10.1109/ISSPA.2003.1224850
[4] K. Tonchev, A. Manolova, I. Paliy, Comparative analysis of 3D face recognition algorithms using range image and curvature-based representations, in Intelligent Data Acquisition and Advanced Computing Systems (IDAACS), IEEE 7th International Conference. (2013) pp. 394-398. https://doi.org/10.1109/IDAACS.2013.6662714
[5] P. Kamencay, R. Hudec, M. Benco, M. Zachariasova, 2D-3D Face Recognition Method Based on a Modified CCA-PCA Algorithm, International Journal of Advanced Robotic Systems. (2014) vol. 11, p. 36. https://doi.org/10.5772/58251
[6] O. Gervei, A. Ayatollahi, N. Gervei, 3D face recognition using modified PCA methods, World Academy of Science, Engineering and Technology. (2010) vol. 39. https://doi.org/10.5281/zenodo.1061701
[7] O. Agbolade, A. Nazri, R. Yaakob, A. A. Ghani, Y. K. Cheah, 3-Dimensional facial expression recognition in human using multi-points warping, BMC Bioinformatics. (2019) vol. 20, p. 619. https://doi.org/10.1186/s12859-019-3153-2
[8] M. A. Turk, A. P. Pentland, Face recognition using eigenfaces, in Proceedings, IEEE Computer Society Conference on Computer Vision and Pattern Recognition. (1991) pp. 586-591. https://doi.org/10.1109/CVPR.1991.139758
[9] K. Chang, K. Bowyer, P. Flynn, Face recognition using 2D and 3D facial data, in ACM Workshop on Multimodal User Authentication. (2003) pp. 25-32.
[10] X. Yuan, J. Lu, T. Yahagi, A method of 3D face recognition based on principal component analysis algorithm, in Circuits and Systems, ISCAS, IEEE International Symposium. (2005) pp. 3211-3214. https://doi.org/10.1109/ISCAS.2005.1465311
[11] K. Etemad, R. Chellappa, Discriminant analysis for recognition of human face images, JOSA A. (1997) vol. 14, pp. 1724-1733. https://doi.org/10.1364/JOSAA.14.001724
[12] K.-C. Wong, W.-Y. Lin, Y. H. Hu, N. Boston, X. Zhang, Optimal linear combination of facial regions for improving identification performance, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). (2007) vol. 37, pp. 1138-1148. https://doi.org/10.1109/TSMCB.2007.895325
[13] X. Lu, A. K. Jain, Integrating range and texture information for 3D face recognition, in Application of Computer Vision, WACV/MOTIONS'05 Volume 1, Seventh IEEE Workshops. (2005) pp. 156-163. https://doi.org/10.1109/ACVMOT.2005.64
[14] T. Ahonen, A. Hadid, M. Pietikäinen, Face recognition with local binary patterns, Computer Vision - ECCV. (2004) pp. 469-481. https://doi.org/10.1007/978-3-540-24670-1_36
[15] C. Liu, H. Wechsler, Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition, IEEE Transactions on Image Processing. (2002) vol. 11, pp. 467-476. https://doi.org/10.1109/TIP.2002.999679
[16] D. G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision. (2004) vol. 60, pp. 91-110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
[17] S. Z. Li, C. Zhao, M. Ao, Z. Lei, Learning to fuse 3D+2D based face recognition at both feature and decision levels, in AMFG. (2005) pp. 44-54. https://doi.org/10.1007/11564386_5
[18] Y. Wang, J. Liu, X. Tang, Robust 3D face recognition by local shape difference boosting, IEEE Transactions on Pattern Analysis and Machine Intelligence. (2010) vol. 32, pp. 1858-1870. https://doi.org/10.1109/TPAMI.2009.200
[19] C. Xu, S. Li, T. Tan, L. Quan, Automatic 3D face recognition from depth and intensity Gabor features, Pattern Recognition. (2009) vol. 42, pp. 1895-1905. https://doi.org/10.1016/j.patcog.2009.01.001
[20] D. Huang, M. Ardabilian, Y. Wang, L. Chen, 3-D face recognition using eLBP-based facial description and local feature hybrid matching, IEEE Transactions on Information Forensics and Security. (2012) vol. 7, pp. 1551-1565. https://doi.org/10.1109/TIFS.2012.2206807
[21] D. Huang, M. Ardabilian, Y. Wang, L. Chen, Automatic asymmetric 3D-2D face recognition, in Pattern Recognition (ICPR), 20th International Conference. (2010) pp. 1225-1228. https://doi.org/10.1109/ICPR.2010.305
[22] H. Tang, B. Yin, Y. Sun, Y. Hu, 3D face recognition using local binary patterns, Signal Processing. (2013) vol. 93, pp. 2190-2198. https://doi.org/10.1016/j.sigpro.2012.04.002
[23] X. Wang, Q. Ruan, Y. Ming, 3D face recognition using corresponding point direction measure and depth local features, in Signal Processing (ICSP), IEEE 10th International Conference. (2010) pp. 86-89. https://doi.org/10.1109/ICOSP.2010.5656654
[24] X. Li, Q. Ruan, Y. Jin, G. An, R. Zhao, Fully automatic 3D facial expression recognition using polytypic multi-block local binary patterns, Signal Processing. (2015) vol. 108, pp. 297-308. https://doi.org/10.1016/j.sigpro.2014.09.033
[25] P. Xiong, L. Huang, C. Liu, Real-time 3D face recognition with the integration of depth and intensity images, Image Analysis and Recognition. (2011) pp. 222-232. https://doi.org/10.1007/978-3-642-21596-4_23
[26] D. Huang, G. Zhang, M. Ardabilian, Y. Wang, L. Chen, 3D face recognition using distinctiveness enhanced facial representations and local feature hybrid matching, in Biometrics: Theory, Applications and Systems (BTAS), Fourth IEEE International Conference. (2010) pp. 1-7. https://doi.org/10.1109/BTAS.2010.5634497
[27] L. Shi, X. Wang, Y. Shen, Research on 3D face recognition method based on LBP and SVM, Optik. (2020) vol. 220, p. 165157. https://doi.org/10.1016/j.ijleo.2020.165157
[28] Y. Wang, C.-S. Chua, Y.-K. Ho, Facial feature detection and face recognition from 2D and 3D images, Pattern Recognition Letters. (2002) vol. 23, pp. 1191-1202. https://doi.org/10.1016/S0167-8655(02)00066-1
[29] P. Hiremath, H. Manjunatha, 3D face recognition based on depth and intensity Gabor features using symbolic PCA and AdaBoost, 2014. http://dx.doi.org/10.14257/ijsip.2013.6.5.01
[30] G. Torkhani, A. Ladgham, A. Sakly, M. N. Mansouri, A 3D-2D face recognition method based on extended Gabor wavelet combining curvature and edge detection, Signal, Image and Video Processing. (2017) vol. 11, pp. 969-976. http://dx.doi.org/10.1007/s11760-016-1046-7
[31] Y. Freund, R. E. Schapire, A decision-theoretic generalization of on-line learning and an application to boosting, in European Conference on Computational Learning Theory. (1995) pp. 23-37. https://doi.org/10.1006/jcss.1997.1504
[32] S. Helgason, The Radon Transform on R^n, Integral Geometry and Radon Transforms, ed: Springer. (2011) pp. 1-62.
[33] G. Beylkin, Imaging of discontinuities in the inverse scattering problem by inversion of a causal generalized Radon transform, Journal of Mathematical Physics. (1985) vol. 26, pp. 99-108. https://doi.org/10.1063/1.526755
[34] A. Mian, M. Bennamoun, R. Owens, An efficient multimodal 2D-3D hybrid approach to automatic face recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence. (2007) vol. 29. https://doi.org/10.1109/TPAMI.2007.1105
[35] Y. Huang, Y. Wang, T. Tan, Combining Statistics of Geometrical and Correlative Features for 3D Face Recognition. (2006) pp. 90.1-90. https://doi.org/10.5244/C.20.90
[36] P. Szeptycki, M. Ardabilian, L. Chen, A coarse-to-fine curvature analysis-based rotation invariant 3D face landmarking, IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems. (2009) pp. 1-6. https://doi.org/10.1109/BTAS.2009.5339052
[37] J.-L. Starck, M. Elad, D. L. Donoho, Image decomposition via the combination of sparse representations and a variational approach, IEEE Transactions on Image Processing. (2005) vol. 14, pp. 1570-1582. https://doi.org/10.1109/TIP.2005.852206
[38] J.-F. Aujol, G. Aubert, L. Blanc-Féraud, A. Chambolle, Image decomposition into a bounded variation component and an oscillating component, Journal of Mathematical Imaging and Vision. (2005) vol. 22, pp. 71-88. https://doi.org/10.1007/s10851-005-4783-8
[39] J.-F. Aujol, A. Chambolle, Dual norms and image decomposition models, International Journal of Computer Vision. (2005) vol. 63, pp. 85-104. https://doi.org/10.1007/s11263-005-4948-3
[40] L. A. Vese, S. J. Osher, Modeling textures with total variation minimization and oscillating patterns in image processing, Journal of Scientific Computing. (2003) vol. 19, pp. 553-572. https://doi.org/10.1023/A:1025384832106
[41] L. A. Vese, S. J. Osher, Image denoising and decomposition with total variation minimization and oscillatory functions, Journal of Mathematical Imaging and Vision. (2004) vol. 20, pp. 7-18. https://doi.org/10.1023/B:JMIV.0000011316.54027.6a
[42] L. A. Vese, S. J. Osher, Color texture modeling and color image decomposition in a variational-PDE approach, Eighth International Symposium on Symbolic and Numeric Algorithms for Scientific Computing. (2006) pp. 103-110. https://doi.org/10.1109/SYNASC.2006.24
[43] Y. Meyer, Oscillating patterns in image processing and nonlinear evolution equations: the fifteenth Dean Jacqueline B. Lewis memorial lectures, vol. 22, American Mathematical Society. (2001). ISBN: 978-0-8218-2920-2
[44] N.-S. Vu, A. Caplier, Enhanced patterns of oriented edge magnitudes for face recognition and image matching, IEEE Transactions on Image Processing. (2012) vol. 21, pp. 1352-1365. https://doi.org/10.1109/TIP.2011.2166974
[45] A. Hyvarinen, Fast and robust fixed-point algorithms for independent component analysis, IEEE Transactions on Neural Networks. (1999) vol. 10, pp. 626-634. https://doi.org/10.1109/72.761722
[46] A. Abbad, O. Elharrouss, K. Abbad, H. Tairi, Application of MEEMD in post-processing of dimensionality reduction methods for face recognition, IET Biometrics. (2018) vol. 8, pp. 59-68. https://doi.org/10.1049/iet-bmt.2018.5033
[47] Z. Wu, N. E. Huang, X. Chen, The multi-dimensional ensemble empirical mode decomposition method, Advances in Adaptive Data Analysis. (2009) vol. 1, pp. 339-372. https://doi.org/10.1142/S1793536909000187
[48] A. B. Moreno, A. Sánchez, GavabDB: a 3D face database, in Proc. 2nd COST275 Workshop on Biometrics on the Internet, Vigo (Spain). (2004) pp. 75-80.
[49] N. Alyuz, B. Gokberk, L. Akarun, A 3D face recognition system for expression and occlusion invariance, in Biometrics: Theory, Applications and Systems (BTAS), 2nd IEEE International Conference. (2008) pp. 1-7. https://doi.org/10.1109/BTAS.2008.4699389
[50] M. H. Mousavi, K. Faez, A. Asghari, Three Dimensional Face Recognition Using SVM Classifier. (2008) pp. 208-213. https://doi.org/10.1109/ICIS.2008.77
[51] L. Zhang, Z. Ding, H. Li, Y. Shen, J. Lu, 3D face recognition based on multiple keypoint descriptors and sparse representation, PLoS One. (2014) vol. 9, p. e100120. https://doi.org/10.1371/journal.pone.0100120
[52] A. B. Moreno, A. Sanchez, J. Velez, J. Diaz, Face recognition using 3D local geometrical features: PCA vs. SVM, in Image and Signal Processing and Analysis, Proceedings of the 4th International Symposium. (2005) pp. 185-190. https://doi.org/10.1109/ISPA.2005.195407
[53] X. Li, T. Jia, H. Zhang, Expression-insensitive 3D face recognition using sparse representation, in Computer Vision and Pattern Recognition (CVPR 2009), IEEE Conference. (2009) pp. 2575-2582. https://doi.org/10.1109/CVPR.2009.5206613
[54] M. H. Mahoor, M. Abdel-Mottaleb, Face recognition based on 3D ridge images obtained from range data, Pattern Recognition. (2009) vol. 42, pp. 445-451. https://doi.org/10.1016/j.patcog.2008.08.012
[55] H. Drira, B. B. Amor, A. Srivastava, M. Daoudi, R. Slama, 3D face recognition under expressions, occlusions, and pose variations, IEEE Transactions on Pattern Analysis and Machine Intelligence. (2013) vol. 35, pp. 2270-2283. https://shs.hal.science/halshs-00783066
[56] S. Berretti, N. Werghi, A. del Bimbo, P. Pala, Matching 3D face scans using interest points and local histogram descriptors, Computers & Graphics. (2013) vol. 37, pp. 509-525. https://doi.org/10.1016/j.cag.2013.04.001
[57] Y. Lei, Y. Guo, M. Hayat, M. Bennamoun, X. Zhou, A two-phase weighted collaborative representation for 3D partial face recognition with single sample, Pattern Recognition. (2016) vol. 52, pp. 218-237. https://doi.org/10.1016/j.patcog.2015.09.035
[58] A. Abbad, K. Abbad, H. Tairi, 3D face recognition: Multi-scale strategy based on geometric and local descriptors, Computers & Electrical Engineering. (2018) vol. 70, pp. 525-537. https://doi.org/10.1016/j.compeleceng.2017.08.017
[59] A. Abbad, K. Abbad, H. Tairi, 3D face recognition in the presence of facial expressions based on empirical mode decomposition, in Proceedings of the 2nd Mediterranean Conference on Pattern Recognition and Artificial Intelligence. (2018) pp. 1-6. https://doi.org/10.1145/3177148.3180087
[60] H. Li, D. Huang, P. Lemaire, J.-M. Morvan, L. Chen, Expression robust 3D face recognition via mesh-based histograms of multiple order surface differential quantities, in Image Processing (ICIP), 18th IEEE International Conference. (2011) pp. 3053-3056. https://doi.org/10.1109/ICIP.2011.6116308
[61] X. Deng, F. Da, H. Shao, Y. Jiang, A multi-scale three-dimensional face recognition approach with sparse representation-based classifier and fusion of local covariance descriptors, Computers & Electrical Engineering. (2020) vol. 85, p. 106700. https://doi.org/10.1016/j.compeleceng.2020.106700
[62] D. Smeets, J. Keustermans, D. Vandermeulen, P. Suetens, meshSIFT: Local surface features for 3D face recognition under expression variations and partial data, Computer Vision and Image Understanding. (2013) vol. 117, pp. 158-169. https://doi.org/10.1016/j.cviu.2012.10.002
Title Page
Recognition of 3D Face using Image Decomposition
based on Patterns of Oriented Edge Magnitudes
Descriptor
Thontadari C
Associate Professor, School of Computer Science and Applications, Reva University, Bangalore,
India.
Email: [email protected]
ORCID iD: 0000-0001-9422-6972
Cover Letter
Recognition of 3D Face using Image Decomposition
based on Patterns of Oriented Edge Magnitudes
Descriptor
Thontadari C
Associate Professor, School of Computer Science and Applications, Reva University, Bangalore,
India.
Email: [email protected]
This work is authentic and is not currently submitted to, under review by, or published in any other
journal.
Ethical Approval
Consent form
I, Dr. Thontadari C, give my consent for information about myself/my child or ward/my relative (circle as
appropriate) to be published in Karbala International Journal of Modern Science (KIJOMS).
I understand that the information will be published without my/my child or ward’s/my relative’s (circle as
appropriate) name attached, but that full anonymity cannot be guaranteed.
I understand that the text and any pictures or videos published in the article will be freely available on the
internet and may be seen by the general public. The pictures, videos and text may also appear on other websites
or in print, may be translated into other languages or used for commercial purposes.
I have been offered the opportunity to read the manuscript.
Signing this consent form does not remove my rights to privacy.
Name: Dr. Thontadari C
Date: 16-03-2024
Signed:
Author name: Dr. Thontadari C
Please keep this consent form in the patient's case files. The manuscript reporting this patient's details
should state that 'Written informed consent for publication of their clinical details and/or clinical
images was obtained from the patient/parent/guardian/relative of the patient. A copy of the consent
form is available for review by the Editor of this journal.'