Physiognomy: Personality Traits Prediction by Learning
DOI: 10.1007/s11633-017-1085-8
Abstract: Evaluating individuals' personality traits and intelligence from their faces plays a crucial role in interpersonal relationships
and in important social events such as elections and court sentences. To assess the possible correlations between personality traits
(also measured intelligence) and face images, we first construct a dataset consisting of face photographs, personality measurements,
and intelligence measurements. Then, we build an end-to-end convolutional neural network for prediction of personality traits and
intelligence to investigate whether self-reported personality traits and intelligence can be predicted reliably from a face image. To our
knowledge, it is the first work where deep learning is applied to this problem. Experimental results show the following three points: 1)
“Rule-consciousness” and “Tension” can be reliably predicted from face images. 2) It is difficult, if not impossible, to predict intelligence
from face images, a finding in accord with previous studies. 3) Convolutional neural network (CNN) features outperform traditional
handcrafted features in predicting traits.
Keywords: Personality traits, physiognomy, face image, deep learning, convolutional neural network (CNN).
outperformed the existing systems using low-level features[23−32]. However, as far as we know, there is no work that employs CNNs to predict personality traits and intelligence from the human face.

Our experimental results show that some personality traits can be reliably predicted from face images and may depend largely on genetic qualities, while others may rely largely on the social environment; no reliable prediction from face images is possible for measured intelligence; and no evident linear correlation between the predicted scores and the measured scores of the personality traits and intelligence is found in the results of the regression experiments.

In summary, our work makes two major contributions. Firstly, we build an end-to-end neural network that extracts features and performs classification or regression to predict personality traits and measured intelligence from the human face for the first time. Secondly, we construct a dataset for the East-Asian race to investigate the correlations between personality traits, measured intelligence and face images. The dataset consists of face photographs, personality measurements and intelligence measurements.

2 Dataset

We construct a dataset, named the physiognomy dataset, to investigate the correlations between personality traits (also measured intelligence) and face images. It consists of face photographs, personality measurements and intelligence measurements. Our dataset is designed for the East-Asian race, unlike the existing works[33], which target the Caucasian race. Face photographs of 186 people (94 men and 92 women) are included in the dataset. The participants were photographed with a neutral expression while sitting in front of a white background.

Ethics statement. This research was approved by the Institutional Review Board of the Institute of Automation of the Chinese Academy of Sciences. The participants were asked to give verbal consent to participate in the research, and all data were collected after this consent was obtained. The consent is thereby documented by the recording of the data. The guidelines of the Ethics Committee, Ministry of Health of the People's Republic of China state that written consent is only required if biological samples are collected, which was not the case in this study. In addition, the data for the self-reported personality traits and the measured intelligence were analyzed anonymously.

Personality measurements. Cattell's sixteen personality factors (16PF)[34], a normative self-reported questionnaire that scores 16 personality traits, was used to measure participants' personality traits in our work. The traits measured by 16PF are warmth, reasoning, emotional stability, dominance, liveliness, rule-consciousness, social boldness, sensitivity, vigilance, abstractedness, privateness, apprehension, openness to change, self-reliance, perfectionism and tension. By assessing the responses to the questionnaire with the commercial software developed by the Beijing Normal University Education Training Center, a discrete score ranging from 1 to 10 was obtained for each personality trait. Since the subjects of this Chinese questionnaire are students in China, it is suitable for testing the personality traits of Chinese people. Based on these sixteen personality factors, Professor Cattell performed a second-order factor analysis and obtained the following four second-order factors: adaptation or anxiety, introversion or extroversion, impetuous action or undisturbed intellect, and cowardice or resolution. The above 20 traits are used to describe the personality of each participant in our experiments. Fig. 1 shows an example of the 20 personality trait scores for one participant.

Fig. 1 Scores of 20 personality traits for one participant. Lengths of bars represent the scores of traits. The higher the score is, the more salient the trait is. (Color versions of the figures in this paper are available online at http://link.springer.com.)

Intelligence measurements. To measure participants' intelligence, each subject was instructed to fill out Raven's standard progressive matrices (SPM) questionnaire[35], which was compiled by the British psychologist Raven in 1938. The test primarily measures the participants' observational ability and ability to think clearly. To measure the intelligence level of a participant, the total score for the right answers was calculated and converted to a percentile score. This intelligence test comprises 60 questions divided into five groups, A, B, C, D and E, with 12 questions in each group. Fig. 2 shows some examples of these questions. The difficulty of the problems increases gradually from the first group to the last, and the problems within each group are also arranged by difficulty, from easy to difficult. The thought process required to complete the questions differs in each group. In our experiments, 186 participants completed the test, and the percentile scores were used to indicate their intelligence levels.
Data preprocessing. First, we use the method described by Qin and Zhang[36] to detect the facial landmarks (including the two pupil locations) in each face image. Then, based on the coordinates of these two points, a similarity transformation (a rotation + rescaling in our work) is performed on all the images in the dataset such that the two transformed pupils are horizontally positioned with a fixed distance between them. An original image and its transformed image are shown in Figs. 3 (a) and (b), respectively. In addition, to remove the redundant background information from the photographed images, we select the image region so that the eyes lie horizontally at the same height and a standard length of neck remains visible.
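As an illustration, the following is a minimal Python/OpenCV sketch of this alignment step, assuming the two pupil coordinates have already been detected (the landmark detector of Qin and Zhang[36] is not reproduced here, and the function name and the fixed inter-pupil distance of 60 pixels are illustrative choices):

```python
import cv2
import numpy as np

def align_by_pupils(image, left_pupil, right_pupil, target_dist=60.0):
    # Similarity transform (rotation + rescaling) that makes the two
    # pupils horizontal with a fixed inter-pupil distance in pixels.
    left = np.asarray(left_pupil, dtype=np.float64)
    right = np.asarray(right_pupil, dtype=np.float64)
    dx, dy = right - left
    angle = np.degrees(np.arctan2(dy, dx))   # current tilt of the eye line
    scale = target_dist / np.hypot(dx, dy)   # rescale to the fixed distance
    center = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, scale)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, M, (w, h))
```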
In order to remove irrelevant information such as the background, hair and clothing, researchers usually crop the images in facial analysis. In this work, we also cropped the sample images using the steps shown in Fig. 4. First, the two pupils are connected by a line segment AB; then a perpendicular segment MC is drawn downward, where MC = (1/2)AB and M is the midpoint of AB. Finally, the cropped region is a square with each side equal to 2AB, centered at C. Fig. 3 (c) shows an example of a cropped image.
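The cropping geometry maps directly to code; the following small sketch (function name ours) assumes the image has already been aligned so that AB is horizontal:

```python
import numpy as np

def crop_face_square(image, left_pupil, right_pupil):
    # Cropping rule from the text: M is the midpoint of the pupil
    # segment AB, C lies a distance AB/2 straight below M, and the
    # cropped region is a square of side 2*AB centered at C.
    A = np.asarray(left_pupil, dtype=np.float64)
    B = np.asarray(right_pupil, dtype=np.float64)
    ab = np.linalg.norm(B - A)              # inter-pupil distance AB
    M = (A + B) / 2.0
    C = M + np.array([0.0, ab / 2.0])       # y grows downward in images
    half = ab                               # half of the square side 2*AB
    x0, y0 = np.floor(C - half).astype(int)
    x1, y1 = np.ceil(C + half).astype(int)
    return image[max(y0, 0):y1, max(x0, 0):x1]
```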
Remarks. Due to the expensive cost of collecting and labeling samples, the physiognomy dataset is not very large. However, our dataset is larger than other datasets dealing with the same problem. To protect the privacy of the participants in our experiments, the physiognomy dataset is not publicly released.
Fig. 5 Architecture of the proposed network. VGG Face∗ denotes the VGG Face network with its last fully-connected layer omitted
Inspired by the fact that the performance of a CNN can be improved by transfer learning[12, 13], VGG Face[14], which performs well on the LFW[15] dataset, is utilized as part of our proposed network.

We design a multi-task neural network to predict all the personality traits and intelligence jointly. The input of the network is a 224 × 224 × 3 color face image. To adapt the network to our goal, we append several fully-connected layers to the left part of the network. When training for regression, the network in Fig. 5 (a) is adopted. For classification, the network in Fig. 5 (b) is used.

In the classification task, all the layers except the last one are shared by all the traits, while trait-dependent layers (one fully-connected layer and one softmax layer for each trait k, k = 1, · · · , 21) are stacked over them. During training, the loss for trait k only propagates to its corresponding top fully-connected layer and the lower shared layers. More specifically, there are 21 fully-connected layers for the 21 traits, and each fully-connected layer contains two neurons. Because each trait is classified into a binary category, every pair of neurons corresponds to one trait. Each fully-connected layer is fed to a two-way softmax which produces a distribution over the two class labels. The softmax loss is used to optimize the network.

In the regression task, the network is similar to that for classification. The lower layers are shared by all the traits, and each trait-specific fully-connected layer contains one neuron. The output of each neuron is the predicted score for one personality trait or intelligence. The Euclidean loss is used to measure the distance between the predicted values and the measured ones.
Note that training for the classification task is independent of that for the regression task, and the experiment for each task is carried out independently. Our method is implemented with Caffe[37], one of the most popular deep learning frameworks. We train our models using stochastic gradient descent with a momentum of 0.9 and a weight decay of 0.004. The weights of the fully-connected layers are initialized from a zero-mean Gaussian distribution with standard deviation 0.01. The learning rate is initialized with different values according to the task.
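For concreteness, the following is a minimal PyTorch sketch of the two head layouts described above (our actual implementation is a Caffe model); `backbone` stands for VGG Face∗ together with the appended shared fully-connected layers, and the class name, `feat_dim` and the placeholder learning rate are illustrative assumptions:

```python
import torch
import torch.nn as nn

NUM_TRAITS = 21  # 20 personality traits + measured intelligence

class TraitHeads(nn.Module):
    # Shared layers feed 21 trait-specific fully-connected heads:
    # two neurons per trait (two-way softmax) for classification,
    # one neuron per trait for regression.
    def __init__(self, backbone, feat_dim, task="classification"):
        super().__init__()
        self.backbone = backbone          # shared by all traits
        out_dim = 2 if task == "classification" else 1
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, out_dim) for _ in range(NUM_TRAITS)])
        for head in self.heads:           # zero-mean Gaussian, std 0.01
            nn.init.normal_(head.weight, mean=0.0, std=0.01)
            nn.init.zeros_(head.bias)

    def forward(self, x):
        shared = self.backbone(x)
        # Summing per-trait losses (softmax cross-entropy or Euclidean/MSE)
        # means the loss of trait k reaches only head k and the shared layers.
        return [head(shared) for head in self.heads]

# Optimizer settings reported in the text (the learning rate is
# task-dependent; 1e-3 and feat_dim=4096 are placeholders):
# model = TraitHeads(backbone, feat_dim=4096)
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
#                             momentum=0.9, weight_decay=0.004)
```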
Network architecture using traditional features. We also design a network architecture using traditional features as input to conduct comparative experiments. Some handcrafted features have achieved good performance in face identification and verification. Five descriptors are used to represent facial features. Four are local descriptors (histogram of oriented gradients (HOG)[38], local binary patterns (LBP)[39], Gabor[40], and scale-invariant feature transform (SIFT)[41]), and one is a global descriptor (GIST[42]). The concatenation of the above five descriptors is used as our final feature. As shown in Fig. 6, this feature is used as the input of the designed network. A four-layer neural network is designed, and the classification module and regression module are similar to those in Fig. 5.
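As an illustration of how such a feature vector can be assembled, the sketch below computes two of the five descriptors with scikit-image and concatenates them; the remaining Gabor, SIFT and GIST components, and all parameter values, are illustrative assumptions rather than the exact configuration used in our experiments:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def handcrafted_feature(gray):
    # HOG: local gradient-orientation histograms over the face image.
    h = hog(gray, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2))
    # Uniform LBP summarized as a histogram (P=8 neighbors, radius 1).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Gabor, dense SIFT and GIST responses would be concatenated here too.
    return np.concatenate([h, lbp_hist])
```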
4 Experimental results

In this section, we conduct experiments on the physiognomy dataset and discuss the results. Face images are aligned according to the positions of the pupils. The input to the network shown in Fig. 5 is a fixed-size 224 × 224 × 3 face image, and the average face image computed on the training dataset is subtracted in advance.

To investigate whether the self-reported personality traits and measured intelligence can be evaluated accurately from facial features, we conduct classification and regression experiments respectively. Note that the physiognomy dataset is not very large, due to the expensive cost of gathering information on personality traits and intelligence; hence, an N-fold cross-validation scheme is used to evaluate the proposed method. The dataset is randomly divided into 10 mutually disjoint subsets. Nine subsets are used for training while the remaining subset is used for testing. In order to reduce overfitting, we use label-preserving transformations (transformation of the RGB channels and disturbance) to artificially enlarge the dataset 20 times. The experiments are conducted both with and without data augmentation, and the results are found to be close to each other. Only the results with data augmentation are reported here.
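As an illustration of this protocol, a minimal Python sketch follows; `images` is an assumed array of face images, and since the exact RGB transformation and disturbance are not specified in detail here, a simple random per-channel gain stands in for them:

```python
import numpy as np
from sklearn.model_selection import KFold

def augment(img, rng, copies=20):
    # Label-preserving enlargement: each image yields `copies` variants
    # via a small random RGB channel jitter (a stand-in for the paper's
    # "transformation of RGB channels and disturbance").
    out = []
    for _ in range(copies):
        jitter = 1.0 + 0.05 * rng.standard_normal(3)   # per-channel gain
        out.append(np.clip(img * jitter, 0, 255).astype(np.uint8))
    return out

rng = np.random.default_rng(0)
for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                 random_state=0).split(images):
    train_imgs = [v for i in train_idx for v in augment(images[i], rng)]
    # ... train on train_imgs, evaluate on images[test_idx] ...
```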
We also conduct comparative experiments to validate the effectiveness of the proposed model, using the network architecture shown in Fig. 6. The concatenation of the five descriptors is used as the input of that network.

4.1 Classification of traits and intelligence

Table 1 shows the classification results for all the personality traits and measured intelligence using different features. For the results using CNN features, the accuracy scores for “Rule-consciousness” and “Vigilance” far surpass chance levels: the accuracy for “Rule-consciousness” is higher than 82% and the accuracy for “Vigilance” is higher than 77%. These high accuracy scores suggest that the two personality traits may closely correlate with facial characteristics.

For measured intelligence, the classification accuracy only slightly exceeds the level of chance. Considering these near-chance predictions, we may conclude that predicting measured intelligence from face images is difficult, if not impossible.

Psychological researchers have conducted experiments among twins and found that approximately 50% of a human's personality traits are influenced by genetics. Some personality traits depend largely on genetic qualities, while others mainly depend on the social environment. Meanwhile, biological studies demonstrate that humans' facial characteristics are determined largely by genes. As a result, the traits that depend more on genetic factors may correlate closely with facial features and can be predicted more accurately from face images.

For the results using traditional features, principal component analysis (PCA) is used for dimension reduction, and the results of both the original features and the reduced ones are shown in Table 1. Fig. 7 shows the classification results of the above three types of features for all the traits and intelligence.

Table 1 and Fig. 7 show that the original traditional features and the reduced ones perform comparably. However, the classification accuracies based on CNN features exceed those based on traditional features. This shows that the neural network taking face images directly as input outperforms the network taking traditional features as input.
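The PCA reduction of the traditional features mentioned above can be sketched as follows; the retained dimensionality is not stated in the paper, so keeping 95% of the explained variance is purely an illustrative choice, and `train_features`/`test_features` are assumed arrays of the concatenated descriptors:

```python
from sklearn.decomposition import PCA

# Fit PCA on the training-fold features only, then project both folds.
pca = PCA(n_components=0.95).fit(train_features)
train_reduced = pca.transform(train_features)
test_reduced = pca.transform(test_features)
```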
4.2 Regression of traits and intelligence

In the regression experiments, we directly set the score of each personality trait and the percentile value of intelligence as the regression targets. The root mean square error (RMSE) in (1) is used to evaluate the performance of regression:

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} (X_i - Y_i)^2}{n}} \tag{1}$$

where $X_i$ is the self-reported score, $Y_i$ is the predicted score and $n$ is the number of samples.
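As a sanity check, (1) translates directly into code:

```python
import numpy as np

def rmse(x, y):
    # x: self-reported scores X_i, y: predicted scores Y_i, as in (1).
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.mean((x - y) ** 2))
```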
Table 1 Mean accuracy of the classification results for the 20 traits and intelligence. “1” indicates the results using CNN, “2”
indicates the results using traditional features, and “3” indicates the results using the reduced traditional features by PCA. Best
results are written in bold
Warm Reas Stab Domin Live Cons Soci Sens Vigil Abst Intell
1 55.62 57.87 67.98 64.61 73.03 82.02 55.06 53.93 77.53 53.37 57.30
2 52.81 55.62 64.04 58.99 65.17 79.08 46.57 44.94 72.84 47.75 52.81
3 50.99 51.69 56.74 59.55 67.42 79.21 49.44 48.31 71.91 45.51 52.25
Priv Appr Open Reli Perf Tens Adap Intro Impet Cowa
1 59.55 61.80 57.30 58.99 60.11 60.67 58.43 69.10 55.06 54.49
2 56.18 55.06 52.81 53.93 53.37 56.18 55.05 63.48 48.31 52.25
3 57.87 55.62 49.44 51.69 54.92 50.56 56.18 64.04 48.88 49.44
Table 2 RMSE of the regression results for the 20 traits and intelligence. “1” indicates the results using CNN, “2” indicates the
results using traditional features, and “3” indicates the results using the reduced traditional features by PCA. Best results are written
in bold
Warm Reas Stab Domin Live Cons Soci Sens Vigil Abst Intell
1 1.9914 1.5248 1.8029 1.5686 1.8986 1.4392 2.0371 1.4965 2.1141 1.6701 0.1944
2 2.1831 1.5927 1.8815 1.7101 2.0492 1.5613 2.1519 1.6219 2.2652 1.7647 0.2082
3 2.1213 1.8010 2.0171 1.7782 2.0755 1.6115 2.3609 1.7037 2.3452 1.9526 0.3432
Priv Appr Open Reli Perf Tens Adap Intro Impet Cowa
1 1.5470 1.8588 1.3610 1.5911 1.3681 1.4042 1.7329 2.1175 1.6227 1.5333
2 1.6756 1.9660 1.4382 1.6235 1.4454 1.5034 1.8393 2.2197 1.7667 1.6797
3 1.7149 1.9943 1.5544 1.8885 1.5541 1.6045 1.8613 2.3220 1.9460 1.7865
Fig. 7 Classification results using the CNN features and traditional features for all the personality traits as well as measured intelligence
Table 2 shows the performance of regression with respect to all the personality traits and measured intelligence using different features. The results using CNN features show that the errors for “Rule-consciousness”, “Openness”, “Perfectionism” and “Tension” are smaller than those for the other personality traits, while the errors for “Social boldness”, “Vigilance” and “Introverted or Extroverted” are larger. This indicates that little relationship exists between the latter three personality traits and face images. The fitting error for intelligence is somewhat high, which indicates that it is difficult to predict a person's intelligence score accurately from a face image.

People generally do not favor a clear-cut choice; e.g., people often remark that he or she is a bit controlling or somewhat sensitive. Because the physiognomy dataset is not very large, and most scores for the personality traits are around the median, the regression models trained on our dataset are more suitable for predicting median scores.

Fig. 8 shows the regression results of the above three types of features for all the personality traits and measured intelligence. Table 2 and Fig. 8 show that the results of the original traditional features and the reduced ones are comparable. However, the errors of the regression models based on CNN features are smaller than those based on traditional features. This shows that features directly extracted from face images using a CNN outperform the traditional handcrafted features.

4.3 Application of predicting personality traits from Internet images

In the above sections, we used a CNN to investigate whether personality traits and intelligence could be predicted from the human face. The regression and classification results show that certain personality traits can be predicted with high reliability. Therefore, we build a new dataset to further assess our experimental observations. As shown in Fig. 9, the dataset contains two groups of frontal face images with neutral expressions collected from the Internet. The two groups are entertainers and teachers, respectively. In order to investigate whether people with similar social behaviors share similar personality traits, the proposed model is used to conduct the experiment. The physiognomy dataset is used to train the network, and the newly constructed dataset is used for testing.

The experimental results show that samples in the same category share several similar personality traits. The prediction results for entertainers show that they get high scores in “Reasoning” and “Liveliness” and low scores in “Emotional stability”. They usually learn fast and react quickly, which agrees with the high scores in “Reasoning”. The results in “Emotional stability” and “Liveliness” indicate that they are emotionally less stable, animated and enthusiastic, which reflects the personality traits of the entertainers in the dataset. Teachers get high scores in “Liveliness” and “Extrovert” in the prediction results. They are lively, expressive and good at communicating with students, which agrees with the results.

The above analyses show that some correlation exists between the predicted personality traits and the real characters. Especially for people in the same group, the proposed model seems able to verify their general characters.
Fig. 8 Regression results using the CNN features and traditional features for all the personality traits as well as measured intelligence
Fig. 9 Two groups of face images. From top to bottom: entertainers and teachers.
[3] C. C. Ballew, A. Todorov. Predicting political elections from rapid and unreflective face judgments. Proceedings of the National Academy of Sciences of the United States of America, vol. 104, no. 46, pp. 17948–17953, 2007.

[4] A. C. Little, R. P. Burriss, B. C. Jones, S. C. Roberts. Facial appearance affects voting decisions. Evolution and Human Behavior, vol. 28, no. 1, pp. 18–27, 2007.

[5] I. V. Blair, C. M. Judd, K. M. Chapleau. The influence of afrocentric facial features in criminal sentencing. Psychological Science, vol. 15, no. 10, pp. 674–679, 2004.

[6] D. R. Carney, C. R. Colvin, J. A. Hall. A thin slice perspective on the accuracy of first impressions. Journal of Research in Personality, vol. 41, no. 5, pp. 1054–1072, 2007.

[7] R. S. S. Kramer, J. E. King, R. Ward. Identifying personality from the static, nonexpressive face in humans and chimpanzees: Evidence of a shared system for signaling personality. Evolution and Human Behavior, vol. 32, no. 3, pp. 179–185, 2011.

[8] Q. M. Rojas, D. Masip, A. Todorov, J. Vitrià. Automatic point-based facial trait judgments evaluation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, San Francisco, USA, pp. 2715–2720, 2010.

[9] Q. M. Rojas, D. Masip, A. Todorov, J. Vitrià. Automatic prediction of facial trait judgments: Appearance vs. structural models. PLoS One, vol. 6, no. 8, Article number e23323, 2011.

[10] K. Wolffhechel, J. Fagertun, U. P. Jacobsen, W. Majewski, A. S. Hemmingsen, C. L. Larsen, S. K. Lorentzen, H. Jarmer. Interpretation of appearance: The effect of facial features on first impressions and personality. PLoS One, vol. 9, no. 9, Article number e107721, 2014.

[11] K. Kleisner, V. Chvátalová, J. Flegr. Perceived intelligence is associated with measured intelligence in men but not women. PLoS One, vol. 9, no. 3, Article number e81237, 2014.

[12] J. Yosinski, J. Clune, Y. Bengio, H. Lipson. How transferable are features in deep neural networks? In Proceedings of the 27th International Conference on Neural Information Processing Systems, NIPS, Montréal, Canada, pp. 3320–3328, 2014.

[13] M. S. Long, Y. Cao, J. M. Wang, M. I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, JMLR, Lille, France, 2015.

[14] O. M. Parkhi, A. Vedaldi, A. Zisserman. Deep face recognition. In Proceedings of British Machine Vision Conference, Swansea, UK, vol. 41, pp. 1–12, 2015.

[15] G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, USA, 2007.

[16] Y. Taigman, M. Yang, M. A. Ranzato, L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Columbus, USA, pp. 1701–1708, 2014.

[17] Y. Sun, X. G. Wang, X. O. Tang. Deep learning face representation from predicting 10,000 classes. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Columbus, USA, pp. 1891–1898, 2014.

[18] Z. Y. Zhu, P. Luo, X. G. Wang, X. O. Tang. Deep learning identity-preserving face space. In Proceedings of IEEE International Conference on Computer Vision, IEEE, Sydney, Australia, pp. 113–120, 2013.

[19] Y. Sun, Y. H. Chen, X. G. Wang, X. O. Tang. Deep learning face representation by joint identification-verification. In Proceedings of the 27th International Conference on Neural Information Processing Systems, NIPS, Montréal, Canada, pp. 1988–1996, 2014.

[20] Y. Taigman, M. Yang, M. A. Ranzato, L. Wolf. Web-scale training for face identification. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, USA, pp. 2746–2754, 2015.

[21] T. Zhang, Q. L. Dong, Z. Y. Hu. Pursuing face identity from view-specific representation to view-invariant representation. In Proceedings of IEEE International Conference on Image Processing, IEEE, Phoenix, USA, pp. 3244–3248, 2016.

[22] B. Zhao, J. S. Feng, X. Wu, S. C. Yan. A survey on deep learning-based fine-grained object classification and semantic segmentation. International Journal of Automation and Computing, vol. 14, no. 2, pp. 1–17, 2017.

[23] N. Kumar, A. C. Berg, P. N. Belhumeur, S. K. Nayar. Attribute and simile classifiers for face verification. In Proceedings of IEEE International Conference on Computer Vision, IEEE, Kyoto, Japan, pp. 365–372, 2009.

[24] Y. Taigman, L. Wolf, T. Hassner. Multiple one-shots for utilizing class label information. In Proceedings of British Machine Vision Conference, London, UK, vol. 2, pp. 1–12, 2009.

[25] M. Guillaumin, J. Verbeek, C. Schmid. Is that you? Metric learning approaches for face identification. In Proceedings of IEEE International Conference on Computer Vision, IEEE, Kyoto, Japan, pp. 498–505, 2009.

[26] Q. Yin, X. O. Tang, J. Sun. An associate-predict model for face recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Colorado Springs, USA, pp. 497–504, 2011.

[27] C. Huang, S. H. Zhu, K. Yu. Large scale strongly supervised ensemble metric learning, with applications to face verification and retrieval. arXiv:1212.6094, 2012.

[28] D. Chen, X. D. Cao, L. W. Wang, F. Wen, J. Sun. Bayesian face revisited: A joint formulation. In Proceedings of the 12th European Conference on Computer Vision, Florence, Italy, pp. 566–579, 2012.

[29] T. Berg, P. N. Belhumeur. Tom-vs-Pete classifiers and identity-preserving alignment for face verification. In Proceedings of British Machine Vision Conference, Guildford, UK, vol. 129, pp. 1–11, 2012.

[30] D. Chen, X. D. Cao, F. Wen, J. Sun. Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Portland, USA, pp. 3025–3032, 2013.

[31] X. D. Cao, D. Wipf, F. Wen, G. Q. Duan, J. Sun. A practical transfer learning algorithm for face verification. In Proceedings of IEEE International Conference on Computer Vision, IEEE, Barcelona, Spain, pp. 3208–3215, 2013.

[32] F. K. Zaman, A. A. Shafie, Y. M. Mustafah. Robust face recognition against expressions and partial occlusions. International Journal of Automation and Computing, vol. 13, no. 4, pp. 319–337, 2016.
[33] N. N. Oosterhof, A. Todorov. The functional basis of face evaluation. Proceedings of the National Academy of Sciences of the United States of America, vol. 105, no. 32, pp. 11087–11092, 2008.

[34] S. Karson. A Guide to the Clinical Use of the 16 PF, Savoy, USA: Institute for Personality and Ability Testing, 1976.

[35] R. M. Kaplan, D. P. Saccuzzo. Psychological Testing: Principles, Applications, and Issues, Boston, USA: Wadsworth Publishing, 2012.

[36] R. Z. Qin, T. Zhang. Shape initialization without ground truth for face alignment. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Shanghai, China, pp. 1278–1282, 2016.

[37] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of ACM International Conference on Multimedia, ACM, Orlando, USA, 2014.

[38] N. Dalal, B. Triggs. Histograms of oriented gradients for human detection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, San Diego, USA, pp. 886–893, 2005.

[39] T. Ahonen, A. Hadid, M. Pietikainen. Face description with local binary patterns: Application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037–2041, 2006.

[40] C. J. Liu, H. Wechsler. Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition. IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467–476, 2002.

[41] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[42] A. Oliva, A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, vol. 42, no. 3, pp. 145–175, 2001.

Ting Zhang received the B. Sc. degree in communication engineering from Beijing Jiaotong University, China in 2013. She is currently a Ph. D. degree candidate in the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China.
Her research interests include deep learning and face recognition.
E-mail: [email protected]
ORCID iD: 0000-0001-9145-5913

Ri-Zhen Qin received the B. Sc. degree in automation from Xidian University, China in 2013, and the M. Sc. degree from the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China in 2016.
His research interests include machine learning and face recognition.
E-mail: [email protected]

Qiu-Lei Dong received the B. Sc. degree in automation from Northeastern University, China in 2003, and the Ph. D. degree from the Institute of Automation, Chinese Academy of Sciences, China in 2008. Currently, he is a professor in the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China.
His research interests include motion analysis, 3D computer vision and pattern classification.
E-mail: [email protected] (Corresponding author)
ORCID iD: 0000-0003-4015-1615

Wei Gao received the B. Sc. degree in computational mathematics and the M. Sc. degree in pattern recognition and intelligent systems from Shanxi University, China, and the Ph. D. degree in pattern recognition and intelligent systems from the Institute of Automation, Chinese Academy of Sciences, China in 2002, 2005 and 2008, respectively. In July 2008, he joined the Robot Vision Group of the National Laboratory of Pattern Recognition, where he is currently an associate professor.
His research interests include 3D reconstruction from images and SLAM technology.
E-mail: [email protected]

Hua-Rong Xu received the M. Sc. and Ph. D. degrees in computer science from Xiamen University, China in 2003 and 2011, respectively. He is now a professor at Xiamen Institute of Technology, China. He has worked on computer vision and pattern recognition.
His research interests include 3D computer vision and driverless navigation.
E-mail: [email protected]

Zhan-Yi Hu received the B. Sc. degree in automation from the North China University of Technology, China in 1985, and the Ph. D. degree in computer vision from the University of Liege, Belgium in 1993. Since 1993, he has been with the National Laboratory of Pattern Recognition at the Institute of Automation, Chinese Academy of Sciences, China. From 1997 to 1998, he was a visiting scientist with the Chinese University of Hong Kong, China. From 2001 to 2005, he was an executive panel member of the National High-Tech Research and Development Program (863 Program). From 2005 to 2010, he was a member of the Advisory Committee, National Natural Science Foundation of China. He is currently a research professor of computer vision, the deputy editor-in-chief of the Chinese Journal of CAD and CG, and an associate editor of Science China and the Journal of Computer Science and Technology. He was the Organization Committee Co-Chair of ICCV 2005, and the Program Co-Chair of ACCV 2012.
His research interests include biology-inspired vision and large scale 3D reconstruction from images.
E-mail: [email protected]