
Personal Verification using Palmprint and Hand Geometry Biometric

Ajay Kumar1, David C. M. Wong1, Helen C. Shen1, Anil K. Jain2


1 Department of Computer Science, Hong Kong University of Science and Technology,
Clear Water Bay, Hong Kong.
{ajaykr, csdavid, helens}@cs.ust.hk
2 Pattern Recognition and Image Processing Lab, Department of Computer Science and
Engineering, Michigan State University, East Lansing, MI 48824.
[email protected]

Abstract. A new approach for personal identification using hand images is presented. This paper attempts to improve the performance of a palmprint-based verification system by integrating hand geometry features. Unlike other bimodal biometric systems, the users do not have to undergo the inconvenience of using two different sensors, since the palmprint and hand geometry features can be acquired from the same image, using a digital camera, at the same time. Each of these gray level images is aligned and then used to extract palmprint and hand geometry features. These features are then examined for their individual and combined performance. The image acquisition setup used in this work is inherently simple; it does not employ any special illumination, nor does it use any pegs that could inconvenience users. Our experimental results on an image dataset from 100 users confirm the utility of combining hand geometry features with those from palmprints using a simple image acquisition setup.

1 Introduction
Reliable personal authentication is key to security in the networked society. Many physiological characteristics of humans, i.e., biometrics, are typically time invariant, easy to acquire, and unique to every individual. Biometric features such as face, iris, fingerprint, hand geometry, palmprint, signature, etc. have been suggested for security in access control. Most of the current research in biometrics has focused on fingerprint and face [1]. The reliability of personal identification using the face is currently low, as researchers today continue to grapple with the problems of pose, lighting, orientation, and gesture [2]. Fingerprint identification is widely used in personal identification as it works well in most cases. However, it is difficult to acquire fingerprint features, i.e., minutiae, for some classes of persons such as manual laborers, elderly people, etc. As a result, other biometric characteristics are receiving increasing attention. Moreover, additional biometric features, such as palmprints, can easily be integrated with an existing authentication system to provide an enhanced level of confidence in personal authentication.

1.1 Prior work

Two kinds of biometric indicators can be extracted from low-resolution† hand images: (i) palmprint features, which are composed of principal lines, wrinkles, minutiae, delta points, etc., and (ii) hand geometry features, which include the area/size of the palm and the length and width of the fingers. The problem of personal verification using palmprint features has drawn considerable attention, and researchers have proposed various methods [3]-[15]. One popular approach considers palmprints as textured images which are unique to every individual. Therefore, analysis of palmprint images using Gabor filters [3], wavelets [4], the Fourier transform [5], and local texture energy [6] has been proposed in the literature. As compared to fingerprints, palmprints have a large number of creases. Wu et al. [7] have characterized these creases by directional line energy features and used them for palmprint identification. The endpoints of some prominent principal lines, i.e., the heart-line, head-line, and life-line, are rotation invariant. Some authors [8]-[9] have used these endpoints and midpoints for the registration of geometrical and structural features of principal lines for palmprint matching. Duta et al. [10] have suggested that the connectivity of extracted palm lines is not important. Therefore, they have used a set of feature points along the prominent palm lines, instead of the extracted palm lines as in [9], to generate the matching score for palmprint authentication. The palmprint pattern also contains ridges and minutiae, similar to a fingerprint pattern. However, in palmprints the creases and ridges often overlap and cross each other. Therefore, Funada et al. [11] have suggested the extraction of local palmprint features, i.e., ridges, by eliminating the creases. However, this work [11] is limited to the extraction of ridges and does not go on to demonstrate the usefulness of these extracted ridges in the identification of palmprints. Chen et al. [12] have attempted to estimate palmprint crease points by generating a local gray level directional map. These crease points are connected together to isolate the creases in the form of line segments, which are used in the matching process. No details are provided in [12] to suggest the robustness of these partially extracted creases for the matching of palmprints. Some related work on palmprint verification also appears in [13] and [14]. A recent paper by Han et al. [15] uses morphological and Sobel edge features to characterize palmprints and trains a neural network classifier for their verification.

† High resolution hand images, of the order of 500 dpi, can also be used to extract fingerprint features. However, a database of such images would impose large storage and computational requirements.
The palmprint authentication methods in [5]-[12] utilize inked palmprint images, while the recent work in [4] and [15] has shown the utility of inkless palmprint images acquired from a digital scanner. Some promising results on palmprint images acquired from image acquisition systems using a CCD-based digital camera appear in [3] and [13].
The US patent office has issued several patents [16]-[19] for devices that measure hand geometry features for personal verification. Some related work using low-resolution digital hand images appears in [20] and [21]. These authors have used fixation pegs to restrict the hand movement and shown promising results. However, the results in [20]-[21] may be biased by the small size of the database, and an imposter can easily violate the integrity of the system by using a fake hand [22].

1.2 Proposed system

The palmprint and hand geometry features can be extracted from a single hand image in one shot. Unlike other multibiometric systems (e.g., face and fingerprint [23], voice and face [24], etc.), a user does not have to undergo the inconvenience of passing through multiple sensors. Furthermore, the fraud associated with a fake hand in a hand geometry based verification system can be alleviated by the integration of palmprint features. This paper presents a new method of personal authentication using palmprint and hand geometry features that are simultaneously acquired from a single hand image. The block diagram of the proposed verification system is shown in figure 1.

Figure 1: Block diagram of the personal verification system using palmprint and hand geometry.

Figure 2: Acquisition of a typical image sample using a digital camera.

Hand images of every user are used to automatically extract the palmprint and hand geometry features. This is achieved by first thresholding the images acquired from the digital camera. The resultant binary image is used to estimate the orientation of the hand, since in the absence of pegs users do not necessarily align their hands in a preferred direction. The rotated binary image is used to compute hand geometry features. This image also serves to estimate the center of the palmprint from the residue of morphological erosion with a known structuring element (SE). This center point is used to extract a palmprint image of a fixed size from the rotated gray level hand image. Each of these palmprint images is used to extract salient features. Thus the palmprint and hand geometry features of an individual are obtained from the same hand image. Two schemes for the fusion of features, fusion at the decision level and fusion at the representation level, were considered. The decision level fusion gave better results, as detailed in section 5.

2 Image Acquisition & Alignment

Our image acquisition setup is inherently simple and does not employ any special illumination (as in [3]), nor does it use any pegs that could inconvenience users (as in [20]). An Olympus C-3020 digital camera (1280 × 960 pixels) was used to acquire the hand images, as shown in figure 2. The users were only requested to make sure that (i) their fingers do not touch each other and (ii) most of the back side of their hand touches the imaging table.

2.1 Extraction of hand geometry images

Each of the acquired images needs to be aligned in a preferred direction so as to capture the same features for matching. An image thresholding operation is used to obtain a binary hand-shape image. The threshold value is automatically computed using Otsu's method [25]. Since the image background is stable (black), the threshold value can be computed once and used subsequently for other images. The binarized shape of the hand can be approximated by an ellipse. The parameters of the best-fitting ellipse for a given binary hand shape are computed using moments [26]. The orientation of the binarized hand image is approximated by the major axis of the ellipse, and the required angle of rotation is the difference between the normal and the orientation of the image. As shown in figure 3, the binarized image is rotated and used for computing the hand geometry features. The estimated orientation of the binarized image is also used to rotate the gray-level hand image, from which the palmprint image is extracted as detailed in the next subsection.
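For illustration, the alignment pipeline above can be prototyped in a few lines of Python with OpenCV. This is a minimal sketch only: the function name align_hand and the choice of a vertical canonical direction are our assumptions, not specifications from the paper.

```python
# Sketch of the alignment step: Otsu binarization, moment-based
# orientation estimate, and rotation to an (assumed) vertical direction.
import cv2
import numpy as np

def align_hand(gray):
    """Binarize a gray-level hand image and rotate it upright."""
    # Otsu's method selects the threshold automatically [25].
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Second-order central moments give the major axis of the
    # best-fitting ellipse [26]; its angle approximates the orientation.
    m = cv2.moments(binary, binaryImage=True)
    theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    angle_deg = np.degrees(theta)

    # Rotate both the binary and gray images by the difference between
    # the canonical direction and the estimated orientation.
    h, w = gray.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg - 90, 1.0)
    binary_rot = cv2.warpAffine(binary, rot, (w, h))
    gray_rot = cv2.warpAffine(gray, rot, (w, h))
    return binary_rot, gray_rot
```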

Figure 3: Extraction of the two biometric modalities from the hand image: (a) captured image from the digital camera, (b) binarized image and ellipse fitting to compute the orientation, (c) binary image after rotation, (d) gray scale image after rotation, (e) ROI, i.e., palmprint, extracted from the center of the image in (c) after erosion.

2.2 Extraction of palmprint images

Every binarized hand-shape image is subjected to morphological erosion, with a known binary SE, to compute the region of interest, i.e., the palmprint. Let R be the set of non-zero pixels in a given binary image and SE be the set of non-zero pixels of the structuring element. The morphological erosion is defined as

$$ R \ominus SE = \{\, g : SE_g \subseteq R \,\}, \quad (1) $$

where $SE_g$ denotes the structuring element with its reference point shifted by g pixels. A square structuring element (SE) is used to probe the composite binarized image. The center of the binary hand image after erosion, i.e., the center of the rectangle that encloses the residue, is determined. These center coordinates are used to extract a square palmprint region of fixed size, as shown in figure 3.
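A minimal sketch of this ROI localization follows, assuming a 101 × 101 square SE (the paper does not give the SE size) and the 300 × 300 crop used in section 5.

```python
# Illustrative sketch of eq. (1): erode the binary hand shape with a
# square SE and crop a fixed-size palmprint around the center of the
# eroded residue. SE size is an assumption; crop size follows section 5.
import cv2
import numpy as np

def extract_palmprint(binary_rot, gray_rot, se_size=101, roi=300):
    se = np.ones((se_size, se_size), np.uint8)   # square structuring element
    residue = cv2.erode(binary_rot, se)          # R (-) SE, eq. (1)

    # Center of the bounding rectangle that encloses the residue.
    ys, xs = np.nonzero(residue)
    cy = (ys.min() + ys.max()) // 2
    cx = (xs.min() + xs.max()) // 2

    half = roi // 2                              # fixed-size square ROI
    return gray_rot[cy - half:cy + half, cx - half:cx + half]
```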
2.3 Normalization of palmprints

The extracted palmprint images are normalized to have a pre-specified mean and variance. The normalization is used to reduce possible imperfections in the image due to sensor noise and non-uniform illumination. The normalization method employed in this work is the same as that suggested in [27] and is sufficient for the quality of the images acquired in our experiments. Let the gray level at (x, y) in a palmprint image be represented by I(x, y). The mean and variance of the image, φ and ρ respectively, can be computed from the gray levels of the pixels. The normalized image I′(x, y) is computed using the pixel-wise operation:

$$ I'(x,y) = \begin{cases} \phi_d + \lambda & \text{if } I(x,y) > \phi \\ \phi_d - \lambda & \text{otherwise} \end{cases}, \qquad \lambda = \sqrt{\frac{\rho_d \,\{I(x,y) - \phi\}^2}{\rho}} \quad (2) $$

where φ_d and ρ_d are the desired values for the mean and variance, respectively. These values are pre-tuned according to the image characteristics, i.e., I(x, y). In all our experiments, the values of φ_d and ρ_d were fixed at 100. Figures 4(a)-(b) show a typical palmprint image before and after the normalization.
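A minimal sketch of eq. (2), with φ_d = ρ_d = 100 as in the text (the function name is ours):

```python
# Sketch of the mean/variance normalization in eq. (2), as in [27].
import numpy as np

def normalize_palmprint(img, mean_d=100.0, var_d=100.0):
    img = img.astype(np.float64)
    mean, var = img.mean(), img.var()
    # lambda in eq. (2); the small guard avoids division by zero on
    # a constant image (a degenerate case the paper need not handle).
    lam = np.sqrt(var_d * (img - mean) ** 2 / max(var, 1e-12))
    return np.where(img > mean, mean_d + lam, mean_d - lam)
```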

Figure 4: Palmprint feature extraction; (a) segmented image, (b) image after normalization, filtered images with directional mask at orientation 0° in (c), 90° in (d), 45° in (e), 135° in (f), (g) image after voting, and (h) features extracted from each of the overlapping blocks.

3 Feature Extraction
3.1 Extraction of palmprint features

The palmprint pattern is mainly made up of palm lines, i.e., principal lines and creases. Line feature matching [8], [15] is reported to be powerful and to offer high accuracy in palmprint verification. However, it is very difficult to accurately characterize these palm lines, i.e., their magnitude and direction, in noisy images. Therefore, a robust but simple method is used here.
Line features from the normalized palmprint images are detected using four line detectors [28], or directional masks. These masks detect lines oriented at 0° (h₁), 45° (h₂), 90° (h₃), and 135° (h₄). The spatial extent of these masks was empirically fixed at 9 × 9. Each of these masks is used to filter I′(x, y) as follows:

$$ I_1(x,y) = h_1 * I'(x,y) \quad (3) $$

where '*' denotes the discrete 2D convolution. The four filtered images, I₁(x, y), I₂(x, y), I₃(x, y), and I₄(x, y), are used to generate a final image I_f(x, y):

$$ I_f(x,y) = \max \{\, I_1(x,y),\ I_2(x,y),\ I_3(x,y),\ I_4(x,y) \,\} \quad (4) $$

The resultant image represents the combined directional map of the palmprint I(x, y). This image I_f(x, y) is characterized by a set of localized features, i.e., standard deviations, and used for verification. I_f(x, y) is divided into a set of n overlapping blocks, and the standard deviation of the gray levels in each of these blocks is used to form the feature vector

$$ v_{palm} = \{\, \sigma_1, \sigma_2, \ldots, \sigma_n \,\}, \quad (5) $$

where σ₁ is the standard deviation of the first overlapping block (figure 4(h)).
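A sketch of eqs. (3)-(5) follows. The 3 × 3 kernels below are the classic directional line detectors described in [28]; the paper's 9 × 9 versions are not reproduced there, so these serve only as stand-ins, and the block size and stride are left as parameters since the exact tiling that yields 144 blocks (section 5) is not fully specified.

```python
# Sketch of palmprint feature extraction: four directional line
# detectors, pixel-wise maximum ("voting", eq. (4)), and per-block
# standard deviations (eq. (5)). Kernels are 3x3 placeholders for the
# paper's 9x9 masks.
import numpy as np
from scipy.ndimage import convolve

H0   = np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float)  # 0 deg
H45  = np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float)  # 45 deg
H90  = H0.T                                                         # 90 deg
H135 = np.fliplr(H45)                                               # 135 deg

def palmprint_features(img, block=24, stride=18):
    # Eqs. (3)-(4): four filtered images, combined by pixel-wise max.
    responses = [convolve(img, h) for h in (H0, H45, H90, H135)]
    If = np.max(responses, axis=0)

    # Eq. (5): standard deviation of each overlapping block.
    feats = []
    for y in range(0, If.shape[0] - block + 1, stride):
        for x in range(0, If.shape[1] - block + 1, stride):
            feats.append(If[y:y + block, x:x + block].std())
    return np.array(feats)
```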

3.2 Extraction of hand geometry features

The binary image‡, as shown in figure 3(c), is used to compute significant hand geometry features. A total of 16 hand geometry features were used (figure 5): 4 finger lengths, 8 finger widths (2 widths per finger), palm width, palm length, hand area, and hand length. Thus, the hand geometry of every hand image is characterized by a feature vector v_hg of length 1 × 16.
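Computing all 16 measurements requires locating finger tips and valleys, which the paper does not spell out. The fragment below, with names and heuristics of our own invention, only illustrates three of the simpler features taken from the rotated binary mask.

```python
# Illustrative computation of a subset of the geometry features (hand
# area, hand length, palm width). Per-finger lengths and widths, which
# need tip/valley localization, are omitted from this sketch.
import numpy as np

def basic_geometry_features(binary_rot):
    mask = binary_rot > 0
    hand_area = mask.sum()                    # pixels inside the hand

    ys, xs = np.nonzero(mask)
    hand_length = ys.max() - ys.min()         # extent along the major axis

    # Palm width: widest row in the lower half of the hand (heuristic).
    lower = mask[(ys.min() + ys.max()) // 2:, :]
    palm_width = lower.sum(axis=1).max()

    return np.array([hand_area, hand_length, palm_width])
```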

Figure 5: Hand Geometry Feature Extraction.

4 Information Fusion and Matching Criterion

Multiple pieces of evidence can be combined by a number of information fusion strategies proposed in the literature [29]-[31]. In the context of biometrics, three levels of information fusion have been suggested: (i) fusion at the representation level, where the feature vectors of multiple biometrics are concatenated to form a combined feature vector, (ii) fusion at the decision level, where the matching scores of the individual biometric systems are combined to generate a final decision score, and (iii) fusion at the abstract level, where multiple decisions from multiple biometric systems are consolidated [31]. The first two fusion schemes are more relevant for a bimodal biometric system and were considered
in this work. The similarity measure between v₁ (the feature vector from the user) and v₂ (the feature vector of the stored identity as claimed) is used as the matching score and is computed as

$$ \alpha = \frac{\sum v_1 v_2}{\sqrt{\sum v_1^2 \, \sum v_2^2}} \quad (6) $$

The similarity measure defined in the above equation computes the normalized correlation between the feature vectors v₁ and v₂. During verification, a user is required to indicate his/her identity. If the matching score in eq. (6) is less than a prespecified threshold, the user is rejected as an imposter; otherwise he/she is accepted as genuine.

‡ This work uses the palm side of the hand images to compute hand geometry features, while prior work [20]-[21] uses the other side of the hand.
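The matching and fusion steps can be sketched compactly. The thresholds below are the operating points reported in table 1; the function names are ours, and this is an illustration rather than the paper's implementation.

```python
# Sketch of the matching score in eq. (6) and the two fusion schemes:
# representation-level fusion by concatenation, and decision-level
# fusion with the max rule.
import numpy as np

def match_score(v1, v2):
    # Eq. (6): normalized correlation between query and stored vectors.
    return np.dot(v1, v2) / np.sqrt(np.dot(v1, v1) * np.dot(v2, v2))

def verify_decision_fusion(palm_q, palm_t, geom_q, geom_t, thr=0.9840):
    # Max rule: keep the higher of the two modality scores.
    score = max(match_score(palm_q, palm_t), match_score(geom_q, geom_t))
    return score >= thr              # True -> genuine, False -> imposter

def verify_representation_fusion(palm_q, palm_t, geom_q, geom_t, thr=0.9869):
    # Concatenate the 1x144 palmprint and 1x16 geometry vectors first.
    q = np.concatenate([palm_q, geom_q])
    t = np.concatenate([palm_t, geom_t])
    return match_score(q, t) >= thr
```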

5 Experiments and Results

The experiments reported in this paper utilize inkless hand images obtained from a digital camera, as discussed in section 2. We collected 1,000 hand images, 10 samples per user, from 100 users. The first five images from each user were used for training and the rest were used for testing. The palmprint images, of size 300 × 300 pixels, were automatically extracted as described in section 2.2. Each palmprint image was divided into 144 overlapping blocks of size 24 × 24 pixels, with an overlap of 6 pixels (25%). Thus a 1 × 144 feature vector was obtained from every palmprint image. Figure 6 shows the distribution of imposter and genuine matching scores using palmprint and hand geometry features. The receiver operating characteristic curves for three distinct cases, (i) hand geometry alone, (ii) palmprint alone, and (iii) decision level fusion with the max rule, i.e., the higher of the similarity measures from hand geometry and palmprint, are shown in figure 7.

Figure 6: Distribution of genuine and imposter scores from the two biometrics.
Figure 7: Comparative performance of palmprint and hand geometry features (on 500 images) using decision level fusion.

Some users failed to touch their palm/fingers to the imaging board. It was difficult to use such images, mainly due to the change in scale, and these images were marked as of poor quality. A total of 28 such images were identified and removed. The FAR and FRR scores for the 472 test images, using total minimum error as the criterion, i.e., the decision threshold at which the sum of FAR and FRR is minimum, are shown in table 1. The comparative performance of the two fusion schemes is displayed in figure 8. The cumulative distribution of combined matching scores for the two classes, using decision level fusion (max rule), is shown in figure 9.

Table 1. Performance scores for total minimum error on 472 test images

                             FAR      FRR      Decision Threshold
Palmprint                    4.49 %   2.04 %   0.9830
Hand Geometry                5.29 %   8.34 %   0.9314
Fusion at Representation     5.08 %   2.25 %   0.9869
Fusion at Decision           0 %      1.41 %   0.9840
Figure 8: Comparative performance of the two fusion schemes on 472 test images.
Figure 9: Distribution of the two classes of similarity scores for 472 test images.

6 Conclusions
The objective of this work was to investigate the integration of palmprint and hand geometry features, in order to achieve a higher performance that may not be possible with a single biometric indicator alone. The results obtained from 100 users (figure 6) demonstrate that this is indeed the case. These results should be interpreted in the context of a rather simple image acquisition setup; further improvement in performance, under a controlled illumination/environment, can reasonably be expected. The achieved results are significant since the two biometric traits were derived from the same image, unlike other bimodal biometric systems, which require two different sensors/images. Our results also show that the decision level fusion scheme, with the max rule, achieves better performance than fusion at the representation level.

7 Acknowledgments
This research work is partially supported by Hong Kong UGC grant to HKUST in
Emerging High Impact Areas (HIA 98199.EG01) and a Sino Software Research Institute
grant at HKUST (SSRI 01/02.EG12).

References
1. A. K. Jain, R. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic, 1999.
2. M.-H. Yang, D. J. Kriegman, and N. Ahuja, “Detecting faces in images: A Survey,” IEEE Trans.
Patt. Anal. Machine Intell., vol. 24, pp. 34-58, Jan. 2002.
3. W. K. Kong and D. Zhang, "Palmprint texture analysis based on low-resolution images for personal authentication," Proc. ICPR-2002, Quebec City (Canada).
4. A. Kumar and H. C. Shen, "Recognition of palmprints using wavelet-based features," Proc. Intl. Conf. Sys., Cybern., SCI-2002, Orlando, Florida, Jul. 2002.
5. W. Li, D. Zhang, and Z. Xu, "Palmprint identification by Fourier transform," Int. J. Patt. Recognit.
Art. Intell., vol. 16, no. 4, pp. 417-432, 2002.
6. J. You, W. Li, and D. Zhang, "Hierarchical palmprint identification via multiple feature extraction," Pattern Recognition, vol. 35, pp. 847-859, 2002.
7. X. Wu, K. Wang, and D. Zhang, "Fuzzy directional energy element based palmprint
identification," Proc. ICPR-2002, Quebec City (Canada).
8. W. Shu and D. Zhang, “Automated personal identification by palmprint,” Opt. Eng., vol. 37, no. 8,
pp. 2359-2362, Aug. 1998.
9. D. Zhang and W. Shu, “Two novel characteristics in palmprint verification: datum point invariance
and line feature matching,” Pattern Recognition, vol. 32, no. 4, pp. 691-702, Apr. 1999.
10. N. Duta, A. K. Jain, and K. V. Mardia, "Matching of palmprints," Pattern Recognition Lett., vol. 23, no. 4, pp. 477-485, Feb. 2002.
11. J. Funada, N. Ohta, M. Mizoguchi, T. Temma, K. Nakanishi, A. Murai, T. Sugiuchi, T. Wakabayashi, and Y. Yamada, "Feature extraction method for palmprint considering elimination of creases," Proc. 14th Intl. Conf. Pattern Recognition, vol. 2, pp. 1849-1854, Aug. 1998.
12. J. Chen, C. Zhang, and G. Rong, “Palmprint recognition using crease,” Proc. Intl. Conf. Image
Process., pp. 234-237, Oct. 2001.
13. D. G. Joshi, Y. V. Rao, S. Kar, and V. Kumar, "Computer vision based approach to personal identification using finger crease pattern," Pattern Recognition, vol. 31, no. 1, pp. 15-22, 1998.
14. S. Y. Kung, S. H. Lin, and M. Fang, "A neural network based approach to face/palm recognition," Proc. Intl. Conf. Neural Networks, pp. 323-332, 1995.
15. C.-C. Han, H.-L. Cheng, C.-L. Lin, and K.-C. Fan, "Personal authentication using palmprint features," Pattern Recognition, vol. 36, pp. 371-381, 2003.
16. D. P. Sidlauskas, "3D hand profile identification apparatus," U. S. Patent No. 4736203, 1988.
17. I. H. Jacoby, A. J. Giordano, and W. H. Fioretti, "Personal identification apparatus," U. S. Patent
No. 3648240, 1972.
18. R. P. Miller, "Finger dimension comparison identification system," U. S. Patent No. 3576538,
1971.
19. R. H. Ernst, "Hand ID system," U. S. Patent No. 3576537, 1971.
20. R. Sanchez-Reillo, C. Sanchez-Avila, and A. Gonzales-Marcos, "Biometric identification through hand geometry measurements," IEEE Trans. Patt. Anal. Machine Intell., vol. 22, no. 10, pp. 1168-1171, 2000.
21. A. K. Jain, A. Ross, and S. Pankanti, "A prototype hand geometry based verification system," Proc. 2nd Intl. Conf. Audio Video based Biometric Personal Authentication, Washington D. C., pp. 166-171, Mar. 1999.
22. B. Miller, "Vital signs of identity," IEEE Spectrum, vol. 32, no. 2, pp. 22-30, 1994.
23. L. Hong and A. K. Jain, “Integrating face and fingerprint for personal identification,” IEEE Trans.
Patt. Anal. Machine Intell., vol. 20, pp. 1295-1307, Dec. 1998.
24. S. Ben-Yacoub, Y. Abdeljaoued, and E. Mayoraz, "Fusion of face and speech data for person
identity verification," IEEE Trans. Neural Networks, vol. 10, pp. 1065-1074, 1999.
25. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst., Man, Cybern., vol. 9, pp. 62-66, 1979.
26. S. Baskan, M. M. Bulut, and V. Atalay, "Projection based method for segmentation of human face and its evaluation," Pattern Recognition Lett., vol. 23, pp. 1623-1629, 2002.
27. L. Hong, Y. Wan, and A. K. Jain, "Fingerprint image enhancement: Algorithm and performance evaluation," IEEE Trans. Patt. Anal. Machine Intell., vol. 20, pp. 777-789, Aug. 1998.
28. J. R. Parker, Algorithms for Image Processing and Computer Vision, John Wiley & Sons, 1997.
29. J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, “On combining classifiers," IEEE Trans. Patt.
Anal. Machine Intell., vol. 20, pp. 226-239, Mar. 1998.
30. S. Prabhakar and A. K. Jain, "Decision level fusion in fingerprint verification," Pattern Recognition, vol. 35, pp. 861-874, 2002.
31. A. Ross, A. K. Jain, and J.-Z. Qian, "Information fusion in biometrics," Proc. AVBPA'01, Halmstad, Sweden, pp. 354-359, Jun. 2001.
