Enhanced Iris Recognition via SBR Technique
Abstract—Iris based authentication is a pattern recognition technique that makes use of Iris patterns, which are analytically unique. In this paper, we propose a novel approach for an enhanced Iris Recognition (IR) system using Segmentation based Background Removal (SBR) and Triangular shaped DCT (TriDCT) extraction techniques. Segmentation is the process of isolating the objects of interest from the rest of the scene. SBR is used to extract the prominent Iris portion from the eye image using the Circular Hough Transform (CHT). TriDCT helps in extracting a reduced feature vector. A Binary Particle Swarm Optimization (BPSO) based feature selection algorithm is used to search the feature space for the optimal feature subset. Experiments performed on the MMU and IITD iris databases show a significant increase in recognition rate, and the results justify the effectiveness of the proposed technique.

Keywords- Iris Recognition, Segmentation, DCT, Circular Hough Transform, Binary Particle Swarm Optimization

I. INTRODUCTION

Biometric systems are inherently pattern recognition applications, performing identification using biometric features obtained from human characteristics. For personal recognition, biometrics provides a greater degree of security and authentication than traditional methods. Widely studied biometric measurements include fingerprint, facial expression, Iris, palm patterns, handwriting, signature, etc. Among all employed biometric traits, the Iris is one of the most promising in terms of speed, accuracy and robustness. The Iris is the annular region of the eye, bounded by the pupil and the sclera (the white part of the eye). Iris recognition has attracted the interest of researchers due to its distinct nature, better stability and higher recognition rate [1].

Most work on identification and verification using Iris patterns started in the 1990s. The first Iris recognition algorithm, developed by John Daugman [2] using the integro-differential operator, laid the foundation for further advancement of IR systems as an application of pattern recognition. Wildes [3] constructed a gradient-based binary edge map for Iris segmentation, followed by the Circular Hough Transform. Du et al. [4] proposed an Iris detection method based on prior pupil segmentation. Mira and Mayer [5] made use of morphological operators to obtain the iris borders by applying thresholding, image opening and closing. A comparison of different segmentation techniques is given in Ref. [6].

II. PROBLEM DEFINITION AND CONTRIBUTIONS

The basic problem of any Iris recognition system is to locate the Iris portion precisely. Phenomena like variation in illumination, occlusion, noisy background and non-Iris regions such as eyelids and sclera hamper the process of recognition and identification. The obtained Iris region gives the Region Of Interest (ROI), which in turn provides better Iris features with higher representability. This paper introduces a novel technique for segmentation of the Iris region using the Circular Hough Transform to obtain the required ROI.

A. Segmentation based Background Removal (SBR)

The background removal process starts with identification of the circular regions in a given image; the most prominent circular regions correspond to the pupil and the Iris. The objective of iris localization is to segment the Iris portion from a given eye image. To perform these tasks, traditional methods like the integro-differential operator [7], the Circular Hough Transform [8] or their variants are widely used. In this paper, the Circular Hough Transform (CHT) is applied to identify the circular regions. For more accurate and reliable results, a 2-D Discrete Wavelet Transform (DWT) is performed before applying CHT in order to locate the coordinates of the circumference and the center precisely. Further, an extrapolated Iris mask is created to extract the required Iris portion from the image.

B. Triangular shaped DCT (TriDCT) extraction

The ability of the DCT to compact the energy of an image into a few coefficients has given a new dimension to its application in pattern recognition. The coefficients obtained on applying the DCT to an image define the feature vector for that image. In this paper, we investigate the extraction of reduced feature subsets using a triangular region of the DCT spectrum.
Fig. 1: Block diagram of Iris recognition system (training phase: SBR technique, Triangular DCT based extraction, feature selection using BPSO, feature gallery; testing phase: SBR technique, Triangular DCT based extraction, feature selection using BPSO with the trained gbest, Euclidean distance classifier, identified test image).

III. FUNDAMENTAL CONCEPTS

Fig. 1 shows the basic block diagram of the proposed Iris recognition system.

A. Circular Hough Transform (CHT)

The Hough Transform is one of the most common computer vision algorithms for finding the parameters of basic geometric objects (lines, circles) present in an image. The Hough Transform (HT) was first introduced by Hough [9] to locate particle tracks in bubble chamber imagery. It gives robust detection and identification under noise and partial occlusion. HT algorithms determine the parameters of simple geometric objects well when their parametric equation is known. Unlike the linear HT, the Circular Hough Transform (CHT) locates and identifies a circle characterized by center (α, β) and radius r present in an image. The circle equation is given by Eq. (1):

(x − α)² + (y − β)² = r²    (1)

Mathematically, Eq. (2) represents the parametric form of the circle:

x = α + r × cos(θ),  y = β + r × sin(θ)    (2)

In the first step, an edge map is obtained from the first derivatives of the intensity values. Let the obtained edge points be expressed as (x_k, y_k), where k = 1, 2, ..., n. Considering each edge point (x_k, y_k), a circle C with radius r is drawn with (x_k, y_k) as the center, so each edge point defines a set of circles in the accumulator space. Consider an arbitrary point p = (x_c, y_c) on C; the circle centered on (x_c, y_c) with radius r must then pass through (x_k, y_k). The Hough transform H can be written as:

H(x_c, y_c, r) = Σ_{k=1}^{n} h(x_k, y_k, x_c, y_c, r),
where h(x_k, y_k, x_c, y_c, r) = 1 if g(x_k, y_k, x_c, y_c, r) = 0 and 0 otherwise,
and the circle C is defined by g(x_k, y_k, x_c, y_c, r) = (x_k − x_c)² + (y_k − y_c)² − r².    (3)

An accumulator is an array used to detect the existence of the circle in the Circular Hough Transform. From the edge map, a voting process is carried out in the Hough space for the parameters of the circles passing through each edge point. The strong edge points are used in a voting procedure over the Hough transform H in order to locate the proper circular boundaries; in the voting process each feature point votes for a set of points in the parameter space [10]. The parameters (x_c, y_c, r) corresponding to the maximum point in the Hough space give the center coordinates and radius of the detected circle. A detailed example of applying CHT to find the circular pupil region in an eye image is shown in Fig. 2.

Fig. 2: Example of CHT applied to an eye image: original image, gradient image, accumulator array, accumulator array after filtering, and detected circle.
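As an illustration of the accumulator voting described by Eqs. (1)-(3), the following Python/NumPy sketch builds the accumulator for a single, known radius and reads off the most-voted center. The paper's implementation is in MATLAB; the function name, the toy edge points and the image size here are illustrative assumptions only.

```python
import numpy as np

def circular_hough_accumulator(edge_points, radius, height, width):
    """Accumulator voting for one fixed radius, following Eqs. (1)-(3):
    every edge point (xk, yk) votes for all candidate centers (xc, yc)
    lying on a circle of the given radius around it."""
    acc = np.zeros((height, width), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for (xk, yk) in edge_points:
        # Candidate centers obtained from the parametric form of Eq. (2).
        xc = np.round(xk - radius * np.cos(thetas)).astype(int)
        yc = np.round(yk - radius * np.sin(thetas)).astype(int)
        valid = (xc >= 0) & (xc < width) & (yc >= 0) & (yc < height)
        np.add.at(acc, (yc[valid], xc[valid]), 1)   # cast one vote per candidate
    return acc

# Toy usage: four edge points lying on a circle of radius 4 centered at (40, 34).
edge_points = [(40, 30), (44, 34), (36, 34), (40, 38)]
acc = circular_hough_accumulator(edge_points, radius=4, height=60, width=80)
yc, xc = np.unravel_index(np.argmax(acc), acc.shape)
print("most-voted center:", (xc, yc), "votes:", int(acc[yc, xc]))
```

In practice the radius is also swept over a range, giving the full 3-D accumulator H(x_c, y_c, r) of Eq. (3); the peak over all three parameters then yields the detected circle.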
B. Feature Selection using BPSO

Feature selection refers to choosing an optimum subset of features so as to reduce the number of features without degrading the performance of the system. This paper utilizes the Particle Swarm Optimization (PSO) algorithm to implement feature selection. PSO is based on the idea of collaborative behaviour and swarming in biological populations, inspired by the social behaviour of bird flocking and fish schooling [11]. PSO uses a swarm of particles, each characterized by its position and velocity.

The binary version of PSO [12] is called Binary Particle Swarm Optimization (BPSO), in which 0's and 1's are assigned to the constantly updated position vector, and a sigmoidal function maps the velocities into the range 0 to 1. A bit value of 1 in the binary position vector implies that the corresponding feature is selected; a bit value of 0 implies that the feature is not selected. BPSO operates as follows:

1) The acceleration constants c1, c2 and the inertia weight w are initialized.
2) A swarm of N particles is generated with random positions and zero velocities, as given in Eq. (4).
3) While the particles search the solution space for the optimal solution, they keep track of two quantities, namely pbest (personal best solution) and gbest (global best solution).
4) Particles are evaluated using the fitness function given in Eq. (7) [13].
5) If the value evaluated from Eq. (7) is greater than the existing pbest, the pbest value is revised.
6) gbest is updated as the best of the pbest values.
7) Using the revised gbest and pbest values, the particle velocities are revised using Eq. (5).
8) On meeting the given criteria or on completion of the iterations, the particles converge, giving an optimum feature subset.

X(t, d) = random(position),  V(t, d) = 0    (4)

V(i, d) = r × w × V(i, d) + c1 × r × (pbest(i, d) − X(i, d)) + c2 × r × (gbest(i, d) − X(i, d))    (5)

f(x) = 1 / (1 + e^(−v_i(t+1)))    (6)

where r denotes a uniform random number in [0, 1]. The sigmoid value f(x) of Eq. (6) is used as a probability to set each position bit:

X(t + 1) = 1 if f(x) > r, and X(t + 1) = 0 otherwise.

F = Σ_{i=1}^{L} (M_i − M_0)ᵗ (M_i − M_0)    (7)

with

M_i = (1/N_i) Σ_{j=1}^{N_i} W_j^(i),   M_0 = (1/N) Σ_{i=1}^{L} N_i M_i,

where L is the number of subjects, W_j^(i) is the j-th feature vector of subject i, N_1, N_2, ..., N_L are the numbers of image samples per subject, N is the total number of samples, M_1, M_2, ..., M_L are the means of the corresponding subjects and M_0 represents the global mean.
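The velocity and position updates above follow the binary PSO of Kennedy and Eberhart [12]. As a rough illustration (not the paper's MATLAB code), the Python/NumPy sketch below applies steps 1)-8) with the fitness function of Eq. (7); the swarm size, iteration count, values of c1, c2 and w, and the dummy data are placeholder assumptions, not the paper's settings.

```python
import numpy as np

def fitness(mask, X, y):
    """Eq. (7): sum of squared distances between each class mean and the
    global mean, computed on the selected feature subset (mask == 1)."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    M0 = Xs.mean(axis=0)                                  # global mean
    total = 0.0
    for c in np.unique(y):
        diff = Xs[y == c].mean(axis=0) - M0               # (Mi - M0)
        total += float(diff @ diff)                       # (Mi - M0)^t (Mi - M0)
    return total

def bpso_select(X, y, n_particles=20, n_iter=50, w=0.7, c1=2.0, c2=2.0, seed=0):
    """Binary PSO over feature masks (bit 1 = feature selected), following
    steps 1)-8) and Eqs. (4)-(6)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) < 0.5).astype(int)    # random positions, Eq. (4)
    vel = np.zeros((n_particles, d))                          # zero velocities, Eq. (4)
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, d)), rng.random((n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)   # Eq. (5)
        prob = 1.0 / (1.0 + np.exp(-vel))                     # sigmoid, Eq. (6)
        pos = (rng.random((n_particles, d)) < prob).astype(int)
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit                            # step 5)
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()            # step 6)
    return gbest                                              # optimum feature mask

# Usage with dummy data: 10 subjects, 5 samples each, 60 features.
X = np.random.rand(50, 60)
y = np.repeat(np.arange(10), 5)
mask = bpso_select(X, y)
print("selected", int(mask.sum()), "of", mask.size, "features")
```

Because the fitness of Eq. (7) rewards feature subsets whose class means lie far from the global mean, the converged gbest mask marks the features retained for classification.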
IV. PROPOSED METHODOLOGIES

In this paper, we implement a static approach that highlights the desired regions of the given eye image and then extracts its dominant spectral features.

A. Segmentation based Background Removal (SBR)

The overall performance of any IR system depends on its ability to extract the desired Iris portion and eliminate noisy regions like eyebrows, eyelids, etc. The fundamental idea is to find the radius and the location of the center in order to reconstruct the circles with these values. The entire process is based on the fact that the pupil is darker than the Iris.

Hence, the first step is to bring out the dark region of the pupil. To accomplish this task, we propose a pre-processing step based on the 2-D Discrete Wavelet Transform (DWT). The wavelet transform is a mathematical tool that can examine an image in the time and frequency domains simultaneously. We use Haar as the mother wavelet because of its simple algorithm and high recognition rate. The DWT computes four sets of coefficients, namely cA (approximation components), cH (horizontal components), cV (vertical components) and cD (diagonal components). We use the cA coefficients to obtain a reduced set of Iris features. A 2-level DWT decomposition performs the transformation twice and scales the image down to one fourth of the original size along rows and columns. On applying the 2-D DWT to the image, unwanted regions are eliminated, as shown in Fig. 3(ii),(iii).

The next step is to find the center and radius of the pupil. As discussed before, the Circular Hough Transform (CHT) helps in identifying a circular region present in an image and calculating its parameters. When applied to the DWT-transformed image, CHT accurately detects the circular region of the pupil, as shown in Fig. 3(iv). If multiple circles are detected, the circle whose center coordinates are closest to the center of the image is chosen.

Here, we consider the centers of the pupil and the Iris to be at the same location. An extrapolated Iris Mask (IM) is created using the obtained center point, as shown in Fig. 3(v); the IM is geometrically approximated depending upon the radius length of the Iris. Multiplying the IM element-wise with the original resized image extracts the Iris region from it. Considering only the obtained Iris portion results in successful removal of the background region, giving the segmented image shown in Fig. 3(vi).

The segmented image may still contain non-Iris regions and other noisy portions, but their effect is negligible and is hence neglected here. The obtained segmented image is used in the further steps of the IR system. Since the segmented image is smaller than the original image, the computation speed improves and the number of features reduces. The database can also be customized by storing the obtained segmented images, which improves the overall timing and reduces the complexity of the system.
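A minimal sketch of the SBR pipeline just described, written in Python with OpenCV and PyWavelets rather than the paper's MATLAB; the Hough thresholds, the fixed Iris radius of 15 pixels and the file name are placeholder assumptions.

```python
import cv2
import numpy as np
import pywt

def sbr_segment(eye_gray):
    """Segmentation based Background Removal sketch: 2-level Haar DWT,
    CHT on the approximation band, extrapolated circular Iris mask,
    and element-wise masking of the resized eye image."""
    resized = cv2.resize(eye_gray, (80, 60))              # working size used in the paper

    # 2-level Haar DWT; keep only the approximation (cA) coefficients.
    cA, _ = pywt.dwt2(resized.astype(np.float32), 'haar')
    cA2, _ = pywt.dwt2(cA, 'haar')                         # 1/4 size along rows and columns
    cA2_u8 = cv2.normalize(cA2, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Circular Hough Transform on the low-resolution band to find the dark pupil.
    circles = cv2.HoughCircles(cA2_u8, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                               param1=80, param2=10, minRadius=2, maxRadius=10)
    if circles is None:
        return resized                                     # fallback: no circle found
    h2, w2 = cA2_u8.shape
    # If several circles are detected, keep the one closest to the image center.
    cx, cy, _ = min(circles[0], key=lambda c: (c[0] - w2 / 2) ** 2 + (c[1] - h2 / 2) ** 2)

    # Scale the center back to the resized image and build the extrapolated Iris mask.
    scale = 4                                              # two DWT levels halve each axis twice
    center = (int(cx * scale), int(cy * scale))
    iris_radius = 15                                       # fixed radius, cf. the range [14 16]
    mask = np.zeros_like(resized)
    cv2.circle(mask, center, iris_radius, 255, thickness=-1)
    return cv2.bitwise_and(resized, mask)                  # element-wise masking

# Usage (file name is a placeholder):
# segmented = sbr_segment(cv2.imread("eye.bmp", cv2.IMREAD_GRAYSCALE))
```

The two Haar decompositions shrink each axis by a factor of four, so the circle search runs on a 20 × 15 approximation band, which is what keeps the pupil detection fast.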
B. Triangular shaped DCT (TriDCT) extraction

The Discrete Cosine Transform (DCT) is an illumination normalization approach used to represent the data of an image in terms of its frequency components. The DCT avoids complex arithmetic computations and offers ease of implementation.
Fig. 3: (i) Original Image (ii) cA component after single level DWT (iii) cA component after two level DWT (iv) After CHT (v) Iris Mask (vi) Segmented Image.
Fig. 4 compares the DCT spectra and the corresponding surface plots for the square and triangular shaped DCT extraction methods. The triangular region is taken up to K pixels from the origin of the spectrum. Then,

Number of features extracted = K × (K + 1) / 2.

Fig. 4: (i) Segmented Iris (ii) Spectrum of square DCT (iii) Spectrum of TriDCT (iv) Surf plot of square DCT (v) Surf plot of TriDCT.

Compared with the conventional square extraction method, the TriDCT method extracts only about 0.52 times as many features. The quality of the extracted feature set largely determines the performance of the system; with this extraction methodology, the reduction in feature count improves the speed of the system by reducing training and testing time.
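As an illustration, a short Python sketch of the TriDCT extraction under the stated formula, using SciPy's DCT instead of the paper's MATLAB; the value K = 25 and the dummy image are placeholder assumptions.

```python
import numpy as np
from scipy.fft import dctn

def tridct_features(segmented, K=25):
    """TriDCT feature extraction sketch: take the 2-D DCT of the segmented
    Iris image and keep only the triangular low-frequency region u + v < K,
    which yields K*(K+1)/2 coefficients (versus K*K for a square region)."""
    spectrum = dctn(segmented.astype(np.float64), norm='ortho')
    feats = [spectrum[u, v] for u in range(K) for v in range(K - u)]
    return np.asarray(feats)                  # length K*(K+1)/2

# Usage with a dummy segmented image of size 80 x 60 (placeholder values):
segmented = np.random.rand(60, 80)
f = tridct_features(segmented, K=25)
print(f.size)                                 # 325 = 25*26/2
```

For the same K, a square region would give K² coefficients, so the triangular region keeps (K + 1)/(2K) of them, which is exactly 0.52 for K = 25 and is consistent with the reduction factor quoted above; the exact K used in the paper is not stated in the surviving text.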
V. EXPERIMENTAL RESULTS AND DISCUSSIONS

The proposed technique was implemented in MATLAB [16] and evaluated on two databases: the IITD database [17] and the MMU database [18]. Experiments are performed for different training-to-test ratios, and the results are tabulated based on recognition rate and timing analysis. All tests are run on a PC powered by an Intel Core i7 processor with a clock frequency of 2.4 GHz and 8 GB of RAM.

A. Experiment 1 - MMU database

The MMU database consists of 45 subjects, each with 5 eye images. The size of each image is 320 × 240 pixels; images are resized to 80 × 60 in our experiments. Sample images are shown in Fig. 5(ii).

The desired Iris portion is extracted using the proposed methodology, which results in a better revelation of the feature vector. A radius range of [14 16] is considered best for creating the Iris Mask.
Fig. 5: Sample images of (i) IITD Database (ii) MMU Database.

TABLE I: MMU Database Results. (a) Recognition Results highlighting the advantage of proposed technique.

For the IITD database, the obtained results are tabulated in Table II. The variation of RR with respect to the Iris radius considered during experimentation is shown in Fig. 7.

TABLE II: IITD Database Results.
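For completeness, a minimal sketch of the Euclidean distance classifier from the testing phase of Fig. 1 and of how the Recognition Rate (RR) can be computed; the feature dimension, the 2:3 gallery/probe split and the random vectors are placeholder assumptions, not the reported results.

```python
import numpy as np

def recognition_rate(gallery, gallery_labels, probes, probe_labels):
    """Nearest-neighbour identification with Euclidean distance: each probe
    feature vector is assigned the label of the closest gallery vector, and
    RR is the percentage of probes identified correctly."""
    correct = 0
    for x, true_label in zip(probes, probe_labels):
        dists = np.linalg.norm(gallery - x, axis=1)       # Euclidean distances
        if gallery_labels[int(np.argmin(dists))] == true_label:
            correct += 1
    return 100.0 * correct / len(probes)

# Usage with dummy feature vectors: 45 subjects, 2 training and 3 test images each.
gallery = np.random.rand(90, 325)
gallery_labels = np.repeat(np.arange(45), 2)
probes = np.random.rand(135, 325)
probe_labels = np.repeat(np.arange(45), 3)
print("RR = %.2f %%" % recognition_rate(gallery, gallery_labels, probes, probe_labels))
```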
Fig. 7: Recognition Rate vs Iris Radius for IITD database.

Fig. 8: Plot of experimental results for IITD database for varying Training to Testing ratios.

Fig. 9: Plot of experimental results for MMU database for varying Training to Testing ratios.

VI. CONCLUSIONS

A novel approach for Iris Recognition (IR) is proposed which uses the Segmentation based Background Removal (SBR) and TriDCT techniques for extracting enhanced Iris features. The SBR technique successfully detects the desired Region Of Interest (ROI) and helps in eliminating the non-iris regions and other noisy background details. TriDCT and BPSO help in extracting the optimal number of features. A Euclidean distance classifier is applied to identify the closest object from the trained features.

The experiments performed show a significant increase in Recognition Rate (RR) on the IITD and MMU databases. A detailed timing analysis is provided for different Training-to-Test Ratios (TTR). Our method is found to be robust to image variations like contrast, illumination and shift variance. This paper uses a static approach to create the Iris Mask (IM) with a fixed radius length; in future, a more sophisticated approach can be adopted in order to get rid of the undesired portions of the eye image. The reliability of the proposed method can also be investigated on larger databases of eye images with diverse variations.

REFERENCES

[1] Anil K. Jain, Arun Ross and Salil Prabhakar: "An Introduction to Biometric Recognition", IEEE Transactions on Circuits and Systems for Video Technology, Special Issue on Image- and Video-Based Biometrics, vol. 14, no. 1, 2004.
[2] John G. Daugman: "High confidence visual recognition of persons by a test of statistical independence", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, 1993.
[3] R. P. Wildes: "Iris recognition: an emerging biometric technology", Proceedings of the IEEE, vol. 85, no. 9, pp. 1348–1363, 1997.
[4] Y. Du, R. Ives, D. Etter, T. Welch, and C. Chang: "A new approach to iris pattern recognition", Proceedings of the SPIE European Symposium on Optics/Photonics in Defence and Security, vol. 5612, pp. 104–116, 2004.
[5] J. Mira and J. Mayer: "Image feature extraction for application of biometric identification of iris - a morphological approach", Proceedings of the 16th Brazilian Symposium on Computer Graphics and Image Processing, pp. 391–398, 2003.
[6] Surjeet Singh and Kulbir Singh: "Segmentation Techniques for Iris Recognition System", International Journal of Scientific & Engineering Research, vol. 2, 2011.
[7] J. Daugman: "New Methods in Iris Recognition", IEEE Transactions on Systems, Man, and Cybernetics, vol. 37, no. 5, 2007.
[8] Li Ma, Yunhong Wang and Tieniu Tan: "Iris recognition using circular symmetric filters", Pattern Recognition, vol. 2, 2002.
[9] Paul V. C. Hough: "Method and means for recognizing complex patterns", US Patent 3069654, 1962.
[10] W. M. K. Wan Mohd Khairosfaizal and A. J. Nor'aini: "Eyes Detection in Facial Images using Circular Hough Transform", 5th International Colloquium on Signal Processing and Its Applications (CSPA), 2009.
[11] James Kennedy and Russell Eberhart: "Particle Swarm Optimization", Proceedings of the IEEE International Conference on Neural Networks, vol. 6, pp. 1942–1948, 1995.
[12] J. Kennedy and R. C. Eberhart: "A Discrete Binary Version of the Particle Swarm Algorithm", Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, vol. 5, pp. 4104–4108, 1997.
[13] Chengjun Liu and Harry Wechsler: "Evolutionary Pursuit and Its Application to Face Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 6, pp. 570–582, 2000.
[14] Hafiz Imtiaz and Shaikh Anowarul Fattah: "A DCT-based Local Feature Extraction Algorithm for Palm-print Recognition", International Journal of Scientific and Technology Research, vol. 1, 2012.
[15] Bouchra El Qacimy, Mounir Ait Kerroum and Ahmed Hammouch: "Feature Extraction based on DCT for Handwritten Digit Recognition", International Journal of Computer Science Issues (IJCSI), vol. 11, no. 2, 2014.
[16] MATLAB: http://www.mathworks.in/.
[17] IIT Delhi database: http://web.iitd.ac.in/biometrics/DatabaseIris.htm.
[18] MMU database: http://pesona.mmu.edu.my/~ccteo/.
[19] Abhiram M. H., Chetan Sadhu, K. Manikantan and S. Ramachandran: "Novel DCT Based Feature Extraction for Enhanced Iris Recognition", International Conference on Communication Information & Computing Technology (ICCICT), 2012.
[20] Swathi S. Dhage, Sushma Shridhar Hegde, K. Manikantan and S. Ramachandran: "DWT-based Feature Extraction and Radon Transform based Contrast Enhancement for Improved Iris Recognition", International Conference on Advanced Computing Technologies and Applications (ICACTA), 2015.
[21] Rakesh S. M., Sandeep G. S. P., K. Manikantan and S. Ramachandran: "DFT-based Feature Extraction and Intensity Mapped Contrast Enhancement for Enhanced Iris Recognition", International Conference on Signal and Image Processing (ICSIP), 2013.