Deep Learning for Iris Recognition
Study the effect of eye diseases on the performance of iris segmentation and
recognition using transfer deep learning methods
Abbadullah .H Saleh, Oğuzhan Menemencioğlu *
Karabuk University, Department of Computer Engineering, Karabuk 78050, Turkey
Keywords: Iris recognition, Iris segmentation, Eye diseases, Deep learning, Transfer learning, Image processing

Abstract: A new deep learning-based iris recognition system is presented in the current study for the case of eye disease. Current state-of-the-art iris segmentation is based either on traditional low-accuracy algorithms or on heavyweight deep models. In the segmentation part of the current study, a new iris segmentation method based on illumination correction and a modified circular Hough transform is proposed. The method also performs a post-processing step to minimize the false positives. Besides, a ground truth of iris images is constructed to evaluate the segmentation accuracy. Many deep learning models (GoogleNet, Inception_ResNet, XceptionNet,
EfficientNet, and ResNet50) are applied through the recognition step using the transfer learning approach. In the
experiment part, two eye disease-based datasets are used. 684 iris images of individuals with multiple ocular
diseases from the Warsaw BioBase V1 and 1,793 iris images from the Warsaw BioBase V2 are also used. The
CASIA V3 Interval Iris dataset, which contains 2,639 photographs of healthy iris, is used to train deep models
once, and then the transfer learning of this normal-based eye dataset is used to retrain the same deep models
using the Warsaw BioBase datasets. Different training and evaluation scenarios are used during the
experiments. The trained models are evaluated using validation accuracy, training time, TPR, FNR, PPR, FDR,
and test accuracy. The best accuracies are 98.5% and 97.26%, which are recorded by the ResNet50 (2-layer of
transfer learning) model trained on Warsaw BioBase V1 and V2, respectively. Results indicate that the effect of
eye diseases is concentrated on the segmentation phase; for recognition, no significant impact is observed.
Some diseases that affect the iris structure (bloody eyes, trauma, iris pigment) can partially affect the
recognition step. Our study is compared with similar studies in the case of eye diseases. The comparison proves the
efficiency and high performance of the proposed methodology against all previous models on the same iris
datasets.
1. Introduction

Human biometrics features a subfield called iris recognition that has been widely used in human recognition tasks. Even identical twins' iris patterns differ, and because of the iris' distinctive characteristics, scientists have been able to create highly reliable and robust iris identification systems [1,2].

The main problem with human biometrics is their variation over time (such as facial biometrics). Other problems can also cause some issues with human recognition, like exposure to wounds, accidents, burns, and cosmetic surgery (face, fingerprints, palm prints, etc.) [3]. The good news is that the iris is a non-intrusive, high-accuracy biometric. Besides that, the issues affecting other biometrics are minor or absent in iris recognition systems [4,5]. However, many factors affect the iris recognition task, including illumination variations, pose variations, occlusion, and eye diseases [6,7]. Some eye conditions affect both iris segmentation and recognition, whereas in other cases, such diseases have the greatest influence on the iris segmentation phase [6].

There are many eye diseases in which the iris structure can be affected and changed, causing problems for iris recognition systems [8,9]. However, the impact of eye pathology cases on iris segmentation and recognition systems has not been questioned enough.

In the current study, a new iris segmentation method is proposed, based on a modified Hough transform and post-processing techniques to minimize the false positive rate that is common in most recent studies. Another contribution of the current study is iris recognition in the case of disease, which will be studied and discussed, and all diseases that may
affect the iris recognition process will be identified. The most recent deep learning models will also be trained using both normal and disease-occurrence-based datasets, and the performance will be evaluated to define the effect of eye diseases on the iris recognition part.

In the next section, the related work is presented. Following that, the used materials and methods are explained. After that, the results are presented. Finally, the conclusion and future work are illustrated and discussed.

2. Related work

Many studies have been conducted in the iris segmentation and recognition fields. Some of them used the iris image directly, while others applied a segmentation process in the first step.

Roizenblatt et al. [10] proposed an iris recognition system. Their method is applied after cataract surgery. A dataset consisting of 55 eye images related to 55 individuals is used for building the system. In the classification step, the Hamming distance measure is applied. Their results proved that there were six cases in which the recognition failed.

Based on their own gathered dataset, Pierscionek et al. [11] built an iris recognition system. Since the iris ROI was previously obtained, their iris dataset did not require segmentation, as it captured just the iris region (not the entire eye). There were only 27 healthy people in the dataset. To normalize the iris, the center of the pupil is also manually found. The study had no evaluation process; it included the iris localization results only.

The case of ocular diseases was discussed in the study of Aslam et al. [12]. Fifty-four individuals with anterior segment diseases were used to build the iris dataset. They built iris templates before and after treatment and used them to apply the iris matching. The Hamming distance measure was used to perform the matching process. The results indicated a degradation in the performance of iris recognition due to the anterior segment diseases (corneal edema, iridotomies, and conjunctivitis).

The IIT Delhi iris database containing 2,240 eyes was used in the research of Minaee and Abdolrashidi [13]. They used the ResNet50 deep learning network and got 95.5 % accuracy.

Trokielewicz et al. [7] were the first to build a diseased-eye dataset of individuals from the Medical University of Warsaw. Their dataset consisted of 1,288 eye images of 37 individuals, including those with cataract eye disease. The iris recognition part is done using three built-in systems (VeriEye, MIRLIN, and BiomIrisSDK). The researchers compared these iris recognition systems in terms of cataract eye disease and concluded that the false non-match rate increased compared to the same performance on normal iris datasets.

Later, they built another dataset called Warsaw BioBase [8]. They used a subset of this dataset, including 1,353 iris images of 219 individuals. Five different categories were used to divide this dataset (Healthy, Tissue, Clear, Geometry, and Obstructions). The common iris recognition systems (MIRLIN, VeriEye, and OSIRIS) were trained using this subset of the Warsaw BioBase dataset. The study used the Failure to Enroll Rate (FTR) to evaluate the performance of these systems. They concluded that the worst performance was related to obstructions and geometry diseases, in which the FTR was 18.36 % and 5.13 %, respectively.

In 2017, they continued their work on the iris and studied the effect of ocular pathologies on the recognition systems. The accumulated dataset of 230 participants consisted of 2,996 eye images and more than 20 eye diseases. The study used four built-in recognition systems (OSIRIS, VeriEye, IriCore and MIRLIN), and considered four concepts. The first one was the influence of eye diseases on the enrollment step, while the second one was the situation of non-visible changes in the eye structure. The third one was the case in which the eye structure changed, while the final case dealt with the effect on the iris segmentation part. The researchers used a subset of their collected dataset, including 1,353 images of 219 individuals. They proved that the geometrical eye diseases affected the performance significantly by decreasing the FTR. They also concluded that the iris segmentation step was the most influential part of iris recognition systems because of those eye diseases. They also proved that obstruction-related eye diseases affected all steps of the iris recognition system [14].

An effective iris segmentation method was suggested by Rajpal et al. [15]. They proposed the EAI-Net model based on the U-Net architecture, treating the segmentation process as a 3-class problem. They also encoded the complex parts of the iris image. A qualitative and quantitative evaluation was applied to evaluate their model using the IITD, UBIRISv2, CASIAv4-Interval, and CASIAv4-Thousand datasets.

Sadhya et al. [16] introduced a mechanism for the efficient extraction of consistent bit locations from binarized iris features for biometric systems. Their proposed model formed dynamic clusters of invariant positions from iris feature samples and marked the centers of these clusters as the most consistent locations. They also introduced a criterion called t-consistency for defining the worst-case consistency of IrisCodes with respect to a tolerance threshold. Their model was tested on the CASIA-V3 Interval and CASIAv4 Thousand databases, achieving 0.88 and 0.65 consistency of the IrisCodes, respectively. However, the proposed model has limitations regarding its efficacy for handling noisy databases.

In a study by Shi et al. [17], an accurate pupil detection and tracking system for low-quality iris image environments was proposed. The LSTM deep learning architecture was used in the motion detection stage. They trained their models using 10,600 images and 75 videos from three different datasets. They obtained an accuracy of 81 %.

Francese et al. [18] proposed an iris segmentation and localization system for the case of Coloboma eye disease. They studied the influence of this disease on the performance of segmentation using two algorithms (Canny edge detector and Daugman's algorithm). The segmentation results showed that 52.63 % and 84.21 % of eye images were incorrectly segmented. In the recognition step, they trained the ResNet50 model using the correctly segmented images (a subset consisting of images taken of 238 individuals) and got a 99.79 % accuracy. Their results needed improvement by enlarging the size of the dataset.

In 2022, a deep iris recognition system for the case of image challenges was proposed by Jia et al. [19]. They used the well-known Convolutional Neural Networks (CNN) and the multi-level interaction methodology to fuse the iris features obtained from multiple CNNs together. They took advantage of the masking approach to remove the noisy parts of iris images of the CASIA-IrisV4-Thousand, CASIA-IrisV4-Lamp, and ND-IRIS-0405 datasets. To train, validate, and test their system, they accumulated 9,578, 1,092, and 5,321 iris photos, respectively. The False Accept Rates (FARs) for the proposed methodologies on CASIA-IrisV4-Thousand, CASIA-IrisV4-Lamp, and ND-IRIS-0405 were 10.41 %, 5.8 %, and 5.49 %, respectively.

Recently, a new challenging iris dataset was collected by Hu et al. [20]. This new dataset was called the CASIA IRIS-Degradation V1 (DV1) dataset, and it includes 3,577 iris images taken from 15 individuals using an acquisition system with no constraints on the imaging conditions. The resultant images included some degradations like occlusion, off-angle large scale, illumination variations, and some glasses-present situations. For both the segmentation and recognition steps, the researchers used built-in systems. In the segmentation part, they used OSIRIS, IrisSeg, DeepLab, Masek, RTV-L and U-Net. For the recognition part, MaxoutCNN, Masek, OM, UniNet, DGR, and AFINet are used. Results indicated that the U-Net segmentation model was the most precise, with a 95.17 % accuracy, while for the iris recognition part, the best model was UniNet with a 13.13 % Equal Error Rate (EER).

Transfer learning (as a type of deep learning) has been used in many iris recognition systems (for normal and disease-occurrence datasets). Soni et al. combined the NASNet and morphological feature extraction methods to design an iris recognition system based on the Circular Hough Transform (CHT). The CASIA dataset, consisting of 1,344 iris images, was used, and a 100 % validation accuracy was obtained [21]. The used dataset
had no challenge.

The CNN deep network was used in a study by Sujana and Reddy [22]. The CASIA V1 and IITD datasets were used, but only 108 individuals were involved in the experiments. They obtained 95.4 % and 98 % accuracy for the CASIA and IITD datasets, respectively.

Recently, the transfer learning of the VGG and MobileNet V2 networks was used in a Ph.D. study [23]. The IIT Delhi and MMU2 datasets were used, and a fused dataset containing 1,957 images of 195 individuals was built. Without applying any non-linear scaling, the obtained accuracies of the MobileNet V2 network for the MMU2 dataset were 86 % for the validation set and 90 % for the test set, respectively. However, when non-linear scaling was applied, the accuracies decreased to 86 % and 84 %, respectively. Conversely, non-scaling improved the validation accuracy for the IIT Delhi dataset from 82.79 % to 84.774 %. The accuracy was improved by 1.8 % for the test set, which yielded the same outcome. For the VGG network, however, the findings showed that performance was improved when non-linear scaling was used with a factor of 0.8. Although the study provided a few results for the case of applying non-linear scaling, the experiments still do not show the value of doing so, because it performed poorly in several test scenarios.

Some earlier research utilized datasets that were too small, whereas other work utilized datasets that posed no challenge. In some studies, problems including variations, eyelid occlusion, noise, iris reflection, etc., were considered. However, there was insufficient research on eye conditions (some works concentrated on a specific disease, while others dealt with too small a dataset). Among them, some focused only on the impact of eye disorders on iris segmentation, while others investigated the impact of such diseases using well-known built-in systems. The current research is the first to deal with datasets covering more than 20 eye diseases. The research also considers other image degradations like occlusion, illumination variation, and reflection. This study proposes a novel iris segmentation method based on a modified circular Hough transform and some post-processing steps to minimize the false positive rate. The research uses the concept of transfer learning to compare many different deep models (ResNet50, GoogleNet, EfficientNet, XceptionNet, Inception_ResNet) and shows the effect of eye diseases on the performance of such models. The study also suggests using two layers of transfer learning and comparing the results. The study will also be compared to the other state-of-the-art methods in both iris segmentation and recognition to define our contribution.

3. Materials and methodology

3.1. Proposed materials

In this research, two iris datasets are used. The first one is the Warsaw BioBase V1 dataset [9,24], containing 684 iris images taken from 53 individuals, while the second is the Warsaw BioBase V2 dataset [11,12], in which there are 1,793 iris images of 115 individuals. Both datasets include individuals suffering from more than 20 eye diseases, in addition to some healthy cases (for one of the left or right eyes). Some samples contain one disease, while others may contain multiple ones. Two or three sessions are taken for some individuals in both datasets. The resolution of the images is 640x480, and their format is BMP. Warsaw BioBase V2 includes all the diseases of V1 besides new ones [25]. For the hardware support, a PC with a 64-bit OS, 16.0 GB of RAM, an Intel i7-10750H CPU, and an NVIDIA GeForce GTX 1660 Ti with 6 GB of memory is used. On the other hand, for the software support, pretrained models (EfficientNet, XceptionNet, Inception_ResNet, ResNet50, and GoogleNet), the image processing toolbox, and the deep learning toolbox of the MATLAB programming environment are used.

3.2. Iris segmentation methodology

There are many known iris segmentation methods that have been used in previous studies. Pupil localization and iris normalization were applied to CASIA Interval-V4 in [26], getting 98 % accuracy. Iris pattern extraction was proposed in [27] to reduce occlusion problems, but this methodology increased the false negative rate significantly. Li et al. [28] suggested using K-means and a Residual U-Net for iris semantic segmentation on the CASIA-Iris Thousands dataset. A CNN deep network was used by Trokielewicz et al. [29], getting a 3.11 % EER. A fuzzy-based segmentation model was proposed by Nachar and Inaty [30]. Their experiments on the iris localization step achieved an accuracy of 99.85 %. In the current study, an improved Circular Hough Transform (CHT) segmentation methodology is used, based on a previous study [31]. In the first step of this approach, the eye image is pre-processed using an illumination compensation algorithm based on morphological image processing. The pupil is first localized using a threshold, then the morphological opening operation is used to eliminate outlier pixels, and the "clear border" operation is applied to remove the unwanted border regions. The radius of the pupil is then computed and used to determine the right border of the iris, from which the illumination correction will be applied. The illumination correction strategy adds incremental illumination, starting from the right border of the iris until reaching the right border of the eye image. The same approach is used for the right eye after flipping it, so that the operations are identical for both the left and right eyes. As a result, the corrected image is obtained. In the next step, the original image is closed using a disk structuring element to configure a mask image from which the corrected image is subtracted, removing all illumination variation and leaving the iris region as a dark part of the eye image, so that the segmentation will be easier.
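As an illustration, the following MATLAB sketch outlines this pre-processing step under stated assumptions: the input file name, the pupil threshold, the structuring-element sizes, and the illumination ramp strength are illustrative values, not parameters reported in the paper.

```matlab
% Sketch of the pre-processing step (Image Processing Toolbox).
% Assumptions: grayscale input; threshold, disk sizes, and ramp strength
% are illustrative tuning values.
I = im2double(imread('eye.bmp'));            % 640x480 grayscale eye image

% Pupil localization: threshold, morphological opening, border clearing
bw = I < 0.15;                               % assumed pupil threshold
bw = imopen(bw, strel('disk', 5));           % suppress outlier pixels
bw = imclearborder(bw);                      % drop regions on the image border
stats = regionprops(bw, 'Centroid', 'MajorAxisLength');
[maxL, k] = max([stats.MajorAxisLength]);    % largest blob taken as the pupil
cx = stats(k).Centroid(1);
irisRight = round(cx + maxL);                % approximate right iris border

% Incremental illumination from the right iris border to the image border
corrected = I;
w = size(I, 2);
ramp = linspace(0, 0.4, w - irisRight + 1);  % assumed maximum boost
corrected(:, irisRight:end) = min(1, corrected(:, irisRight:end) + ramp);
% (for right eyes the image is flipped first, e.g., fliplr(I))

% Closing the original image with a disk structuring element gives a
% background mask; subtracting the corrected image leaves the iris as the
% dominant contrasting region for the next (CHT) step
mask = imclose(I, strel('disk', 40));        % disk radius is an assumption
irisDark = mask - corrected;
```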
For the second step, the CHT is applied, driven by a computed range of iris radii based on the pupil circle, as Eq. (1) shows:

IrisRadius_range = 2 * MaxL ± displacement    (1)

where MaxL is the length of the major axis of the pupil region, and displacement is a scalar value that is used to tune the algorithm and obtain the best iris circle (iris ROI).

In the next step, the iris mask (circle) is obtained. This mask is applied to the original eye image to get the iris circle. The problem with this result is that there will be false positives and false negatives in some cases of occlusion. This issue is solved by using a post-processing approach in which the high-illumination pixels (representing the non-iris regions) are removed based on a μ-based threshold algorithm (μ represents the mean gray value of the original eye image restricted to the iris ROI coordinates only). Comparing this approach to the previous iris methodology proves that this one removes the non-iris parts and preserves the iris patterns, yielding high true positive and true negative rates and low false positive and false negative rates. Fig. 1 illustrates an example of the proposed iris segmentation method where the final iris image has no false positives or false negatives.
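Continuing the previous sketch, the CHT step and the μ-based post-processing could look as follows in MATLAB. Here imfindcircles is MATLAB's circular Hough transform; the displacement value, polarity, and sensitivity are assumptions used only for illustration.

```matlab
% Sketch of the CHT search driven by Eq. (1) and the mu-based cleanup.
displacement = 15;                            % assumed tuning scalar
rRange = round([2*maxL - displacement, 2*maxL + displacement]);  % Eq. (1)
[centers, radii] = imfindcircles(irisDark, rRange, ...
    'ObjectPolarity', 'bright', 'Sensitivity', 0.92);  % assumed settings

% Keep the strongest circle as the iris ROI and build a circular mask
[X, Y] = meshgrid(1:size(I, 2), 1:size(I, 1));
irisMask = (X - centers(1, 1)).^2 + (Y - centers(1, 2)).^2 <= radii(1)^2;
irisROI = I .* irisMask;

% Post-processing: remove high-illumination (non-iris) pixels using a
% mu-based threshold, where mu is the mean gray level of the original
% image restricted to the iris ROI coordinates
mu = mean(I(irisMask));
irisROI(irisROI > mu) = 0;   % suppress eyelid/sclera/reflection pixels
```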
Table 1: False segmentation results of Warsaw BioBase V1 (columns: Person ID, Disease or eye condition, TSR, FSR).
Table 2: False segmentation results of Warsaw BioBase V2 (columns: Person ID, Disease or eye condition, TSR, FSR).
Fig. 4. A: The FP and FN of a segmented sample of the Warsaw BioBase V1 dataset; B: the ROC curves of the same sample, with AUC = 0.99.
Table 3: Segmentation evaluation metrics of the Warsaw BioBase V1 dataset (columns: Version, TSR %, TPR %, FNR %, TNR %, FPR %, Accuracy %, AUC, EER %).
Table 4: Evaluation results of the deep models (ResNet50, GoogleNet) on the Warsaw BioBase V1 and V2 datasets, for all samples and for segmented samples only, compared with the Trokielewicz study [8] (MIRLIN, VeriEye and Biom-IrisSDK), Minaee and Abdolrashidi 2019 [13] (ResNet50), and Jia et al. 2022 [19] (ConvNet with the masking approach).
4.2. The recognition results

Many training scenarios are performed to evaluate the accuracy of the proposed methodologies. The training scenarios are suggested in different ways. The first scenario (Scenario 1) studies the effect of eye diseases on both models (GoogleNet, ResNet50) using one layer of transfer learning, while the second scenario (Scenario 2) studies the effect of different splitting criteria (i.e., the percentages of the training, validation, and test sets). The third scenario (Scenario 3) discusses the two layers of transfer learning. For readability, we will refer to the scenarios by their numbers. For Scenario 1, 1,238 grayscale iris images of Warsaw BioBase V1 corresponding to 53 different classes (individuals) are used. For Warsaw BioBase V2, there are 3,128 iris images of 115 classes. Table 4 includes the detailed results of training and evaluating the deep models using Warsaw BioBase V1 and Warsaw BioBase V2. The same scenario is performed twice, once for the segmented iris images only and once for the segmented and original iris images (named "all samples" in Table 4). All scenarios are performed using a batch size of 10, the Stochastic Gradient Descent optimizer, and a learning rate of 3e-4. All experiments use a data augmentation process, including random reflection, random translation, and random scaling, to improve the training process.
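The following MATLAB sketch shows how one such training scenario can be set up with the Deep Learning Toolbox. The folder name, folder-per-subject layout, augmentation ranges, and epoch count are assumptions; the batch size, optimizer, and learning rate follow the settings above.

```matlab
% Sketch of one training scenario (assumed folder-per-subject layout).
imds = imageDatastore('WarsawV1', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');                % 53 subject classes
[imdsTrain, imdsVal, imdsTest] = splitEachLabel(imds, 0.7, 0.2); % 70/20/10

% Augmentation: random reflection, translation, scaling (ranges assumed)
aug = imageDataAugmenter('RandXReflection', true, ...
    'RandXTranslation', [-10 10], 'RandYTranslation', [-10 10], ...
    'RandXScale', [0.9 1.1], 'RandYScale', [0.9 1.1]);
augTrain = augmentedImageDatastore([224 224], imdsTrain, ...
    'DataAugmentation', aug, 'ColorPreprocessing', 'gray2rgb');
augVal = augmentedImageDatastore([224 224], imdsVal, ...
    'ColorPreprocessing', 'gray2rgb');

% One layer of transfer learning: replace the 1000-class ImageNet head
lgraph = layerGraph(resnet50);
lgraph = replaceLayer(lgraph, 'fc1000', ...
    fullyConnectedLayer(53, 'Name', 'fc_iris'));
lgraph = replaceLayer(lgraph, 'ClassificationLayer_fc1000', ...
    classificationLayer('Name', 'cls_iris'));

% Batch size 10, SGD optimizer, learning rate 3e-4 (as stated in the text)
opts = trainingOptions('sgdm', 'MiniBatchSize', 10, ...
    'InitialLearnRate', 3e-4, 'MaxEpochs', 20, ...  % epoch count assumed
    'ValidationData', augVal, 'Shuffle', 'every-epoch');
net = trainNetwork(augTrain, lgraph, opts);
```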
Table 5: Evaluation results of the deep models using the Warsaw BioBase V1 & V2 datasets under different splitting cases (training/validation/test = 70 %/15 %/15 %, 60 %/20 %/20 %, and 70 %/20 %/10 %).
In all cases of Scenario 1, the ResNet50 performance is the best according to both validation and test metrics. Using the segmented and original iris images together improves the performance, because deep networks work better with a larger data size. ResNet50 validation accuracy, for example, has increased by 5.7 % and 5.1 % for Warsaw BioBase V1 and V2, respectively. In terms of AUC and EER, the EER values range from around 0.03 to 0.18, where the best cases correspond to ResNet50. Similarly, the AUC of the ResNet50 models achieved the best performance.

In Scenario 2, the same iris images of Warsaw BioBase V1 and V2 are used but under different splitting options. Three different splitting cases are used in which the training, validation, and test sets are distributed at different percentages. Table 5 presents a detailed comparison between the GoogleNet and ResNet50 models using the different splitting cases.

Scenario 3 aims to transfer the knowledge of the learned models after training them on an iris image dataset and reuse them (two-layer training scenario) on the Warsaw versions to improve the models' performance; in the case of segmented iris samples, this yields better recognition accuracy. Since it is well known that GoogleNet and ResNet50 are trained on large-sized images of humans, plants, animals, furniture, etc., these kinds of images are somewhat irrelevant to our datasets. To get the maximum benefit of the transfer learning capability, the ResNet50 and GoogleNet models are first trained using the well-known CASIA-Interval-V3 dataset [42] (which is a completely healthy iris dataset); then the results of the original models and the new transferred ones are compared.

Table 5 shows that the best splitting scenario is 70 % for training, 20 % for validation, and 10 % for testing.

For further investigation of the trained model performance, the experiments of the splitting scenario (75 %, 15 %, 15 %) are repeated using three recent architectures (XceptionNet, EfficientB0 and Inception-ResNet), with results shown in Table 6.

Table 6: Evaluation results of the Xception, Inception-ResNet and EfficientB0 deep models using the Warsaw BioBase V1 & V2 datasets with the 75 %/15 %/15 % training scenario.

Model                    | Xception | Inception-ResNet | EfficientB0
Validation Accuracy (%)  | 73.1092  | 86.5546          | 80.6723
Validation TPR (%)       | 70.5657  | 83.6391          | 78.3639
Validation PPR (%)       | 77.2765  | 90.2710          | 86.5799
Validation FNR (%)       | 29.4343  | 16.3609          | 21.6361
Validation FDR (%)       | 22.7235  | 9.7290           | 13.4201
Test Accuracy (%)        | 76.6667  | 81.2500          | 86.6667
Test TPR (%)             | 72.7381  | 79.8065          | 84.3155
Test PPR (%)             | 81.7388  | 84.7756          | 90.3302
Test FNR (%)             | 27.2619  | 20.1935          | 15.6845
Test FDR (%)             | 18.2612  | 15.2244          | 9.6698

Table 6 shows that the best of these three models is the EfficientB0 model, with 86.67 % test accuracy (84.31 % test TPR). However, the ResNet50 model still has better accuracy than the EfficientB0 model. Comparing the EfficientB0 model with the GoogleNet model shows that the EfficientB0 model has better performance. The only advantage of GoogleNet is that its training time is lower than all other models' training times in all experiments, as Fig. 5 illustrates.

Table 7 illustrates the enhancement ratios from training the deep models using two layers of transfer learning, so that the knowledge obtained by the first trained models is transferred again to the new models and the training is repeated to enhance the performance.

Comparing the original case (one layer of transfer learning) with the two-layer case shows that using a new layer of transfer learning that is more related to the studied problem (iris recognition) increases the performance significantly. Table 7 shows that the ResNet50 accuracy, for example, is increased by 2 % and 5.9 % for the validation and test sets, respectively, after using the two-layer deep transfer learning on the Warsaw BioBase V1 dataset.

Table 7: Performance improvement values after using the second layer of transfer learning on Warsaw BioBase V1 & V2.

                         | Warsaw BioBase V1      | Warsaw BioBase V2
Metrics/Models           | ResNet50  | GoogleNet  | ResNet50  | GoogleNet
Validation Accuracy (%)  | 97.41     | 92.77      | 93.67     | 83.43
Validation TPR (%)       | 97.51     | 90.54      | 91.23     | 81.27
Validation PPR (%)       | 98.33     | 93.93      | 95.87     | 89.67
Test Accuracy (%)        | 98.5      | 94.26      | 93.15     | 83.56
Test TPR (%)             | 98.74     | 94.02      | 91.2      | 83.25
Test PPR (%)             | 98.42     | 96.25      | 95.61     | 90.4
AUC                      | 0.99      | 0.98       | 0.97      | 0.95
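Continuing the earlier training sketch, the two-layer procedure of Scenario 3 can be expressed as two successive trainNetwork calls in MATLAB. The CASIA subject count, the datastore names, and the reuse of the earlier training options are assumptions for illustration.

```matlab
% Sketch of the two-layer transfer learning scenario.
% Layer 1: ImageNet -> CASIA-Interval-V3 (healthy irises, 2,639 images)
numCasia = 249;                         % assumed CASIA subject count
lgraph1 = layerGraph(resnet50);
lgraph1 = replaceLayer(lgraph1, 'fc1000', ...
    fullyConnectedLayer(numCasia, 'Name', 'fc_casia'));
lgraph1 = replaceLayer(lgraph1, 'ClassificationLayer_fc1000', ...
    classificationLayer('Name', 'cls_casia'));
% augCasiaTrain / opts built as in the previous sketch (assumed)
netCasia = trainNetwork(augCasiaTrain, lgraph1, opts);

% Layer 2: CASIA -> Warsaw BioBase; reuse the CASIA-adapted weights and
% swap the head for the 115 Warsaw V2 subjects
lgraph2 = layerGraph(netCasia);
lgraph2 = replaceLayer(lgraph2, 'fc_casia', ...
    fullyConnectedLayer(115, 'Name', 'fc_warsaw'));
lgraph2 = replaceLayer(lgraph2, 'cls_casia', ...
    classificationLayer('Name', 'cls_warsaw'));
netWarsaw = trainNetwork(augWarsawTrain, lgraph2, opts);
```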
Similarly, for Warsaw BioBase V2, the validation and test accuracy are increased by 2.74 % and 2.4 %, respectively. All other metrics (TPR, PPR, FNR, and FDR) are also improved.

For deeper results, we repeated the experiments using different numbers of epochs to reach the best performance. Table 8 shows the results of training the models with epoch counts from 20 to 500.

Table 8: Effect of different epoch counts on training the 2-layer transfer learning of ResNet50 on the Warsaw BioBase V2 dataset.

Epochs/Metrics           | 20    | 50    | 100   | 300   | 400   | 500
Validation Accuracy (%)  | 93.67 | 94.87 | 92.77 | 94.27 | 95.78 | 94.57
Validation TPR (%)       | 91.23 | 92.05 | 90.85 | 92.54 | 94.8  | 91.24
Validation PPR (%)       | 95.87 | 96.08 | 94.48 | 95.76 | 95.72 | 95.95
Test Accuracy (%)        | 93.15 | 90.4  | 97.26 | 95.2  | 95.2  | 96.57
Test TPR (%)             | 91.2  | 89.27 | 97.22 | 94.12 | 95.06 | 95.83
Test PPR (%)             | 95.61 | 95.19 | 97.97 | 97.05 | 96.5  | 98.2

Table 8 shows that the best ResNet50 test accuracy (97.26 %) corresponds to the 100-epoch training case. To identify the diseases most influential on the iris recognition system, the performance results are used to find the samples with the highest error rates and the lowest accuracy. Table 9 includes the most frequent fault samples of the GoogleNet and ResNet50 models trained on Warsaw BioBase V2. Table 9 demonstrates that with more eye diseases, GoogleNet's performance degrades. However, this conclusion is not applicable to ResNet50, which is not impacted by eye diseases in the way GoogleNet is. The "0090" and "0060" samples are the only frequent-fault samples common to GoogleNet and ResNet50. Sample "0090" contains two false discovery errors in ResNet50 and one false negative error in GoogleNet. On the other hand, sample "0060" for GoogleNet has one false negative and two false discovery mistakes. However, when the number of ResNet50 training epochs increased, the bulk of these frequent errors vanished. For the EfficientNet model, there are two frequent fault samples ("0016L" and "0064L") that contain multiple diseases (glaucoma, cataract, iridotomy, posterior synechia, bloody eyes, iris pigment), causing big changes in the eye's tissues. Eye disease has less of an impact on iris recognition than it does on iris segmentation. Some cases of eye blindness are easily recognized. The eye conditions in which the iris is covered, or its structure is altered completely or partially, have the most significant impact on iris recognition. To show the importance of the current research, Table 10 includes a detailed comparison between the current study and previous ones.
Table 9: Frequent faults of the GoogleNet, ResNet50, and EfficientNet models trained using the Warsaw BioBase V2 dataset. In the table, Sumr is the total number of true and wrong samples along the horizontal axis, Sumc is the total number of true and wrong samples along the vertical axis, and TP is the number of true positives; the frequently false-detected samples are emphasized.
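For reference, the per-class rates reported throughout the tables can be derived from a confusion matrix using the Sumr/Sumc/TP notation of the table note above. The sketch below continues the earlier MATLAB sketches; the macro-averaging over classes is an assumption about how the aggregate percentages are computed.

```matlab
% Sketch: deriving TPR, FNR, PPR, and FDR from a confusion matrix.
augTest = augmentedImageDatastore([224 224], imdsTest, ...
    'ColorPreprocessing', 'gray2rgb');
pred = classify(netWarsaw, augTest);          % predicted labels
C = confusionmat(imdsTest.Labels, pred);      % rows = true classes

TP   = diag(C);                % true positives per class
Sumr = sum(C, 2);              % row sums: all samples of each true class
Sumc = sum(C, 1)';             % column sums: samples assigned to a class

TPR = mean(TP ./ Sumr) * 100;          % true positive rate (recall)
FNR = 100 - TPR;                       % false negative rate
PPR = mean(TP ./ max(Sumc, 1)) * 100;  % positive predictive rate (precision)
FDR = 100 - PPR;                       % false discovery rate
ACC = sum(TP) / sum(C(:)) * 100;       % overall accuracy
```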
Table 10: Comparison of the present study with the relevant literature.

- Current research. Method: GoogleNet, ResNet50. Datasets: Warsaw BioBase V1 (684 images), Warsaw BioBase V2 (1,793 images), CASIA-V3.0 (2,639 images). Dataset challenge: eye disorders with wide-ranging noise factors, such as images from different sessions, before and after treatment, illumination conditions, occlusion, eyelid and eyelash noise, and highly dilated pupils. Performance: for Warsaw BioBase V1, ResNet50 ACC = 98.5 %, GoogleNet ACC = 94.26 %, ResNet50 ROC (AUC) = 0.99, GoogleNet ROC (AUC) = 0.98; for Warsaw BioBase V2, ResNet50 = 97.26 %, GoogleNet = 93.15 %, ResNet50 ROC (AUC) = 0.97, GoogleNet ROC (AUC) = 0.95. Segmentation: EER (V1) = 0.793 %, EER (V2) = 3 %.
- Trokielewicz et al. 2015 [8]. Method: MIRLIN, VeriEye and Biom-IrisSDK. Dataset: Warsaw BioBase V2 subset of 1,353 images of 219 individuals. Challenge: eye disorders. Performance: Obstructions with MIRLIN (FTR) = 18.36 %; Obstructions with OSIRIS (FTR) = 8.21 %; Geometry with VeriEye (FTR) = 5.13 %. The low performance is attributed to segmentation errors.
- Roizenblatt et al. 2004 [10]. Method: Hamming distance. Dataset: 55 images of different eyes. Challenge: cataract (surgery challenges). Performance: there were six cases of unsuccessful recognition.
- Minaee and Abdolrashidi 2019 [13]. Method: ResNet50 (pretrained model). Dataset: The Indian Institute of Technology in Delhi, 2,240 iris images of 224 individuals. Challenge: some samples differed in size and color distribution, but no challenges were mentioned. Performance: ACC = 95.5 %; they utilized the raw images without a segmentation step but used a saliency map to locate the iris ROI; additionally, few samples were used for testing.
- Trokielewicz et al. 2017 [14]. Method: MIRLIN, OSIRIS, VeriEye and IriCore. Dataset: 1,353 photos of 219 individuals, excluding eleven distinct irises. Challenge: eye disorders. Performance: ocular obstruction diseases cause the majority of iris recognition deterioration; performance decreases due to segmentation faults.
- Jia et al. 2022 [19]. Method: ConvNet with the masking approach. Datasets: ND-IRIS-0405, CASIA-IrisV4-Lamp, and CASIA-IrisV4-Thousand. Challenge: noise (iris recognition in less restrictive environments). Performance: CASIA Thousand (FAR) = 10.41 %; CASIA Lamp (FAR) = 5.8 %; ND-IRIS-0405 (FAR) = 5.49 %.
5. Conclusion

Concerning ocular diseases, a new iris recognition system based on deep learning is presented. Two deep learning models (ResNet50 and GoogleNet) are used for the recognition phase using the transfer learning methodology. Two different datasets are used in the experiments. The Warsaw BioBase V1 dataset, which includes 684 iris images with various eye diseases, is the first dataset. The second dataset is the Warsaw BioBase V2, which includes 1,793 iris images with more complicated eye diseases and a higher number of images. The Warsaw BioBase V1 and V2 image acquisition processes consider two or three sessions.

Various training scenarios are used in the experiments. Different deep learning models, various splitting criteria, and transfer learning are all considered in the suggested scenarios. Many evaluation metrics are used to assess the performance of the resulting models, including TPR, FNR, PPR, FDR, training accuracy, training time, validation accuracy, test accuracy, AUC, and EER. Besides that, an analysis of how eye conditions impact the performance of iris segmentation and recognition is introduced and discussed. Results showed that eye diseases could sometimes significantly impact iris segmentation, especially in situations with a combination of diseases, pupil issues, some retinal detachments, blindness, and bloodshot eyes. The results also show that most eye conditions, like glaucoma, cataracts, blurry vision, and some lens and corneal problems, do not affect iris segmentation when they exist separately.

The most significant influence on iris recognition comes from eye conditions where the iris is entirely or partly covered, or its structure is altered. In addition, the results confirm that most of the special eye cases can be accommodated by iris recognition systems with no major issues. The results also show that some eye conditions may reduce the ability to recognize the iris and must be ruled out or treated before being used in biometric systems.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This paper summarizes a portion of Abbadullah .H Saleh's M.Sc. thesis research at Karabuk University in 2022, supervised by Dr. Oğuzhan Menemencioğlu.

References

[1] V. Kakkad, M. Patel, M. Shah, Biometric authentication and image encryption for image security in cloud framework, Multiscale and Multidisciplinary Modeling, Experiments and Design 2 (4) (2019) 233–248.
[2] J.G. Ravin, Iris recognition technology (or, musings while going through airport security), Ophthalmology 123 (10) (2016) 2054–2055.
[3] S. Rajarajan, S. Palanivel, K.R. Sekar, S. Arunkumar, Study on the diseases and deformities causing false rejections for fingerprint authentication, Int. J. Pure Appl. Math. 119 (15) (2018) 443–453.
[4] Y. Moses, Y. Adini, S. Ullman, Face recognition: the problem of compensating for changes in illumination direction, Lecture Notes in Computer Science 800 (1994) 286–296.
[5] X. Xie, K.-M. Lam, Face recognition under varying illumination based on a 2D face shape model, Pattern Recogn. 38 (2) (2005) 221–230.
[6] F. Alonso-Fernandez, J. Bigun, Quality factors affecting iris segmentation and matching, in: Proceedings - 2013 International Conference on Biometrics, ICB 2013, 2013.
[7] M. Trokielewicz, A. Czajka, P. Maciejewicz, Cataract influence on iris recognition performance, in: Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 9290 (2014) 929020.
[8] M. Trokielewicz, A. Czajka, P. Maciejewicz, Assessment of iris recognition reliability for eyes affected by ocular pathologies, in: 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems, BTAS 2015, 2015.
[9] M. Trokielewicz, A. Czajka, P. Maciejewicz, Database of iris images acquired in the presence of ocular pathologies and assessment of iris recognition reliability for disease-affected eyes, in: Proceedings - 2015 IEEE 2nd International Conference on Cybernetics, CYBCONF 2015, 2015, pp. 495–500.
[10] R. Roizenblatt, P. Schor, F. Dante, J. Roizenblatt, R. Belfort, Iris recognition as a biometric method after cataract surgery, Biomed. Eng. Online 3 (2004) 1–7.
[11] B. Pierscionek, S. Crawford, B. Scotney, Iris recognition and ocular biometrics - the salient features, in: Proceedings - IMVIP 2008, 2008 International Machine Vision and Image Processing Conference, 2008, pp. 170–175.
[12] T.M. Aslam, Z.T. Shi, B. Dhillon, Iris recognition in the presence of ocular disease, J. R. Soc. Interface 6 (34) (2009) 489–493.
[13] S. Minaee, A. Abdolrashidi, DeepIris: iris recognition using a deep learning approach, arXiv:1907.09380 (2019).
[14] M. Trokielewicz, A. Czajka, P. Maciejewicz, Implications of ocular pathologies for iris recognition reliability, Image Vis. Comput. 58 (2017) 158–167.
[15] S. Rajpal, D. Sadhya, K. De, P.P. Roy, B. Raman, EAI-Net: effective and accurate iris segmentation network, in: Pattern Recognition and Machine Intelligence: 8th International Conference, PReMI 2019, Tezpur, India, December 17-20, 2019, Proceedings, Part I, Springer International Publishing, 2019, pp. 442–451.
[16] D. Sadhya, K. De, B. Raman, P.P. Roy, Efficient extraction of consistent bit locations from binarized iris features, Expert Syst. Appl. 140 (2020) 112884.
[17] L. Shi, C. Wang, F. Tian, H. Jia, An integrated neural network model for pupil detection and tracking, Soft. Comput. 25 (15) (2021) 10117–10127.
[18] R. Francese, M. Frasca, M. Risi, Are IoBT services accessible to everyone? Pattern Recogn. Lett. 147 (2021) 71–77.
[19] L. Jia, X. Shi, Q. Sun, X. Tang, P. Li, Second-order convolutional networks for iris recognition, Appl. Intell. 52 (10) (2022) 11273–11287.
[20] J. Hu, L. Wang, Z. Luo, Y. Wang, Z. Sun, A large-scale database for less cooperative iris recognition, in: 2021 IEEE International Joint Conference on Biometrics, IJCB 2021, 2021, pp. 1–6.
[21] A. Soni, T. Patidar, M.R. Kumar, K.P. Bharath, S. Balaji, R. Rajendran, Iris recognition using Hough transform and neural architecture search network, in: 3rd IEEE International Virtual Conference on Innovations in Power and Advanced Computing Technologies, I-PACT 2021, 2021, pp. 1–5.
[22] K. Devi, An effective feature extraction approach for iris recognition system, Indian J. Sci. Technol. 9 (1) (2016) 1–5.
[23] K.B. Shah, On human iris recognition for biometric identification based on various convolution neural networks, Ph.D. thesis, Gujarat Technological University, 2022.
[24] Biometrics and Machine Learning Group, Warsaw-BioBase-Disease-Iris v1.0, Warsaw University of Technology, 2015.
[25] Biometrics and Machine Learning Group, Warsaw-BioBase-Disease-Iris v2.1, Warsaw University of Technology, 2015.
[26] A.M. Mayya, M.M. Saii, Iris recognition based on weighting selection and fusion fuzzy model of iris features to improve recognition rate, Int. J. Inform. Res. Rev. 3 (2016) 2664–2680.
[27] S.A. Naji, R. Tornai, J.H. Lafta, H.L. Hussein, Iris recognition using localized Zernike features with partial iris pattern, Commun. Computer Inform. Sci. 1183 CCIS (2020) 219–232.
[28] Y.H. Li, W.R. Putri, M.S. Aslam, C.C. Chang, Robust iris segmentation algorithm in non-cooperative environments using interleaved residual U-Net, Sensors 21 (4) (2021) 1–21.
[29] M. Trokielewicz, A. Czajka, P. Maciejewicz, Post-mortem iris recognition with deep-learning-based image segmentation, Image Vis. Comput. 94 (2020) 103866.
[30] R. Nachar, E. Inaty, An effective segmentation method for iris recognition based on fuzzy logic using visible feature points, Multimed. Tools Appl. 81 (7) (2022) 9803–9828.
[31] A.H. Saleh, O. Menemencioğlu, A dynamic circular Hough transform based iris segmentation, in: Emerging Trends in Intelligent Systems & Network Security, Springer International Publishing, Cham, 2022, pp. 9–20.
[32] F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong, Q. He, A comprehensive survey on transfer learning, Proc. IEEE 109 (1) (2021) 43–76.
[33] M.A. Nirgude, S.R. Gengaje, Iris recognition system based on convolutional neural network, Springer, Singapore, 2022.
[34] X. Yin, X. Yu, K. Sohn, X. Liu, M. Chandraker, Feature transfer learning for face recognition with under-represented data, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019, pp. 5697–5706.
[35] C.X. Ren, D.Q. Dai, K.K. Huang, Z.R. Lai, Transfer learning of structured representation for face recognition, IEEE Trans. Image Process. 23 (12) (2014) 5440–5454.
[36] K.O. Mohammed Aarif, S. Poruran, OCR-Nets: variants of pre-trained CNN for Urdu handwritten character recognition via transfer learning, Procedia Comput. Sci. 171 (2020) 2294–2301.
[37] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, Going deeper with convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, Massachusetts, 2015, pp. 1–9.
[38] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770–778.
[39] MathWorks, googlenet, available: https://www.mathworks.com/help/nnet/ref/googlenet.html (2021).
[40] MathWorks, resnet50, available: https://www.mathworks.com/help/deeplearning/ref/resnet50.html (2021).
[41] MathWorks, Assess classifier performance, available: https://www.mathworks.com/help/stats/assess-classifier-performance.html (2020).
[42] CASIA-IrisV3 Interval, Chinese Academy of Sciences, http://www.cbsr.ia.ac.cn/english/IrisDatabase.asp (accessed 1 December 2020).