=Paper=
{{Paper
|id=None
|storemode=property
|title=Iris Quality in an Operational Context
|pdfUrl=https://ceur-ws.org/Vol-710/paper13.pdf
|volume=Vol-710
|dblpUrl=https://dblp.org/rec/conf/maics/DoyleF11
}}
==Iris Quality in an Operational Context==
James S. Doyle, Jr. and Patrick J. Flynn
Department of Computer Science and Engineering
University of Notre Dame
{jdoyle6, flynn}@nd.edu
Abstract

The accuracy of an iris biometrics system increases with the quality of the sample used for identification. All current iris biometrics systems capture data streams that must be processed to identify a single, ideal image to be used for identification. Many metrics exist to evaluate the quality of an iris image. This paper introduces a method for determining the ideal iris image from a set of iris images by using an iris-matching algorithm in a feedback loop to examine the set of true matches. This proposed method is shown to outperform other methods currently used for selecting an ideal image from a set of iris images.

Introduction

Biometrics is the use of one or more intrinsic characteristics to identify one person to the exclusion of others. There are many characteristics that can be used as biometrics, individually or combined in some manner to produce a hybrid biometric. Identification using the ear (Yan & Bowyer 2007), gait (L. Wang et al. 2003), and even body odor (Korotkaya) have been studied alongside more traditional biometrics such as face (Bowyer, Chang, & Flynn 2006), fingerprint (Chapel 1971), and iris (Bowyer, Hollingsworth, & Flynn 2008). The human iris is considered to be a mature biometric, with many real-world systems (LG 2010; Daugman & Malhas 2004; UK Border Agency 2010; Life Magazine 2010; Welte 2010) already deployed. The system designed by John Daugman (Daugman 2002) was documented to achieve a false match rate of zero percent in a particular application. In larger deployments there is still room for improvement.

Iris biometric systems can be used for identification purposes, in which no claim of identity is presented, or for verification purposes, where a sample and a claim of identity are supplied for the system to verify. For positive identification, both methods require that the subject be previously enrolled in the system, making him or her part of the gallery. Another sample, known as the probe, is taken at the time of the identification attempt and compared to gallery samples. If the new probe sample closely resembles one of the gallery samples, the system reports a positive ID. If no gallery sample matches the probe sample closely enough, the system does not report a match. For verification, an identity claim is also presented along with a probe sample, allowing the system to consider only samples with the requested ID. If these probe-gallery comparisons meet the threshold set for the system, then the ID claim is verified.

One method that can increase the correct recognition rate of iris biometric systems is to select the highest-quality sample of a subject to be used as the representative in the gallery. Some systems have an enrollment process that takes multiple samples and then uses the highest-quality sample, based on certain criteria or quality metrics, for the gallery. Samples may also be enhanced after acquisition via contrast stretching, histogram normalization, image de-blurring, or other methods.

The Iridian LG EOU 2200 used to capture biometric samples at the University of Notre Dame is capable of outputting video as well as still images when enrolling subjects. Multiple methods have been applied to the iris videos recorded with this camera in order to extract the frame of highest quality for enrollment. For instance, the IrisBEE (Liu, Bowyer, & Flynn 2005) software, discussed in Section 4, uses sharpness to determine the focus of a frame. It computes a focus score that can be used to rank the frames and accordingly pick the sharpest frame. It will be shown later that this approach is not optimal.

This paper suggests a new method for quality-based ranking of a set of iris biometrics samples, shows that the new method outperforms the previous quality metric, and offers possible optimizations to reduce the run-time complexity of determining the sample ranking.

Related Work

The National Institute of Standards and Technology (Tabassi, Grother, & Salamon 2010) has tentatively defined the quality of an iris image over 12 dimensions, including contrast, scale, orientation, motion blur, and sharpness. However, the definitions of these quality metrics, as well as their implementations, are still undergoing debate.

Kalka et al. (Kalka et al. 2006) have examined multiple quality metrics for iris images, including defocus blur, motion blur, and off-angle gaze, among others. Using Dempster-Shafer theory, a fused score was calculated for all tested quality metrics. This fused score was taken as the overall quality score for an image. Using a cutoff value, it was shown that the performance of a matching system was improved when only images with scores above the cutoff were included in the experiment.
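The cutoff-based selection described above can be sketched in a few lines; the fused scores and the cutoff value here are hypothetical placeholders for illustration, not values from Kalka et al.

```python
# Quality-gated matching: admit only samples whose fused quality score
# clears a cutoff. Scores and the 0.50 cutoff are illustrative only.

def filter_by_quality(samples, cutoff):
    """Keep ids of (id, fused_score) pairs whose score is at or above cutoff."""
    return [sample_id for sample_id, score in samples if score >= cutoff]

samples = [("img_a", 0.91), ("img_b", 0.42), ("img_c", 0.77)]
print(filter_by_quality(samples, cutoff=0.50))  # ['img_a', 'img_c']
```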
Kalka et al. (Kalka et al. 2010) have extended their work
to a fully-automated program to evaluate iris quality. The
program creates a single, fused score, based on multiple cri-
teria. Using this program to remove poor-quality samples
from multiple datasets, they were able to demonstrate an im-
provement in recognition performance.
Kang and Park (Kang & Park 2005) propose a method of restoring images of poor quality due to defocus blur. The experimental method discussed in the paper showed a decrease in Hamming distances when the original image was compared to the restored image versus when the original image was compared to an artificially defocused image.

Figure 1: The LG 2200 iris camera (a) outputs one NTSC video data stream that is split and amplified using a powered splitter. The NTSC data stream is recorded by a DVR (b). This method is used to capture iris videos (c). The IrisAccess software (d) running on a workstation monitors the NTSC signal. This method is used to capture still images (e).

Experimental Setup

Data Acquisition

The method proposed in this paper is general, and able to be applied to any set of iris samples from the same subject. Iris videos were used in this paper as a means of capturing large amounts of same-subject samples.
All iris samples were captured using the Iridian LG EOU
2200 video camera. The LG 2200 uses three near-infrared
LEDs to illuminate the eye during acquisition: one above
the eye, one to the bottom left of the eye and one to the
bottom right of the eye. The LG 2200 uses only one illu-
minant at a time to reduce spectral highlights. The proprietary IrisAccess software running on a workstation selects one candidate frame under each lighting scenario, one of which would be enrolled. The LG 2200 also outputs an
NTSC video stream, which is recorded and encoded into a
high-bit-rate MPEG-4 video. All videos captured from this
device were interlaced and recorded at a constant resolution
of 720x480 pixels. Figure 1 depicts the details of the iris
video acquisition setup.
All videos were captured in the same windowless indoor
lab under consistent lighting conditions. Subjects were su-
pervised during acquisition to ensure proper acquisition pro-
cedure was followed. Each video was captured while the
subject was being enrolled by the IrisAccess software.
Data Manipulation
The conversion of the raw data stream to video has a notice-
able effect on the quality of the images. Still frames from
the videos are stretched slightly along the X-axis due to dig-
itizer timing. This was corrected by shrinking the images
5% in the X direction before experimentation began. Images recorded at 720x480 became 684x480. Additionally, a contrast-stretching algorithm was applied to all the images such that 1% of the output pixels were white and 1% were black. These two steps were helpful in improving the quality of the input data set. Figure 2 shows an example frame in its original, contrast-stretched, and resized states.

Figure 2: The three image types discussed in the paper are shown, with white circles indicating the segmentation and yellow regions representing spectral highlight removal and eyelash/eyelid masking. Image (a) is from video 05707d173 in its unmodified state. Image (b) is the contrast stretched version of (a). Image (c) is (b) scaled down by 5% in the X direction. Image (d) shows a close-up of image (a), demonstrating the segmentation error and necessity for resizing. Image (e) shows the same region after resizing.

Data Selection

To evaluate the performance of quality metrics for iris biometrics in an operational environment, 1000 test videos were
chosen from 11751 videos acquired from 2007-2009 and
stored in the BXGRID biometrics database at University of
Notre Dame (Bui et al. 2009). This video set included five
separate videos for each of 200 subject irises. Because the time required to capture ideal frames illuminated by each light source was variable, the video lengths were not constant. The average video length was 620 frames, with a minimum of 148 frames and a maximum of 2727 frames. No effort was made to screen the data set to eliminate videos of especially poor quality, to keep the test system as close to real-world as possible.

Biometrics Software Packages

IrisBEE

IrisBEE, a modified version of the system originally developed by Masek (Masek 2003), modified by Liu (Liu, Bowyer, & Flynn 2006), released by NIST as part of the ICE challenge dataset, and further modified by Peters (Peters, Bowyer, & Flynn 2009), was used to identify iris boundaries as well as to perform matching experiments. IrisBEE contains three executables: one for segmenting iris images and producing iris codes, one for performing matching experiments, and one for calculating rudimentary quality scores based on image sharpness.

The IrisBEE IrisPreprocessor (Liu, Bowyer, & Flynn 2005) uses computer vision algorithms to detect an iris in an image. A Canny edge detector and Hough transform are used to identify the iris-pupil and iris-sclera boundaries. Active contours (Daugman 2007) are used to further refine the region borders. Two non-concentric circles are fitted to the contours to represent these two boundaries. The iris region formed by these circles is transformed into a rectangle through a polar-Cartesian conversion. Each row of the unrolled image is convolved with a one-dimensional log-Gabor filter. The complex filter response forms the iris code used for matching. A fragile bit mask (Hollingsworth, Bowyer, & Flynn 2007) is applied to allow the more stable regions of the iris code to be used in comparisons. Masking fragile bits improves the match rate, allowing for better match results than when comparing iris codes without masking fragile bits.

IrisBEE also supplies an executable for matching a gallery list of iris codes to a probe list of iris codes. The IrisMatcher outputs a fractional Hamming distance, using formula (1), for each individual comparison, as well as the number of bits that were compared between the two codes, which is useful in normalization (Daugman 2007). All matching results from every experiment were normalized using equation (2) (Daugman 2007). Normalized fractional Hamming distances are referred to as "matching scores" throughout the rest of this paper.

HDraw = |(codeA ⊗ codeB) ∩ maskA ∩ maskB| / |maskA ∩ maskB|   (1)

HDnorm = 0.5 − (0.5 − HDraw) · sqrt(n / 900)   (2)

For the purposes of predicting the performance of a certain image in the IrisMatcher, IrisBEE provides a QualityModule executable that can rate images based on the sharpness of the whole image, or of just the iris region if it has been defined. The filter used by the QualityModule is described by Kang and Park (Kang & Park 2007). The sum of the response at each pixel of the input image is used as the image's score. The higher the score, the better that image's rank.

Neurotechnology VeriEye

A commercially available biometrics package, Neurotechnology VeriEye (version 2.2), was also used for segmenting iris images and matching iris templates. Since VeriEye is a proprietary software package, details about the segmentation and matching algorithms are not available. The VeriEye matching algorithm reports match scores from 0 to 3235, with higher scores indicating better matches. If the VeriEye matcher determines a pair of templates to be of different irises based on a threshold, it reports a match score of 0. For all experiments discussed here, this threshold was disabled to capture raw match scores, unlike the Hamming distance scores reported by IrisBEE. Input to the VeriEye matcher is order dependent: different match scores can be observed depending on the order of gallery and probe. For this paper, only one matching score was considered, with the older of the two images being the gallery image and the newer of the two being the probe image.

Smart Sensors MIRLIN

MIRLIN, another closed-source biometrics package, was used to segment iris images and match iris templates, as well as to rate images based on four common quality metrics. Since MIRLIN is proprietary, specific details about its segmentation and matching algorithms, as well as its quality metrics, are not available. MIRLIN does provide matching scores as Hamming distances, but does not supply the number of bits used in the comparison, making normalization impossible. As a result, matching scores from MIRLIN cannot be directly compared to those produced by IrisBEE. Matching scores are also symmetrical, so comparison order is not important. The four quality metrics discussed in this work are contrast, saturation, sharpness, and signal-to-noise ratio. MIRLIN also reports the average graylevel and occlusion percentage, but these quality metrics were not useful in classifying images since they had a very small range.

Quality Metric Experiments

Multiple quality metrics were considered: the IrisBEE QualityModule, the MIRLIN quality metrics, and the method proposed here, evaluated using IrisBEE, VeriEye, and MIRLIN. Each separate quality metric was evaluated in a similar manner so that results could be compared experimentally across the metrics.

IrisBEE Quality Module

The IrisBEE QualityModule is the current metric used at the University of Notre Dame to determine an ideal image or subset of images to be used in matching experiments from
an input set. Since the QualityModule processes individual
images, the 1000 subject videos were split into individual
images. The images were then segmented using the IrisPre-
processor to identify the iris region of each image. Images
that failed to segment were not included in the matching ex-
periment. After segmentation, every frame f of a single
video was given a quality score fs by the QualityModule,
higher scores indicating higher quality images. The frame
scores were then sorted from highest to lowest such that
fs[i] ≥ fs[i+1] ∀ i.
To test whether this method is predictive of performance,
the entire range of scores must be included in an all-vs-all
matching. Due to the scale of this experiment, a subset of
nine images was chosen from each video: the top-ranked
image, the bottom-ranked image and seven equally spaced
images in between. Selecting images in this manner con-
trolled for the inconsistent video length of the data set. This
reduced dataset was used in an all-vs-all matching experi-
ment using the IrisMatcher. Receiver Operating Character-
istic (ROC) curves were then created for each octile. These
ROC curves can be found in Figure 3 and Figure 4.
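The nine-image subsampling just described (top rank, bottom rank, and seven equally spaced ranks between them) can be sketched as follows; the helper name is ours, not IrisBEE's.

```python
def octile_rank_indices(num_frames):
    """Ranks of the nine selected frames: rank 0 (best), rank num_frames-1
    (worst), and the seven octile boundaries in between."""
    last = num_frames - 1
    return [i * last // 8 for i in range(9)]

# A 1390-frame video yields nine ranks spread evenly across the ordering,
# which controls for the inconsistent video lengths in the data set.
ranks = octile_rank_indices(1390)
```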
With the exception of the top-ranked images, recognition
performance was monotonically decreasing as QualityMod-
ule rank increased. The top-ranked images chosen by the
QualityModule do not perform well in the matching experiment. As the QualityModule will recommend images with high sharpness, images with eyelid/eyelash occlusion will have artificially high scores. This causes some poor quality images to be highly ranked. Figure 5 shows a top-ranked frame with high occlusion and a lower-ranked frame more ideal for matching experiments, illustrating the drawbacks of reliance on the IrisBEE metric and other metrics that estimate image quality only.

Figure 3: IrisBEE QualityModule experiment results as ROC curves, at select rank octiles. Normalized video length represented by n.
MIRLIN Quality Metrics
The 1000 subject videos were split into individual im-
ages and segmented using MIRLIN to identify the iris re-
gion of each image. Images that failed to segment were
not included in the matching experiment. After segmenta-
tion, every frame f of a single video was given four qual-
ity scores fcontrast , fsaturation , fsharpness , and fsnr , by
MIRLIN. Four rankings were determined for each video:
fcontrast[i] ≥ fcontrast[i+1] ∀ i, fsaturation[i] ≤ fsaturation[i+1] ∀ i, fsharpness[i] ≥ fsharpness[i+1] ∀ i, and fsnr[i] ≤ fsnr[i+1] ∀ i.
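Those four orderings differ only in sort direction; a sketch with made-up frame scores (MIRLIN's actual output format is proprietary):

```python
# Rank frames separately under each MIRLIN quality metric. Per the
# orderings in the text, contrast and sharpness rank best-first in
# descending order, while saturation and SNR rank best-first ascending.

def rank_frames(frames, metric, descending):
    """Frame indices ordered best-to-worst under one quality metric."""
    order = sorted(range(len(frames)), key=lambda i: frames[i][metric])
    return order[::-1] if descending else order

frames = [
    {"contrast": 0.8, "saturation": 0.1, "sharpness": 0.6, "snr": 0.2},
    {"contrast": 0.5, "saturation": 0.3, "sharpness": 0.9, "snr": 0.4},
]
rankings = {
    "contrast":   rank_frames(frames, "contrast",   descending=True),
    "saturation": rank_frames(frames, "saturation", descending=False),
    "sharpness":  rank_frames(frames, "sharpness",  descending=True),
    "snr":        rank_frames(frames, "snr",        descending=False),
}
```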
The same experimental setup as was used in the IrisBEE
Quality Module experiment was used here. The same phe-
nomenon was noticed with all four of the MIRLIN quality
metrics studied. In all cases, the Rank 0 frames were out-
performed by the Rank n/8 frames. ROC curves for these
experiments can be found in Figure 6.
Figure 4: IrisBEE QualityModule experiment results as ROC curves, for selected octiles. Normalized video length represented by n.

Figure 5: Sample images ranked by the QualityModule, illustrating non-intuitive quality scores from video 05697d222. Image (a) was the highest ranked image by the QualityModule. Image (b) was the 50th ranked frame of 1390 images in the same video.

Figure 7: An iris video (a) is broken up into individual frames (b) and processed by the segmenter (c) to identify and mask out occluded regions of the iris. These iris segments (d) are then used as input into the matcher (e), which computes an all-pairs matching matrix. After some processing of the matcher results, an optimal frame can be determined (f).

IrisBEE IrisMatcher

Since the goal of this research is to find the image or set of images that performs best in matching experiments to represent a subject in a gallery, we investigated the use of the IrisMatcher itself to rate individual frames. To harness the IrisMatcher to pick an ideal representative sample, all frames of
the video are stored as images. The IrisBEE IrisPreproces-
sor was used to segment all images and to produce template
files for matching. The IrisMatcher performs an all-vs-all
matching of the input images for a single video, produc-
ing a fractional Hamming distance (1) for each unique pair
of images. An average matching score fs per frame f is
found by averaging all matching scores resulting from com-
parisons involving that image. Since each iris video used
in this experiment contained only one subject, all compar-
isons in this step were true matches. A separate matching is
performed for each video. The average matching scores for
each frame of a video are sorted lowest-to-highest such that
fs [i] ≤ fs [i + 1] ∀ i, since low matching scores denote more
similar items or better matches. This process is illustrated in
Figure 7.
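Under the definitions of equations (1) and (2), the per-frame averaging and ascending sort can be sketched in plain Python; the bit-list iris codes and masks below are toy stand-ins for real templates, and the 900-bit constant follows equation (2).

```python
import itertools
import math

def hd_raw(code_a, mask_a, code_b, mask_b):
    """Equation (1): fraction of disagreeing bits among positions that
    are unmasked in both templates."""
    valid = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    disagreements = sum(code_a[i] != code_b[i] for i in valid)
    return disagreements / len(valid)

def hd_norm(raw, bits_compared):
    """Equation (2): rescale a raw score by the number of bits compared."""
    return 0.5 - (0.5 - raw) * math.sqrt(bits_compared / 900)

def rank_frames_by_self_matching(codes, masks):
    """All-vs-all matching within one subject's video: average each frame's
    normalized score over all its comparisons, then sort ascending, since
    a lower Hamming distance means a better match."""
    n = len(codes)
    totals = [0.0] * n
    for i, j in itertools.combinations(range(n), 2):
        bits = sum(a and b for a, b in zip(masks[i], masks[j]))
        score = hd_norm(hd_raw(codes[i], masks[i], codes[j], masks[j]), bits)
        totals[i] += score
        totals[j] += score
    averages = [t / (n - 1) for t in totals]
    return sorted(range(n), key=averages.__getitem__)

# Three toy 8-bit "iris codes" with no masked bits: frame 2 disagrees with
# the other two almost everywhere, so it ranks last.
codes = [[0] * 8, [0] * 7 + [1], [1] * 8]
masks = [[1] * 8] * 3
print(rank_frames_by_self_matching(codes, masks))  # [1, 0, 2]
```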
Please refer to Section 5.1 for the frame selection method.
ROC curves can be found in Figure 8 and Figure 9.
With no exceptions, recognition performance was mono-
tonically decreasing as IrisBEE IrisMatcher rank increased.
The amount of separation between ranks in the higher-
ranked half of the set was orders of magnitude smaller than
the separation of the lower-ranked half.
Neurotechnology VeriEye
The IrisBEE IrisMatcher experiment was repeated using the
Neurotechnology VeriEye package. All images were seg-
mented and templates were generated using the VeriEye
segmenter. The VeriEye matcher performed an all-vs-all
matching of the input images for a single video, producing a matching score between 0 and 3235 for each unique pair of images.

Figure 6: MIRLIN Quality Metrics experiment results as ROC curves, at select rank octiles. Normalized video length represented by n.
Please refer to Section 5.1 for the frame selection method.
ROC curves can be found in Figure 10 and Figure 11.
With no exceptions, recognition performance was monotonically decreasing as VeriEye matcher rank increased. The amount of separation between ranks in the higher-ranked half of the set was orders of magnitude smaller than the separation of the lower-ranked half.

Figure 10: VeriEye matcher experiment results as ROC curves, select octiles. Normalized video length represented by n.

Figure 8: IrisBEE IrisMatcher experiment results as ROC curves, at selected octiles. Normalized video length represented by n.
Smart Sensors MIRLIN
The experiment was again repeated using the MIRLIN pack-
age. All images were segmented and templates were gen-
erated using the MIRLIN -get command. The MIRLIN matcher performed an all-vs-all matching of the input images for a single video, using the MIRLIN -compare command, producing a matching score in the range [0, 1] for each unique pair of images.
Please refer to Section 5.1 for the frame selection method.
ROC curves can be found in Figure 12 and Figure 13.
As was the case with IrisBEE and VeriEye, with no excep-
tions, recognition performance was monotonically decreas-
ing as MIRLIN matcher rank increased. The amount of sep-
aration between ranks in the higher-ranked half of the set
was orders of magnitude smaller than the separation of the
lower-ranked half.
Quality Metric Comparison
For all experiments, there is an ordering, with higher ranked
frames performing better in all cases except for the top frame
reported by the IrisBEE and MIRLIN quality metrics. Poor
performance of the top-ranked frame can be explained by
the mechanism by which these quality metrics rank images.
Figure 9: IrisBEE IrisMatcher experiment results as ROC curves, at selected octiles. Normalized video length represented by n.

Figure 13: MIRLIN matcher experiment results as ROC curves, top octiles. Normalized video length represented by n.

Figure 11: VeriEye matcher experiment results as ROC curves, top octiles. Normalized video length represented by n.

Since the quality metrics use image analysis techniques on the iris texture to rate a frame, the rating can be heavily influenced by eyelashes or spectral highlights that were not properly masked, or other artifacts present in an image. These artifacts in turn produce noisy samples as they are blocking
parts of the iris texture from being compared, artificially
skewing the match score higher than it should be for a known
match (Bowyer, Hollingsworth, & Flynn 2008). However,
even the best images from these quality metrics did not per-
form as well as the self-matching experiments. The self-
matching image selection process can eliminate samples
with these artifacts from being used in biometric compar-
isons by minimizing (or maximizing in the case of VeriEye)
the average match scores.
The ordering seen in the ROC curves indicates that the
intra-video IrisBEE, VeriEye, and MIRLIN match scores are
predictive of inter-video matching performance. Figure 14
shows the ROC curve for the top-ranked frame from each
of the metrics, as well as the n/8-ranked frames from each
of the quality metrics, as these were the highest performing
ranks for the quality metrics.
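For reference, an ROC curve of the kind compared above is produced by sweeping an acceptance threshold over genuine (same-iris) and impostor (different-iris) score distributions; a minimal sketch for Hamming-distance-style scores, where lower means a better match (the score lists are illustrative):

```python
def roc_points(genuine, impostor):
    """(false accept rate, true accept rate) pairs, one per candidate
    threshold; a comparison is accepted when its score is at or below
    the threshold (Hamming-distance convention: lower is better)."""
    thresholds = sorted(set(genuine) | set(impostor))
    points = []
    for t in thresholds:
        tar = sum(g <= t for g in genuine) / len(genuine)
        far = sum(s <= t for s in impostor) / len(impostor)
        points.append((far, tar))
    return points

# Illustrative, well-separated score distributions.
genuine = [0.28, 0.31, 0.33]
impostor = [0.45, 0.47, 0.50]
```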
Application
Data from the LG 2200 iris camera was used in this paper
because it allows video streams to be captured easily. Other
LG iris cameras, as well as iris cameras from other manufac-
turers, do not allow video information to be output from the
device. However, the use of video in this paper was merely
for convenience. The self-matching algorithm could be ap-
plied to any set of data captured by the same sensor, includ-
ing a small set of still images captured by a newer iris cam-
era.
Figure 12: MIRLIN matcher experiment results as ROC curves, select octiles. Normalized video length represented by n.

Although production versions of most iris cameras do not allow video to be captured from the device, the proprietary software that interfaces with the camera does capture video information. This method could be applied to the data stream that is processed by the proprietary software. However, as this method is somewhat time consuming, it may only be feasible to apply it during the enrollment phase. Performing this analysis for every probe would delay the response from the system by an unacceptably large amount.

Figure 14: ROC curves of top octiles from all metrics, plus QualityModule and MIRLIN Quality octile n/8, which displayed best performance.

Conclusions

It has been shown through empirical analysis that this method selects an ideal representative sample from a set of same-subject samples. This method outperforms a sharpness-based metric used currently and can be used with a commercially available system.

References

[Bowyer, Chang, & Flynn 2006] Bowyer, K. W.; Chang, K.; and Flynn, P. 2006. A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition. Computer Vision and Image Understanding 101(1):1–15.

[Bowyer, Hollingsworth, & Flynn 2008] Bowyer, K. W.; Hollingsworth, K.; and Flynn, P. J. 2008. Image understanding for iris biometrics: A survey. Computer Vision and Image Understanding 110(2):281–307.

[Bui et al. 2009] Bui, H.; Kelly, M.; Lyon, C.; Pasquier, M.; Thomas, D.; Flynn, P.; and Thain, D. 2009. Experience with BXGrid: a data repository and computing grid for biometrics research. Cluster Computing 12:373–386. 10.1007/s10586-009-0098-7.

[Chapel 1971] Chapel, C. 1971. Fingerprinting: A Manual of Identification. Coward McCann.

[Daugman & Malhas 2004] Daugman, J., and Malhas, I. 2004. Iris recognition border-crossing system in the UAE. http://www.cl.cam.ac.uk/~jgd1000/UAEdeployment.pdf.

[Daugman 2002] Daugman, J. 2002. How iris recognition works. volume 1, I-33–I-36.

[Daugman 2007] Daugman, J. 2007. New methods in iris recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 37(5):1167–1175.

[Hollingsworth, Bowyer, & Flynn 2007] Hollingsworth, K.; Bowyer, K.; and Flynn, P. 2007. All iris code bits are not created equal. 1–6.

[Kalka et al. 2006] Kalka, N. D.; Zuo, J.; Schmid, N. A.; and Cukic, B. 2006. Image quality assessment for iris biometric. volume 6202, 62020D. SPIE.

[Kalka et al. 2010] Kalka, N.; Zuo, J.; Schmid, N.; and Cukic, B. 2010. Estimating and fusing quality factors for iris biometric images. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans 40(3):509–524.

[Kang & Park 2005] Kang, B., and Park, K. 2005. A study on iris image restoration. In Kanade, T.; Jain, A.; and Ratha, N., eds., Audio- and Video-Based Biometric Person Authentication, volume 3546 of Lecture Notes in Computer Science. Springer Berlin/Heidelberg. 31–40.

[Kang & Park 2007] Kang, B. J., and Park, K. R. 2007. Real-time image restoration for iris recognition systems. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 37(6):1555–1566.

[Korotkaya] Korotkaya, Z. Biometric person authentication: Odor. Lappeenranta University of Technology.

[L. Wang et al. 2003] L. Wang et al. 2003. Silhouette analysis-based gait recognition for human identification. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(1).

[LG 2010] LG. 2010. IrisID in Action. http://irisid.com/ps/inaction/index.htm.

[Life Magazine 2010] Life Magazine. 2010. Iris identification used by parents and teachers to protect children in NJ elementary school. http://life.com/image/1965902.

[Liu, Bowyer, & Flynn 2005] Liu, X.; Bowyer, K.; and Flynn, P. 2005. Experiments with an improved iris segmentation algorithm. 118–123.

[Liu, Bowyer, & Flynn 2006] Liu, X.; Bowyer, K. W.; and Flynn, P. J. 2006. Optimizations in Iris Recognition. Ph.D. Dissertation, University of Notre Dame.

[Masek 2003] Masek, L. 2003. Recognition of human iris patterns for biometric identification. Technical report, The University of Western Australia.

[Peters, Bowyer, & Flynn 2009] Peters, T.; Bowyer, K. W.; and Flynn, P. J. 2009. Effects of segmentation routine and acquisition environment on iris recognition. Master's thesis, University of Notre Dame.

[Tabassi, Grother, & Salamon 2010] Tabassi, E.; Grother, P.; and Salamon, W. 2010. Iris quality calibration and evaluation 2010. IREX II IQCE.

[UK Border Agency 2010] UK Border Agency. 2010. Using the Iris Recognition Immigration System (IRIS). http://www.ukba.homeoffice.gov.uk/travellingtotheuk/Enteringtheuk/usingiris.

[Welte 2010] Welte, M. S. 2010. Prison system looks to iris biometrics for inmate release. http://www.securityinfowatch.com/Government+%2526+Public+Buildings/1314995.

[Yan & Bowyer 2007] Yan, P., and Bowyer, K. W. 2007. Biometric recognition using 3D ear shape. IEEE Transactions on Pattern Analysis and Machine Intelligence 29:1297–1308.