Cross-Sensor Comparison Competition 2013
The intent of the Cross-Sensor Comparison Competition is to analyze and improve the performance of iris recognition across two iris sensors from the same manufacturer.
As research continues in the field of iris biometrics, iris sensors with improved technology are developed. For users of these sensors, updates to the technology are desirable since a new sensor may provide objective performance improvements over prior generation sensors. However, when these upgrades are made, the disposition of data (especially enrollment data) collected using prior generation sensors is open to question. Should all data be discarded and new enrollment data be collected? Should this data be kept in conjunction with new enrollment data? Or are the templates generated from older data of sufficient quality to avoid re-enrollment?
The cross-sensor comparison problem attempts to explore this issue through a structured experiment with multiple data sets from multiple sensors, specifically the LG2200 and LG4000 sensors. We have assembled data sets from each sensor and are providing a baseline matching result using in-house iris recognition software. The result indicates that there is a performance drop when the sensor model differs between enrollment and probe image acquisition. The challenge is to identify algorithms that do not show a performance drop, or at least less of a drop.
The Data Sets
The data set for this competition, labeled the Cross-Sensor Dataset, is available for download at http://cse.nd.edu/~cvrl/CVRL/Data_Sets.html . It consists of 27 sessions of data with 676 unique subjects. An average session contains 160 unique subjects, each with multiple images from both the LG2200 and LG4000 iris sensors. Every subject occurs in at least two sessions across the entire data set, which spans three years, 2008 to 2010. The initial images from both sensors are 640 by 480 pixels.

There are additional images included in this data set, known as the modified LG2200 images. The original images have been stretched vertically by 5% to compensate for the non-unit aspect ratio of the digitizer used in the LG2200 computer-hosted runtime acquisition system (this elongation was suggested by Imad Malhas of IrisGuard Inc. in 2009). Hence these additional images are of size 640 by 504. Below you can see LG2200 and LG4000 images of the same subject. Visual inspection suggests that the LG4000 provides a better quality iris image. Both sensors use near infrared (NIR) illumination. The LG2200 uses three NIR LEDs at the top, bottom left, and bottom right of the camera, whereas the LG4000 uses two clusters of LEDs with varying wavelengths. LG2200 images often contain aliasing and other artifacts. Although the LG4000 images seem "cleaner", images from both cameras exhibit specular highlights, typically in the pupil.
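The 5% vertical stretch described above (taking a 640 by 480 LG2200 image to 640 by 504) can be sketched in pure Python with nearest-neighbor row resampling. The function name is ours and the resampling method is illustrative; the organizers' actual interpolation scheme is not specified here:

```python
def stretch_rows(image, factor=1.05):
    """Nearest-neighbor vertical stretch of an image stored as a list of
    rows (each row a list of pixel values). A 480-row input becomes
    round(480 * 1.05) = 504 rows; width is unchanged."""
    height = len(image)
    new_height = round(height * factor)
    # Map each output row back to the nearest source row.
    return [image[min(int(y / factor), height - 1)] for y in range(new_height)]
```

Any image library's resize with a height scale of 1.05 achieves the same result, typically with smoother interpolation.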
Further, due to lack of completion in last year's competition, the full data set will be broken down into two smaller subsets. The smallest set will contain images from only 100 subjects, drawn from all sessions so that each subject has multiple images and acquisition dates. The second subset will contain images from 300 subjects across all sessions, and will include the 100 subjects from the first subset. Baseline results will be provided for the small, medium, and full data sets. We hope that breaking the data up will enable more efficient testing of participating algorithms.
|LG 2200 - subject 02463||LG 4000 - subject 02463|
The following shows the LG2200 image from subject 02463 next to the stretched version of this image. The adjustment creates a more circular pupil and enhances the quality of the image.
|LG 2200 - subject 02463||LG 2200 (vertically stretched) - subject 02463|
The Baseline Comparison
By comparing the recognition rates obtained from same-sensor gallery and probe data sets with the recognition rate obtained from an experiment where the LG2200 is the source of the gallery data and the LG4000 is the source of probe data, one can see there is a definite degradation in the recognition rate. In particular, the newer technology performs the best, the older technology performs worse, and the worst rate occurs when we match between the two. Can the recognition between the two sensors be improved by some other algorithm or pre-processing technique?
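The recognition rates discussed above arise from thresholding match scores. As a minimal sketch (assuming lower Hamming distances indicate better matches; the function name and acceptance rule are illustrative, not the organizers' evaluation code), the false accept and false reject rates at one decision threshold can be computed as:

```python
def far_frr(match_scores, nonmatch_scores, threshold):
    """FAR and FRR at a single decision threshold, where scores are
    distances: a comparison is accepted when its score falls below the
    threshold. Sweeping the threshold across the score range traces out
    an ROC curve."""
    far = sum(s < threshold for s in nonmatch_scores) / len(nonmatch_scores)
    frr = sum(s >= threshold for s in match_scores) / len(match_scores)
    return far, frr
```

A cross-sensor performance drop shows up as a higher FRR at any fixed FAR compared with the same-sensor experiments.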
Below are the match and nonmatch score distributions and the ROC curves for the five experiments of the challenge:
LG2200 probe, LG2200 gallery ("LG2200 to LG2200")
LG2200R probe, LG2200R gallery ("LG2200R to LG2200R")
LG4000 probe, LG4000 gallery ("LG4000 to LG4000")
LG4000 probe, LG2200 gallery ("LG4000 to LG2200")
LG4000 probe, LG2200R gallery ("LG4000 to LG2200R")
These match scores were reported using the University of Notre Dame Computer Vision Research Laboratory iris matching software, IrisBEE. IrisBEE creates image templates and compares each template in a probe set to all the templates in a gallery set, with both collections defined by the user. The matcher then returns the Hamming distance of each pair of templates, which indicates the fraction of bits that disagree. These results show that the LG4000 performs the best, the resized LG2200 performs better than the initial LG2200 images but worse than the LG4000, and cross-sensor comparisons perform the worst in terms of match scores and false accept vs. false reject rate.
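As an illustration of the score described above, a fractional Hamming distance between two binary iris codes can be sketched as follows. This is a toy version, not IrisBEE's implementation; the function and the optional occlusion masks are our own naming:

```python
def fractional_hamming(code_a, code_b, mask_a=None, mask_b=None):
    """Fraction of compared bits that disagree between two binary iris
    codes. Optional mask bits (1 = usable) exclude occluded regions such
    as eyelids and specular highlights; with no masks, every bit counts."""
    n = len(code_a)
    mask_a = mask_a or [1] * n
    mask_b = mask_b or [1] * n
    valid = [i for i in range(n) if mask_a[i] and mask_b[i]]
    disagreements = sum(code_a[i] != code_b[i] for i in valid)
    return disagreements / len(valid)
```

A distance of 0 indicates identical codes, while codes from unrelated eyes tend toward a distance near 0.5.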
|Match Score Distribution||ROC Curve|
The objective of the Cross-Sensor Comparison Challenge is to establish the performance of your algorithm(s) on the Cross-Sensor Dataset. The ROC curves and match score distributions will be the basis for evaluating algorithms submitted in response to the challenge. You should include these items with your submission:
1. Match and nonmatch score distributions as histograms on a relatively fine scale, and ROC curves using 1000 points with false accept rate on the x-axis ranging from 0.00011 to 1.0. Both of these data items should be supplied in CSV files for easy report generation. The score distributions should be reported as labeled columns with the bin center locations in column 1, the match score distribution in column 2, and the nonmatch score distribution in column 3. The ROC curve should be reported with FAR in column 1 and FRR in column 2, with FAR increasing as the row number increases.
2. A document describing what you wish to reveal about your algorithm, observations concerning the dataset, and unique features of your matcher which were not well evaluated given the data. Drafts of your report will be circulated among other participants.
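The two CSV layouts required in item 1 could be produced as sketched below. File and function names are ours; any CSV writer that emits the same columns in the same order is equally acceptable:

```python
import csv

def write_distribution_csv(path, bin_centers, match_hist, nonmatch_hist):
    """Score-distribution layout: bin centers in column 1, match score
    distribution in column 2, nonmatch score distribution in column 3."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(zip(bin_centers, match_hist, nonmatch_hist))

def write_roc_csv(path, fars, frrs):
    """ROC layout: FAR in column 1, FRR in column 2, rows ordered so
    that FAR increases with row number."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(sorted(zip(fars, frrs)))
```

Sorting on FAR before writing guarantees the required monotone row order even if the points were generated from an unsorted threshold sweep.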
These items should be submitted in a zip or tar.gz file.
The evaluation results will be discussed in a report delivered at BTAS 2013. Please email Amanda Sgroi at email@example.com to register, using "Cross Sensor Comparison Competition 2013" as the subject line.
• July 22, 2013 - Submission of test results due.
• August 1, 2013 - Submission of poster containing test results to BTAS 2013.
• September 29 - October 2, 2013 - Final report delivered at BTAS 2013.
• Amanda Sgroi
• Kevin Bowyer - http://www.cse.nd.edu/~kwb/
• Patrick Flynn - http://www.cse.nd.edu/~flynn/
Department of Computer Science & Engineering
University of Notre Dame
The dataset was acquired with the support of the National Science Foundation (NSF), the Central Intelligence Agency (CIA), the Biometrics Task Force (BTF), and the Technical Support Working Group (TSWG).