Kevin W. Bowyer - Ethical and Social Implications of Technology

  • Pitfalls In Studying "Big Data" From Operational Scenarios,
    Estefan Ortiz and Kevin W. Bowyer,
    IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), September 2016.
    pdf of this paper.
    Analyzing a larger dataset is sometimes assumed, in itself, to give a greater degree of validity to the results of a study. In biometrics, analyzing an “operational” dataset is also sometimes assumed, in itself, to give a greater degree of validity. And so studying a large, operational biometric dataset may seem to guarantee valid results. However, a number of basic questions should be asked of any “found” big data, in order to avoid pitfalls of the data not being suitable for the desired analysis. We explore such issues using a large operational iris recognition dataset from the Canada Border Services Agency’s NEXUS program, similar to the dataset analyzed in the NIST IREX VI report.

  • Detecting Facial Retouching Using Supervised Deep Learning,
    Aparna Bharati, Richa Singh, Mayank Vatsa and Kevin W. Bowyer,
    IEEE Transactions on Information Forensics and Security, 2016, to appear.
    pdf of this paper.
    Digitally altering, or “retouching”, face images is a common practice for images on social media, photo sharing websites, and even identification cards when the standards are not strictly enforced. This research demonstrates the effect of digital alterations on the performance of automatic face recognition, and also introduces an algorithm to classify face images as original or retouched with high accuracy. We first introduce two face image databases with unaltered and retouched images. Face recognition experiments performed on these databases show that when a retouched image is matched with its original image or an unaltered gallery image, the identification performance is considerably degraded, with a drop in matching accuracy of up to 25%. However, when images are retouched with the same style, the matching accuracy can be misleadingly high in comparison to matching original images. To detect retouching in face images, a novel supervised deep Boltzmann machine algorithm is proposed. It uses facial parts to learn discriminative features to classify face images as original or retouched. The proposed approach for classifying images as original or retouched yields an accuracy of over 87% on the datasets introduced in this paper and over 99% on three other makeup datasets used by previous researchers. This is a substantial increase in accuracy over the previous state-of-the-art algorithm [5], which has shown less than 50% accuracy in classifying original and retouched images from the ND-IIITD Retouched Faces database.
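    The overall detection pipeline described above can be pictured as follows. This is a hypothetical Python sketch, not the authors' code: it splits an aligned face into a grid of part patches, extracts crude per-patch features, and trains a simple logistic-regression stand-in in place of the paper's supervised deep Boltzmann machine.

      # Hypothetical sketch (not the authors' code): part-based features
      # plus a binary original-vs-retouched classifier.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def part_patches(face, grid=(3, 3)):
          """Split an aligned grayscale face image into a grid of part patches."""
          h, w = face.shape
          ph, pw = h // grid[0], w // grid[1]
          return [face[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
                  for i in range(grid[0]) for j in range(grid[1])]

      def patch_features(face):
          """Crude per-patch features (mean and std of pixel intensities)."""
          feats = []
          for p in part_patches(face):
              feats.extend([p.mean(), p.std()])
          return np.array(feats)

      def train_detector(faces, labels):
          """faces: list of aligned grayscale images; labels: 0 = original, 1 = retouched."""
          X = np.stack([patch_features(f) for f in faces])
          return LogisticRegression(max_iter=1000).fit(X, labels)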

  • Exploratory Analysis of an Operational Iris Recognition Dataset from a CBSA Border-Crossing Application,
    Estefan Ortiz and Kevin W. Bowyer,
    CVPR Biometrics Workshop, June 2015.
    pdf of this paper.
    This paper presents an exploratory analysis of an iris recognition dataset from the NEXUS border-crossing program run by the Canada Border Services Agency. The distribution of the normalized Hamming distance for successful border-crossing transactions is examined in the context of various properties of the operational scenario. The effects of properties such as match score censoring and truncation, same-sensor and cross-sensor matching, sequence-dependent matching, and multiple-kiosk matching are illustrated. Implications of these properties of the operational dataset for the study of iris template aging are discussed.
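    For reference, the match score examined above is the fractional (normalized) Hamming distance standard in iris recognition: the fraction of disagreeing bits between two binary iris codes, counted only where both occlusion masks mark the bits as valid. A minimal illustrative computation (not code from the paper):

      # Illustrative fractional Hamming distance between two iris codes.
      import numpy as np

      def fractional_hamming_distance(code_a, code_b, mask_a, mask_b):
          valid = mask_a & mask_b                 # bits usable in both codes
          disagree = (code_a ^ code_b) & valid    # valid bits that differ
          return disagree.sum() / valid.sum()     # normalized Hamming distance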

  • Trial Somaliland Voting Register De-duplication Using Iris Recognition,
    Kevin W. Bowyer, Estefan Ortiz and Amanda Sgroi,
    Biometrics in the Wild Workshop 2015 (BWild 2015), Ljubljana, Slovenia, May 2015.
    pdf of this paper.
    Face and fingerprint were used in de-duplication of the voter registration list for the 2010 Somaliland presidential election. Iris recognition was evaluated as a possible more powerful means of de-duplication of the voting register for the planned 2015 elections. On a trial dataset of 1,062 registration records, all instances of duplicate registration were detected and zero non-duplicates were falsely classified as duplicates, indicating the power of iris recognition for voting register de-duplication. All but a tiny fraction of the cases were classified by automatic matching, and the remaining cases were classified by forensic iris matching. Images in this dataset reveal the existence of unusual eye conditions that consistently cause false non-match results. Examples are shown and discussed.
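    As an illustration of register de-duplication by exhaustive pairwise iris matching, the sketch below flags record pairs whose match score falls below a decision threshold as candidate duplicates for review. The threshold value and helper names are hypothetical, not those used in the trial.

      # Hypothetical sketch of de-duplication by all-pairs iris matching.
      from itertools import combinations

      def find_candidate_duplicates(records, match_score, threshold=0.32):
          """records: list of (record_id, iris_template) pairs;
          match_score: callable returning a fractional Hamming distance."""
          candidates = []
          for (id_a, tpl_a), (id_b, tpl_b) in combinations(records, 2):
              if match_score(tpl_a, tpl_b) < threshold:
                  candidates.append((id_a, id_b))   # flag pair for review
          return candidates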

  • Factors That Degrade the Match Distribution In Iris Biometrics,
    Kevin W. Bowyer, Sarah E. Baker, Amanda Hentz, Karen Hollingsworth, Tanya Peters and Patrick J. Flynn,
    Identity in the Information Society, 2 (3) 327-343, December 2009.
    DOI link. (open access)
    We consider three "accepted truths" about iris biometrics, involving pupil dilation, contact lenses and template aging. We also consider a relatively ignored issue that may arise in system interoperability. Experimental results from our laboratory demonstrate that the three accepted truths are not entirely true, and also that interoperability can involve subtle performance degradation. All four of these problems affect primarily the stability of the match, or authentic, distribution of template comparison scores rather than the non-match, or imposter, distribution of scores. In this sense, these results confirm the security of iris biometrics in an identity verification scenario. We consider how these problems affect the usability and security of iris biometrics in large-scale applications, and suggest possible remedies.

  • Face Recognition Technology and the Security Versus Privacy Tradeoff,
    Kevin W. Bowyer,
    IEEE Technology and Society Magazine, Spring 2004, 9-20.
    pdf of this paper.
    Video surveillance and face recognition systems have become the subject of increased interest and controversy after the September 11 terrorist attacks on the United States. ... This paper analyzes the interplay of technical and social issues involved in the widespread application of video surveillance for person identification.

    The above-listed paper received a 2005 Award of Excellence from the Society for Technical Communication.

  • "Star Wars" Revisited - A Continuing Case Study In Ethics and Safety-Critical Software,
    Kevin W. Bowyer,
    IEEE Technology and Society Magazine 21 (1), Spring 2002, 13-26.
    pdf of this paper.
    The Reagan-era Strategic Defense Initiative was the focus of a great deal of technical argument relating to the design and testing of safety-critical software. ... This paper describes a curriculum module developed around a Reagan-era SDI debate on the theme 'Star Wars: Can the computing requirements be met?' This module should be appropriate for use in ethics-related or software-engineering-related courses taught in undergraduate Information Systems, Information Technology, Computer Science, or Computer Engineering programs.

    The above-listed paper is highlighted on the SSIT web site as one related to ABET / CSAB accreditation requirements.

  • Ethics and Computing: Living Responsibly In a Computerized World,
    Kevin W. Bowyer.
    IEEE Press (second edition), 2001.

  • Resources For Teaching Ethics and Computing,
    Kevin W. Bowyer,
    Journal of Information Systems Education 11 (3-4), 91-92, Summer-Fall 2000.

  • Pornography On the Dean's PC: An Ethics and Computing Case Study,
    Kevin W. Bowyer,
    Journal of Information Systems Education 11 (3-4), 121-126, Summer-Fall 2000.
    JISE link.
