When Biometrics Fail: Gender, Race, and the Technology of Identity

by Shoshana Amielle Magnet

Paperback (New Edition)

$25.95 

Overview


From digital fingerprinting to iris and retina recognition, biometric identification systems are a multibillion dollar industry and an integral part of post-9/11 national security strategy. Yet these technologies often fail to work. The scientific literature on their accuracy and reliability documents widespread and frequent technical malfunction. Shoshana Amielle Magnet argues that these systems fail so often because rendering bodies in biometric code falsely assumes that people's bodies are the same and that individual bodies are stable, or unchanging, over time. By focusing on the moments when biometrics fail, Magnet shows that the technologies work differently, and fail to function more often, on women, people of color, and people with disabilities. Her assessment emphasizes the state's use of biometrics to control and classify vulnerable and marginalized populations, including prisoners, welfare recipients, immigrants, and refugees, and to track individuals beyond the nation's territorial boundaries. When Biometrics Fail is a timely, important contribution to thinking about the security state, surveillance, identity, technology, and human rights.

Product Details

ISBN-13: 9780822351351
Publisher: Duke University Press
Publication date: 11/11/2011
Edition description: New Edition
Pages: 226
Product dimensions: 6.10 (w) x 9.10 (h) x 0.60 (d) inches

About the Author

Shoshana Amielle Magnet is Assistant Professor in the Institute of Women’s Studies and the Department of Criminology at the University of Ottawa. She is a co-editor (with Kelly Gates) of The New Media of Surveillance.

Read an Excerpt

When Biometrics Fail

Gender, Race, and the Technology of Identity
By Shoshana Amielle Magnet

Duke University Press

Copyright © 2011 Duke University Press
All rights reserved.

ISBN: 978-0-8223-5135-1


Chapter One

BIOMETRIC FAILURE

Stop! Right now, think of how many passwords and personal identification number (PIN) codes you have to remember. How often do you forget them? It is very inconvenient to remember those codes. Now do you have your fingers, eyes, voice, and face with you? The answer hopefully is yes! Have you ever forgotten any of those body parts? Not very likely! What if you could use those body parts instead of passwords and PIN codes to verify who you are? Would that not be more convenient? It also seems logical that it could be a more secure way of authenticating a person. PAUL REID, BIOMETRICS FOR NETWORK SECURITY

Biometrics are celebrated as perfect identification technologies. They will secure your laptop, identify terrorist threats, and reduce crime by stabilizing the mercurial identities of criminalized individuals. As in the epigraph that begins this chapter, biometric scientists imagine these technologies as more highly evolved, efficient, and accurate versions of older techniques of identification. Why have to remember a password or PIN code when your body, a body that can now be digitized, can easily replace it? Isn't it logical that digital technologies function better than their analog counterparts? Much is made of the potential benefits biometric technologies can bring to the knowledgeable consumer in Reid's textbook, published in 2004:

With the cost of biometric technology falling, it will soon be reliable and cheap enough for everyday use. For example:

- Enter your favorite coffee shop in the morning, and your coffee order is waiting for you at the counter.

- Your daughter's boyfriend, whom you do not like, comes to the house and the door won't open to let him in.

- Your own little personal identification device recognizes someone you have not seen in years and provides you with his/her name. This could result in no more embarrassing blank looks on your face. (P. Reid 2004:231)

In these examples biometric technologies are able to supply simple solutions to domestic problems, from patriarchal protection to the limits of long-term memory. Not restricted to the high-stakes tasks of securing the border or ending crime, biometrics are able to additionally (and seamlessly) break down the quotidian into moments for market, as scientists imagine these technologies to work perfectly, identifying and verifying individual bodies without error while providing a new range of consumer services. And yet biometric technologies do not work in the straightforward manner that such discourse, whether industry representations, government accounts, or laws and policies mandating their deployment, would have us believe. Although biometrics are explicitly sold to us as able to circumvent problematic human assumptions about race, gender, class, and sexuality, this book demonstrates that it is in fact upon rigid and essentialized understandings of race and gender that these technologies rely.

In this chapter I analyze the science constructing biometric technologies and show how they rely upon outdated and erroneous assumptions about the biological nature of identity. In investigating the scientific principles upon which biometrics are based, I ask whether these new identification technologies perpetuate existing forms of inequality. I start with the way the biometric industry describes how the technologies work, then examine the assertions about the alleged potential of these technologies. In contextualizing these claims I demonstrate that biometric technologies are the latest in a long line of identification technologies claimed to be impartial and objective. Considering the ways that culture is always encoded into technology, I revisit the central question of this book: What happens when biometrics fail? I show that although biometrics were marketed on the basis of their ability to avoid human pitfalls, these very same human assumptions about the biological nature of identity are encoded right into these new identification technologies.

Defining Biometrics

Biometrics is the science of using biological information for the purposes of identification or verification. Although the term biometrics dates back to the early twentieth century and was used to refer to mathematical and statistical methods applied to data analysis in the biological sciences, the focus of this book is on digital biometrics. A biometric attribute is defined as a "physical or psychological trait that can be measured, recorded, and quantified" (P. Reid 2004:5). The process of acquiring the information about the physical or behavioral trait—whether a digital fingerprint, iris scan, or distinctive gait—and then storing that information digitally in a biometric system is called enrollment (P. Reid 2004:6; Nanavati, Thieme, and Nanavati 2002:17). A template is the digital description of a physical or psychological trait, usually containing a string of alphanumeric characters that expresses the attributes of the trait (P. Reid 2004:6). Before the biometric data are converted to a digital form, they are referred to as raw data (Nanavati, Thieme, and Nanavati 2002:17). Raw biometric data are not used to perform matches—they must first be translated into a biometric template before they can be utilized—a process that is achieved with the help of a biometric algorithm. Biometric algorithms are frequently described as recipes for turning a person's biological traits into a "digital representation in the form of a template" (P. Reid 2004:6). This recipe is usually proprietary, and it is what a biometric technology company sells, arguing that their recipe for creating a template from raw biometric data is better than another company's recipe.
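
To make this pipeline concrete, here is a minimal sketch of the enrollment flow the industry describes: raw data passed through an algorithm ("recipe") to produce a stored template. It is an illustration only, not any vendor's proprietary method; the names Template, extract_features, and enroll are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Template:
    """The "digital description" of a trait: a fixed-length feature vector."""
    user_id: str
    features: list[float]

def extract_features(raw_sample: bytes) -> list[float]:
    # Stand-in for a vendor's proprietary algorithm (the "recipe").
    # A real system would compute minutiae points or an iris code;
    # here we only reduce the raw bytes to eight normalized numbers.
    window = 32
    padded = raw_sample.ljust(window * 8, b"\0")[:window * 8]
    return [sum(padded[i:i + window]) / (window * 255)
            for i in range(0, window * 8, window)]

def enroll(user_id: str, raw_sample: bytes, database: dict) -> Template:
    # Enrollment: raw data -> template -> stored in the biometric system.
    template = Template(user_id, extract_features(raw_sample))
    database[user_id] = template
    return template
```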

Vendors represent biometric technologies as able to answer two questions. The first question refers to identification and asks, Who am I? Often described as a 1:n matching process, the presentation of a biometric template created in real time (called a live biometric) is checked against a database of stored biometric templates. Used more commonly in security and law enforcement applications, this process allows for one person to be checked against a list of persons (P. Reid 2004:14). The second question that biometric technologies are imagined to be able to answer concerns verification: Am I who I say I am? Referred to as a 1:1 matching process, verification checks the presentation of the live biometric with the person's template stored in the database to determine if they match. If the live biometric is the same as the stored biometric, there is a match and the identity is verified. Verification is held up in biometric discourse to be a much quicker process than identification, since it must check only one live biometric against one stored biometric, rather than checking a particular biometric against an entire database. Biometric technologies that rely on verification are more commonly used in physical and informational access applications, including secure building and computer network access (14).
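
Continuing the sketch above, the two questions differ only in how many stored templates the live template is compared against. The similarity score and the 0.9 threshold below are toy assumptions; real matchers use proprietary scores and tuned thresholds.

```python
def similarity(a: list[float], b: list[float]) -> float:
    # Toy closeness score; 1.0 means identical feature vectors.
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def verify(live: list[float], stored: Template, threshold: float = 0.9) -> bool:
    # 1:1 matching ("Am I who I say I am?"): a single comparison.
    return similarity(live, stored.features) >= threshold

def identify(live: list[float], database: dict, threshold: float = 0.9):
    # 1:n matching ("Who am I?"): compare the live template against
    # every stored template; return the best identity above threshold.
    best_id, best_score = None, threshold
    for user_id, template in database.items():
        score = similarity(live, template.features)
        if score >= best_score:
            best_id, best_score = user_id, score
    return best_id
```

The loop in identify is why verification is held up as the quicker process: its cost is constant, while identification grows with the size of the database.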

The industry claims that a central innovation provided by the technology is that biometric information can neither be stolen from nor forgotten by the individual, as it depends on the measurement of physical or behavioral traits. As biometric technologies measure bodies and bodily behavior (such as gait), the "provider of the trait always has them with him or her" (P. Reid 2004:3) and cannot substitute another's information. Industry representatives thus claim that biometric technologies provide convenience, security, and accountability (Nanavati, Thieme, and Nanavati 2002:4–5).

The biometric industry does acknowledge that errors occur and cites three as particularly common (P. Reid 2004; Nanavati, Thieme, and Nanavati 2002). One is the false acceptance rate, in which a person who is not you is accepted as you; it is usually given as a percentage expressing the chance of somebody else erroneously being identified as you. The false rejection rate occurs when you are not accepted as you. The third type of error is the failure to enroll, often given as a percentage expressing the possibility of someone failing to be enrolled in the system at all (P. Reid 2004:6). In Biometrics for Network Security Paul Reid asserts that current problems with biometric technologies "will be solved with time, money and technological advances" (121).
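
Measured over a test population, these three error rates reduce to simple ratios, sketched below with hypothetical helper names and toy inputs. The sketch also exposes a trade-off a single quoted figure hides: raising the decision threshold lowers false acceptances but raises false rejections.

```python
def error_rates(genuine_scores: list[float],
                impostor_scores: list[float],
                threshold: float) -> tuple[float, float]:
    # False acceptance rate: impostor attempts wrongly accepted as a match.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # False rejection rate: genuine attempts wrongly rejected.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

def failure_to_enroll_rate(attempted: int, enrolled: int) -> float:
    # Share of people the system could not capture a usable template from.
    return 1 - enrolled / attempted

# A stricter threshold trades false acceptances for false rejections.
genuine = [0.95, 0.91, 0.88, 0.97]
impostor = [0.40, 0.62, 0.93, 0.55]
print(error_rates(genuine, impostor, threshold=0.90))  # (0.25, 0.25)
print(error_rates(genuine, impostor, threshold=0.94))  # (0.0, 0.5)
```

Because the two rates move in opposite directions as the threshold changes, a system tuned for security (a low false acceptance rate) necessarily rejects more legitimate users.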

Biometric technologies are broken down into two categories: active biometrics and passive biometrics. Active biometric technologies depend on the user actively submitting information to a biometric scanner. For example, finger scanning relies on the person actively placing a finger on the plate in order to have the print captured. Passive biometric technologies allow for the covert collection of biometric data, as in the case of smart surveillance cameras able to secretly capture a person's facial biometrics. Biometric technologies currently in use include fingerprint imaging, hand geometry, iris scanning, voice biometrics, and facial recognition technology. Other biometric technologies in development for commercial use include gait and vein recognition, signature and keystroke technologies, and the use of DNA for the purposes of identification or verification. The industry has also endorsed a number of futuristic technologies, from olfactory recognition to recognition based on thought patterns (brain mapping).

Biometric Objectivity

As conversations on whether the state is racially profiling particular communities shift to whether racial profiling is an important and necessary state practice (Bahdi 2003), biometric discourse admits that racial profiling is occurring, but suggests that it can be overcome using new technologies. For example, media and scientific reports regularly depict biometrics as able to circumvent discrimination. One industry representative, Frances Zelazny, director of corporate communications for Visionics (a leading U.S. manufacturer of biometric systems), asserts that the corporation's newly patented face-scanning technology "is neutral to race and color, as it is based on facial features recognized by the software" (Olsen 2002). Zelazny suggested that biometric technologies' impartiality helps to protect the privacy of individual citizens: "That the system is blind to race is a privacy enhancing measure" (Olsen 2002).

Similarly Bill Todd, a Tampa police detective, praised the closed circuit television (CCTV) system installed in Ybor City, Florida, specifically because, in his view, it is unable to discriminate on the basis of racial, ethnic, or gendered identity. Ybor City's biometric system uses facial recognition technology to identify suspects by using smart CCTV cameras installed in the city's Latin Quarter. Todd asserted that one of the primary benefits of installing the thirty-five cameras was that, unlike the police force itself, smart CCTV cameras are "race and gender neutral" (Lewine 2001). The allegedly arbitrary choice of this particular test site for facial recognition technology is problematized by Kelly Gates (2004), who highlights the tensions associated with choosing this neighborhood to test out a new biometric profiling technology given its raced and classed identity.

Claims of biometric neutrality are codified in book-length technical briefs (Pugliese 2005; Murray 2009). In their guide to the development and use of biometrics for businesses, John Woodward, Nicholas Orlans, and Peter Higgins (2003) argue that biometric technologies are beneficial because of their lack of bias: "The technological impartiality of facial recognition offers a significant benefit for society. While humans are adept at recognizing facial features, we also have prejudices and pre-conceptions." Woodward and his collaborators offer contemporary controversies concerning "racial profiling [as] a leading example." Contrasting human recognition with biometric recognition, they argue that facial recognition technology is incapable of profiling based on identity because "facial recognition systems do not focus on a person's skin color, hairstyle, or manner of dress, and they do not rely on racial stereotypes. On the contrary, a typical system uses objectively measurable facial features, such as the distances and angles between geometric points on the face, to recognize a specific individual. With biometrics, human recognition can become relatively more 'human-free,' therefore free from many human flaws" (254).

Biometrics companies sell their technologies on the strength of these claims. For example, AcSys, a Canadian company dedicated to developing biometric facial recognition technology, promotes their product as "completely race independent—eliminating risk of racial profiling" (AcSys Biometrics Corp. 2007). Even if companies acknowledge that one type of biometrics, such as finger imaging, is not "race neutral," they suggest that a different biometric technology is the solution to this problem. For example, Doug Carlisle is a board member at A4Vision, a biometric company dedicated to developing 3D facial-imaging technology. He asserts that the bias associated with biometric fingerprinting can be fixed by using facial recognition technology: "Fingerprint technology ... is difficult to implement due to physiological differences across varying ethnic groups as well as some cultural prejudices that resist fingerprinting. Using the shape of one's face and head is less invasive, more accurate and the most promising going forward for identification purposes" (Sheahan 2004).

Assumptions concerning the ability of biometrics to work with mechanical objectivity or within frameworks of "knowledge engineering" (Lynch and Rodgers n.d.), in which scanners eliminate subjective human judgment and discrimination, have made their way from media and scientific reporting into commonsense assertions about the neutrality of biometric technologies. Thus in an online discussion on the use of iris scanners at the U.S.-Canada border, one discussant claimed he would prefer "race-neutral" biometric technologies to racist customs border officials: "If I was a member of one of the oft-profiled minorities, I'd sign up for sure. Upside—you can walk right past the bonehead looking for the bomb under your shirt just because of your tan and beard.... In short, I'd rather leave it up to a device that can distinguish my iris from a terrorist's, than some bigoted lout who can't distinguish my skin, clothing or accent from same" ("Airport Starts Using Iris Screener" 2005). Here we see what Sherene Razack (2008) terms "race thinking," that is, that the state can justify the surveillance and suspension of the rights of othered communities in the interests of national security. The belief is that new technologies will circumvent forms of "race thinking" and racial profiling by replacing the subjective human gaze with the objective gaze of the state.

Institutions investing in biometric technologies use these assumptions about the technical neutrality of these new identification technologies to justify their adoption. These claims are interesting in a number of ways, not least in that they reveal tacit assumptions around race and gender discrimination by those who might not ordinarily acknowledge prejudice in the systems of which they are a part. It is relatively rare to hear a police officer argue for new technologies because his own task force is prone to racist and sexist assumptions, as did Bill Todd of Ybor City (Lewine 2001). Institutional players adopting biometrics represent them as the answer to institutional forms of discrimination and inequality such as racial profiling. And yet biometrics have failed, calling into question industry and scientific assertions about the objectivity of these new identification technologies.

Biometrics Unraveling

At the unveiling of a new iris scanner at Edmonton International Airport, Deputy Prime Minister Anne McLellan moonlighted as public relations manager for the biometric industry. McLellan demonstrated the operation of a new scanner designed to showcase the accuracy and efficiency of these new identification technologies and to promote CANPASS, a program that replaces border personnel with biometric scanners. As we will see in greater detail in chapter 4, CANPASS is a NEXUS-Air affiliated border-crossing program designed to speed "pre-approved, low-risk air travelers" across the border. Meant to showcase the efficiency of replacing subjective humans with objective machines, the demonstration did not go as planned: "McLellan stared into the scanning machine ... but twice a computerized voice declared: 'Iris scan unsuccessful'" ("Canada's Edmonton Airport Implements Iris-Recognition Screener at Immigration" 2011).

(Continues...)



Excerpted from When Biometrics Fail by Shoshana Amielle Magnet. Copyright © 2011 by Duke University Press. Excerpted by permission of Duke University Press. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Acknowledgments

Introduction: Imagining Biometric Security

1 Biometric Failure

2 I-Tech and the Beginnings of Biometrics

3 Criminalizing Poverty: Adding Biometrics to Welfare

4 Biometrics at the Border

5 Representing Biometrics

Conclusion: Biometric Failure and Beyond

Appendix

Notes

Bibliography

Index
