Biometrics
Biometrics is the automated recognition and identification of human beings. Biometric technologies include fingerprint, face, iris, gait and palm print recognition. In the future we will probably see more widespread use of DNA, as well as behavioural and multi-biometric applications (Jain and Flynn 2008). Biometric systems can be used in a wide range of areas: in the health service, information systems, the transport sector and the workplace; for logging on to PCs, controlling access to secure areas and authenticating transactions; and for regulating the global flow of migrants. Many of the most controversial questions linked to biometrics relate to how the technology makes the human body digitally readable (van der Ploeg 1999).
- Privacy
- Exclusion of vulnerable groups
- Trust
- Technical complexity and uncertainty
- An example
- Summary: challenges for research ethics
Digitisation incorporates the body into the information network: individuality and subjectivity become digital objects that can be monitored, exchanged and used by authorities and commercial actors. Problems may arise when the information is used for purposes outside the individual's sphere of knowledge and influence. This is particularly serious in cases in which the information has a "retroactive impact", i.e. when it has direct consequences for the person in question. There are also serious concerns related to ongoing research, both in Norway and internationally, aimed at developing methods and algorithms that can recognise and predict suspicious behaviour and criminal intentions.
Describing biometrics as a "security technology" unavoidably brings a range of possible impacts to mind, depending on the context in focus. In essence the technology is intended to improve the processes normally known as authentication. Biometrics is frequently said to replace authentication based on "something one has" (an ID card, a passport, etc.) or "something one knows" (a password) with "something one is" (the body). The use of bodily characteristics ties identity more closely to a particular status: member of the health service (health card, national ID card), security-cleared user of a system or building, or citizen of a country (passport, national ID card). In this respect biometric methods can simplify a number of tasks, many of which are associated with the state's ability to identify individuals in an increasingly information-intensive and changing society; commercial applications are also spreading rapidly.
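The contrast between the two models can be made concrete with a minimal sketch. This is an illustration only, not an actual biometric matcher: the feature vectors and the similarity measure below are invented, and real systems use far more sophisticated matching.

```python
import hashlib
import hmac

# "Something one knows": a password check is an exact, all-or-nothing
# comparison against a stored hash.
def check_password(stored_hash: str, attempt: str) -> bool:
    attempt_hash = hashlib.sha256(attempt.encode()).hexdigest()
    return hmac.compare_digest(stored_hash, attempt_hash)

# "Something one is": a biometric check compares a fresh sample against an
# enrolled template and accepts it if a similarity score clears a threshold.
# The feature vectors and the similarity measure are invented for illustration.
def check_biometric(template: list[float], sample: list[float],
                    threshold: float = 0.9) -> bool:
    diff = sum(abs(t - s) for t, s in zip(template, sample)) / len(template)
    return (1.0 - diff) >= threshold

enrolled = [0.21, 0.54, 0.77, 0.32]  # template stored at registration
print(check_biometric(enrolled, [0.20, 0.55, 0.75, 0.33]))  # True: close enough
print(check_biometric(enrolled, [0.90, 0.10, 0.20, 0.80]))  # False: too different
```

The crucial difference is that the biometric check is never exact: acceptance is a statistical judgement against a threshold, a point returned to below under technical complexity and uncertainty.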
The concept of "security" becomes more challenging when we acknowledge that the "dual use" of biometrics gives rise to a set of problematic issues. "Dual use" means that a particular technology can be used for peaceful (civil) as well as destructive (terrorist), military or security-related purposes (http://en.wikipedia.org/wiki/Dual_use). Authentication entails recognising an individual on the basis of a set of data previously used to enrol that individual in the biometric system. But information systems can also be used to identify persons through searches across several databases (Kent and Millet 2003). Identification thus becomes more akin to surveillance. The coupling of databases for different purposes, such as visas and residence permits, asylum and immigration, criminal registers and national population registers, is a stated goal both nationally and internationally (European Commission 2005). Such measures are often justified by referring to the need to combat terrorism and international crime. The result is that the police, the security services and the security industry have encroached upon areas to which they have not traditionally belonged. In this connection we can highlight another characteristic of the technology, excellently portrayed in the film Gattaca: the increasing ability to identify individuals in a range of contexts. Individuals leave biometric traces such as fingerprints, photos and DNA in a variety of situations and places. Our movements and actions become more and more visible as the technology improves and systems are made interoperable.
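The distinction between one-to-one authentication and one-to-many identification can be sketched as follows. The database names, person identifiers, templates and similarity measure are all invented for the purpose of illustration:

```python
# Toy similarity between two feature vectors (invented measure).
def similarity(a: list[float], b: list[float]) -> float:
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Verification (1:1): compare a fresh sample against ONE enrolled template.
def verify(enrolled: list[float], sample: list[float], threshold: float) -> bool:
    return similarity(enrolled, sample) >= threshold

# Identification (1:N): search EVERY record in EVERY linked database.
# The same sample can be run against visa, asylum, criminal and population
# registers alike -- which is why identification shades into surveillance.
def identify(sample: list[float],
             databases: dict[str, dict[str, list[float]]],
             threshold: float) -> list[tuple[str, str]]:
    return [(db, person)
            for db, records in databases.items()
            for person, template in records.items()
            if verify(template, sample, threshold)]

databases = {
    "visa_register":     {"applicant_17": [0.2, 0.5, 0.8]},
    "criminal_register": {"suspect_03":   [0.9, 0.1, 0.3]},
}
print(identify([0.21, 0.49, 0.81], databases, threshold=0.95))
# [('visa_register', 'applicant_17')]
```

Note that `identify` requires no claim of identity from the person concerned; once the databases are coupled, a single trace suffices to search them all.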
To date there have been few, if any, attempts to develop an overarching framework for a thorough understanding of the ethical and social implications of biometrics. The following account therefore simply lists the problem areas, chosen mainly because they have received attention in the public debate or in the academic literature. This selection is inevitably coloured by the cultural context. In Western countries the technology has been inscribed in a conceptual world characterised by the struggle against terrorism and the security policy situation following 11 September 2001. Different considerations therefore predominate than, for example, in India, where the world's largest biometric system was introduced. In the Indian context biometrics is connected to the conducting of a national census, and the tone of the debate is more positive than in the West: biometrics can help to give people rights, for example illiterate people and the large numbers of unregistered individuals. Citizens must be made visible so that the state can grant them their rights and distribute benefits. In contrast, the debate in Western countries focuses more on biometrics as a means of divesting citizens of their rights, for example the right to privacy and free movement.
Privacy
Most attempts to regulate biometric systems relate to privacy (Woodward 2008). In the European context such attempts primarily relate to the EU's Data Protection Directive. In addition there is a variety of legislation on data retention which will not be discussed in this article. The Data Protection Directive dates from 1995 and is currently being updated. A weakness of the old regulations was that they only applied to the processing of personal data in commercial contexts, which excluded data processing in security-related and criminal cases. Even though the directive is outdated, it is probably one of the best globally, and it appears to embody the principles necessary for a sound regulation of biometric data. Essentially the directive states that personal data (biometric or other) shall not be processed unless the data subject has been explicitly informed about such processing (Art. 7), the processing has a clearly defined purpose (Art. 6b), and it is reasonably "proportionate" to this purpose. Proportionality means that the purpose of collecting the data is sufficiently important to justify the retention and use of material that identifies persons (Art. 6).
Even though the regulations are basically sound and appear to provide clear guidelines, practical problems may arise. Many of these are of a technological and political nature, but they have a strong impact on possible legal and ethical regulation. For example, Article 7a of the directive states that personal data may only be processed if the data subject has unambiguously given his or her consent. The implementation of the new European Visa Information System (VIS), which introduces biometric data into visa procedures for all member countries, commenced in 2010. The Community Code on Visas (Regulation (EC) No 810/2009) sets out the rules for the collection of data to be stored in a central database. The applicant must consent to the use of the collected data by relevant authorities: visa, immigration and asylum authorities and, under certain circumstances, so-called "designated authorities" such as the police, the security services and Europol. An applicant who does not provide biometric data will not be given a visa. The alternative is thus so serious that it is questionable whether the applicant is given a real choice. Moreover, the powers granted are broad and unspecified: who is a "designated authority", and under what circumstances are they given access? Will data be exchanged with the US authorities or those of other countries?
This illustrates a general problem regarding privacy: it is a contextually conditioned and "subjective" value (Solove 2008). It can be wholly or partly set aside by balancing it against other considerations (such as "security"). In large and highly complex socio-technical systems it takes time for society to grasp the social, ethical and legal implications. Some therefore claim that the strong emphasis on "security" after 11 September 2001, together with the swift introduction of biometrics in passports, travel documents and visas, has already swept aside most privacy considerations (Bunyan 2009). Often it is unclear, not only to citizens but also to activists, researchers and lawyers, just which databases are or may be linked together. Data networks and computers appear to have an inherent tendency to interconnect, and such tendencies are further encouraged by policy concepts such as "interoperability", the "principle of availability" and increased collaboration in the prevention of crime and terrorism (Lodge 2006).
On top of this comes the development of new biometric technologies, since the introduction of new forms of surveillance can undermine the very concept of privacy. This applies, for example, to improved algorithms for face recognition in large crowds and to software intended to identify suspicious behaviour or "criminal intentions" in public places. When an individual's biometric data (and behaviour) are read at a distance, it is doubtful whether data protection legislation provides adequate guidelines and principles.
Despite such reservations, data protection clearly represents the strongest avenue for regulating biometrics. The EU's Data Protection Directive is currently (2011) being updated to address the challenges presented by new technologies, and privacy has been strengthened under the Lisbon Treaty in relation to crime prevention and the security services. Nonetheless, there is a great need for international harmonisation of the regulations, and for the development of a broader understanding of data protection that incorporates these new challenges.
Exclusion of vulnerable groups
As previously mentioned, both biometrics and privacy regulation fundamentally target individuals. This focus, however, may leave the biggest dangers arising from the new technology invisible. Large socio-technical systems do not primarily operate at the individual level; they target society as a whole and entire population groups. To start with the most tangible problem: not everyone is equally easy to register. As in most other technologies, standardisation is important in biometric systems, but there will always be people who do not fit the standard format. People with disabilities may have difficulty reaching or using a biometric scanner. Some people lack certain characteristics or body parts (for example, craftsmen whose fingerprints have been abraded by their work) (Goldstein et al. 2008). Biometric systems also seem better able to read the bodily features of white people (Coventry 2005). Still, these are obvious problems with relatively simple technical solutions. The challenges are greater when vaguely articulated or hidden prejudices against particular groups are built into systems for classifying persons and become automated. Here biometrics joins a wide range of practices and technologies directed at intervention in, and the mapping and monitoring of, populations, groups and individuals.
There is little doubt that the inclusion of individuals in public registers has played a key role in the growth of today's welfare society. National insurance, taxation and epidemiological registers are based on the standardisation and quantification of populations as a basis for collecting information, planning, intervening and allocating resources. At the same time, history tells us that such systems, particularly under totalitarian regimes or in war or states of emergency, can be used for evil purposes. IBM supplied punch card systems to Nazi Germany as a device enabling a more efficient mapping of Jews; categorisation systems developed by Belgium as a colonial power were used during the massacre of Tutsis in Rwanda in 1994 (Lyon 2009). In Norway, the so-called "J-stamp" in Jewish identity papers was used to identify and deport Jews to concentration camps during the Second World War. These are extreme examples, but they also illustrate more "unobtrusive" dangers linked to the categorisation, stigmatisation and exclusion of entire populations. Many large IT systems in the health, administrative and security sectors function primarily at group level: as a rule individuals are recognised as members of one or another category, whether as a "rights holder" or as a "threat to public order". There is little doubt that in the period following 11 September Arabs and Muslims have been subject to extensive "profiling", such as in the German Rasterfahndung (grid search) programme (LIBE Committee 2009), and that the use of so-called "watch lists" has become more common. The inclusion of such functions in biometric systems, for example at airports or train stations, increases the risk of systematic discrimination against specific groups. Those who have ended up on the wrong list can be discriminated against for years after their names have been deleted from the original register; when information is dispersed across numerous systems and databases it is difficult, if not impossible, to know whether all traces have been deleted. We can also mention a new addition to the biometric spectrum: so-called "soft biometrics", which makes use of non-unique characteristics such as height, ethnicity and gender in order to identify individuals more accurately, as sketched below. While technology developers and the authorities view this as a field with considerable growth potential (Ross et al. 2008), it represents an ethical and political landscape that can prove to conflict directly with fundamental human rights (van der Ploeg 2010).
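The mechanism behind soft biometrics is easy to illustrate: each attribute is non-unique, but intersecting several of them rapidly shrinks the set of people a record could refer to. The toy population and attribute values below are invented; the point is the filtering logic, which also shows why profiling on group characteristics carries such discriminatory potential.

```python
# Invented toy population with non-unique "soft" attributes.
population = [
    {"id": 1, "gender": "m", "height_band": "tall",  "region": "north"},
    {"id": 2, "gender": "f", "height_band": "tall",  "region": "north"},
    {"id": 3, "gender": "m", "height_band": "short", "region": "south"},
    {"id": 4, "gender": "m", "height_band": "tall",  "region": "south"},
    {"id": 5, "gender": "m", "height_band": "tall",  "region": "north"},
    {"id": 6, "gender": "f", "height_band": "short", "region": "north"},
]

# No single attribute identifies anyone; combined, they narrow the field fast.
candidates = population
for attribute, value in [("gender", "m"), ("height_band", "tall"), ("region", "north")]:
    candidates = [p for p in candidates if p[attribute] == value]
    print(f"after {attribute}={value}: {len(candidates)} candidate(s)")
# after gender=m: 4 candidate(s)
# after height_band=tall: 3 candidate(s)
# after region=north: 2 candidate(s)
```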
Trust
Trust is essential in every society. If there are no expectations that the government, fellow citizens and socio-technical systems will function fairly reliably and predictably, key social functions decay (Grimen 2009). Large biometric systems, such as those used to regulate freedom of movement in the Schengen area, displace and alter relationships of trust. Firstly, these systems can affect the relationship between citizen and state. Activist organisations such as Statewatch have long claimed that biometric systems and other security technologies are helping to shift the balance of power in Western societies towards more authoritarian structures. Biometrics, extensive use of surveillance cameras, passenger lists, data retention and data mining (i.e. systematic searches for patterns in large data sets or databases) enhance the state's ability to "see" its citizens – where they are and what they are doing. Many such technologies have been introduced in less transparent ways, often via international organisations outside parliamentary control. These tendencies seem to signal an attitude in which citizens are asked to trust that the state acts in their best interests, while mechanisms for popular, legal and parliamentary control are not implemented accordingly. On the other hand, the focus on privacy and the "digital tsunami" we are experiencing seems to be giving rise to increased concern, not least in various courts of law. In general, European countries appear to be adopting a more responsible attitude to data protection than the USA. Examples include the disagreements over passenger lists and the Data Retention Directive, and the reactions from a number of countries to Google's collection of data from unsecured wireless networks.
Relationships of trust are being changed in other ways as well. Decision-making powers are being transferred to authorities in other countries by means of interoperable biometric information systems, for example through the option of issuing a European arrest warrant. Decisions made within one country's jurisdiction will have to be enforced by the authorities and jurisdictions of other countries. This gives rise to considerable challenges linked to the coordination of different practices, since discretionary assessment, operative culture and codes for the exercise of power vary from country to country. Similar problems arise when visas are issued or asylum applications rejected; in the new systems such decisions will apply in all the EU member countries.
Technical complexity and uncertainty
This leads us to a final, crucial question: with the rapid spread of biometric systems, can we trust that the technology functions as intended? Large biometric systems are now being introduced in a number of areas, often for entire nation states or for all the Schengen countries (Geyer 2008). Yet there are in fact no tests available showing that the systems will function on this grand scale. The only way to find out appears to be to introduce the systems on a large scale and then evaluate the consequences. We know that biometric systems undoubtedly make errors – recognition is based on statistical associations, not absolute identity. Upscaling and use in surroundings that are difficult to control (public spaces) increase the number of possible sources of error dramatically. For example, a biometric system tested at Manchester Airport allowed persons with only a 30% likeness to their own passport photo to pass through (Gardham 2009).
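The statistical nature of recognition, and the threshold trade-off behind results like the Manchester test, can be illustrated with a toy simulation. The score distributions below are invented; real matchers behave differently, but the trade-off between false rejections and false acceptances is inherent to any threshold-based system.

```python
import random

random.seed(0)
# Invented score distributions: genuine comparisons (same person) score high
# on average, impostor comparisons (different people) lower -- but they overlap.
genuine  = [random.gauss(0.80, 0.10) for _ in range(100_000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(100_000)]

# Every threshold trades the false reject rate (FRR) against the false accept
# rate (FAR); a lenient threshold such as 0.30 waves nearly everyone through.
for threshold in (0.30, 0.50, 0.70):
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    print(f"threshold {threshold:.2f}: FRR {frr:6.2%}, FAR {far:6.2%}")
```

Even a very small false accept rate translates into a large absolute number of false matches once a system is scaled up to millions of travellers, which is precisely the scale at which these systems are now being deployed.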
An example
In 2009 the Dutch parliament introduced biometric passports by law (Hudig 2009). The final act, however, went further than Council Regulation (EC) No 2252/2004, which stipulates that member countries must introduce passports with an RFID chip storing fingerprints and a facial image (Norway is included through the Schengen agreement). The Dutch law introduced a centralised database that was to be linked to criminal registers and used in the fight against terrorism. It has proved very difficult to opt out of the biometric database: Aaron Boudewijn, a 24-year-old law student, took legal action against the Dutch government after he was denied a passport because he had refused to be registered in the database. A number of other important rights, such as the right to vote in elections, were also tied to the willingness to be registered. Biometric passports have become a controversial issue and a source of conflict between privacy organisations, the courts and the Dutch authorities. To the best of our knowledge there has been no corresponding debate in Norway. A search of the police's web pages on passports yields disappointing results: security and efficiency are given prominence, while privacy, information about the biometric systems and consent are not mentioned. Why is this a problem? While biometric data are being introduced in a number of documents, the architecture of systems such as VIS and the Schengen Information System (SIS II) is still being prepared. Biometric scanners and systems will gradually be linked to travel documents and identification papers, with clear implications for privacy, justice and freedom of movement. This apparently continues a tendency known from other areas where large databases, such as biobanks and health registers, have been implemented: a movement from informed consent with an opt-out possibility (biobanks), via assumed consent with an opt-out possibility (health registers), to no consent and no opt-out at all (biometric passports). This development demands a broad public debate.
Summary: challenges for research ethics
As mentioned in the introduction, systematic overviews of the social and ethical implications of biometrics are still lacking. One problem in this respect is that biometrics does not have a fixed area of application; it is rather a generic technology that facilitates a range of technological applications and practices. The list of problematic issues above is not exhaustive but based on a selection (made using both ethical and sociological methods in the EU-funded project Technolife: see www.technolife.no). In summary, we can distinguish between "restricted" and "broad" ethical problems: the restricted problems have a more or less technically definable solution (particularly through law and ethics, but also through new technology implemented to strengthen privacy), while the broad problems are difficult to define from a scientific, ethical and political standpoint. I would emphasise that this classification does not represent a strong normative ranking on the part of the author: both the restricted and the broad problematic issues are important and should as far as possible complement each other.
"Restricted problems": important questions linked to understanding, defining and further developing ethical and legal principles (consent, personal privacy, fundamental rights) in new (digital and social) contexts. There is considerable need of enhanced understanding of the concept of privacy, not least when encountering the police and security services' application of the new technology. As regards informed consent there seems to be an even greater need: it is doubtful whether this concept can be appropriately defined in the encounter with large information systems. Often both privacy and consent are set aside with reference to collective goods such as "security" and "societal value". New forms of collective consent are needed that can balance such arguments and principles.
"Broad problems": based on both sociological and ethical considerations it is vital to acquire a greater understanding of how biometric information systems impact on social relationships. The potential of excluding weak and vulnerable groups is of particular importance. Gaining a wider comprehension of their impact on fundamental power and trust relationships and how these may be shifted via biometric systems are also important aspects. Based on scientific and epistemological perspectives, mapping new technologies is vital in order to chart and understand human identity, but also (to a growing extent) action and intention. There is a great danger of overestimating scientific and technical security at the expense of more realistic models for human and technological capacity and "agency", particularly when there is strong political pressure.
Finally, I would contend that all the problematic issues mentioned above point to a more overarching political and democratic level: how can we increase attention to, and discussion of, information systems with clear dual-use problems? Digital technologies in general have great potential to improve a range of social, technological and political conditions. But they also have a tendency to glide unseen into everyday relationships, matters and actions, without our noticing how fundamental institutions and principles are being challenged and changed.
This article has been translated from Norwegian by Jennifer Follestad, Akasie språktjenester AS.