Identity technology (ID-tech) is the complex of devices and techniques
by which the identity of individuals is established and/or verified. It
largely consists of biometric systems, that is, automated technical
systems that measure physical human characteristics, some of them
dynamically and in real time. The biometric device matches the input
sample against a stored template in order to include or exclude an
individual from some action or activity. It is used either for verifying
that you are who you claim to be (in conjunction with a smart card,
username or ID number) or for identifying who you are. The data so
collected could be used for purposes other than those initially intended.
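The matching step just described can be sketched in outline. This is a toy illustration, not any vendor's actual method: the feature vectors, the cosine-similarity measure and the 0.9 threshold are all assumptions made for the example.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in the range [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(sample, template, threshold=0.9):
    """1:1 verification: does the live sample match the claimed identity?"""
    return cosine_similarity(sample, template) >= threshold

def identify(sample, database, threshold=0.9):
    """1:N identification: who, if anyone, does the sample belong to?

    Returns the best-scoring enrolled identity above the threshold,
    or None if nobody matches well enough."""
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(sample, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

The same threshold drives both modes; the essay's later points about fuzziness and misidentification are, in effect, points about where that threshold is set.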

Fingerprint biometrics were first used at
the 2004 Olympic Summer Games in Athens. In the USA, Australia, the UK,
the EU and other countries, biometrics are being introduced into passport
and visa control. For example, citizens of Brazil have their signature,
photo, and 10 rolled fingerprints collected when they apply for a
passport. The uses are very wide-ranging, e.g. in immigration, customs,
ATMs, retail, schools, policing, and intelligence.

While ID-tech has
many uses and conveniences, it poses risks to privacy, and most
significantly it is a technology that could lend itself to government
tracking and profiling of individuals on a wider than acceptable scale.
In a nutshell, the convergence and synchronising of ID-tech
capabilities creates the potential for a ‘Panopticon State’:
one with the policing powers to profile any citizen almost
continuously and simultaneously in several dimensions of life, anywhere
on the globe.

Both physiological and behavioural traits can be
measured and recorded by biometrics systems. The former include
fingerprinting, face identity, facial thermogram, hand and footprints,
iris, retina, ear canal, DNA, and even personal odour and scent. The
latter include computer keystroke dynamics, signature and writing,
speech, voice (speaker), and gait. We should also note the potential of
RFID (radio frequency identification) implants and body scans.

The benefits of biometric systems

Biometric systems have benefits in the prevention and reduction of crime
generally, especially fraud, impersonation, and terrorism. They may
also help to solve crime, including ‘cold cases’, and stop the evasion
of arrest. It is often claimed, and may be true in many instances, that
such systems make for an efficient use of resources (creating new
demands, however). At the 2001 Super Bowl, Florida police used
the facial recognition software FaceIt to search the crowd for
criminals, and identified 19 people with outstanding arrest warrants. In
the case of the disappearance of Madeleine McCann (2007), the UK police
asked visitors to the resort in Portugal in the two weeks prior to the
child’s disappearance to provide any photographs of passers-by for use in a
biometric facial recognition system. Since 2001 a retinal system has
helped apprehend thousands of persons re-entering the wealthy UAE with
fraudulent travel documents.

How reliable are they?

There are many issues of technical reliability, and these raise worries
about misidentification. A biometric identification system is expected
to be universally applicable, whereas some individuals may not qualify,
e.g. because of missing limbs, burns, loss of an organ, injury-related
changes to gait, or cataracts. It must be capable of unique
identification, whereas there is always some (very small) margin of
fuzziness, especially with family relatives and twins. It should be
resistant to the ageing of the individual, but faces etc. change with
age, illness, injury and cosmetic surgery. There is also the problem of
‘data collection’ being affected by overload and noise, e.g. in a crowd.
Efficiency and effectiveness may be in doubt because there will be
thresholds of definition (e.g. a face at a distance), too slow a
response from the device, poor light, and software deficiencies.
Biometric data will ‘ideally’ be correlatable with other individual
data, whereas these may not be available or compatible. There are
also issues of standardisation and interoperability.
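The ‘margin of fuzziness’ and threshold problems above are usually quantified as a trade-off between false rejects (genuine users wrongly turned away) and false accepts (impostors wrongly let through). A minimal sketch, using made-up match scores for illustration:

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """At a given decision threshold, compute:
    FRR: fraction of genuine attempts wrongly rejected,
    FAR: fraction of impostor attempts wrongly accepted."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# Illustrative match scores: genuine users tend to score high, impostors
# low, but the distributions overlap -- the source of misidentification.
genuine = [0.95, 0.91, 0.88, 0.97, 0.72]
impostor = [0.30, 0.45, 0.85, 0.20, 0.10]

# Raising the threshold trades false accepts for false rejects.
for t in (0.5, 0.8, 0.9):
    frr, far = error_rates(genuine, impostor, t)
    print(f"threshold={t}: FRR={frr:.0%}, FAR={far:.0%}")
```

No threshold eliminates both error types at once, which is why poor light, distance and ageing translate directly into misidentification risk.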

With all
these difficulties, and the inevitable dose of human incompetence, one
may give a sigh of relief for the future of individual freedom and
privacy. However, great efforts and resources are being put into
resolving them. Ultimately, developers of such technologies know that
their techniques must be socially acceptable, whereas the public may
reject them. We have recently seen human rights concerns about airport
body scans (admittedly, a detection technology rather than an ID one).

The Hydra Effect

In any case, history has shown that technologies will be implemented,
sometimes widely, even when there are known difficulties (as well as
difficulties that emerge in practice). In this case a fundamental issue
is that the identity of the ‘target’ person may be compromised. There
is the impersonation issue: the system depends on the individual who is
the subject of the test being correctly identified at original
enrolment. If a biometric profile is stored for person ‘A’ then that
data becomes definitive even if this person is not in fact A. This is
fundamental, and has little to do with how sophisticated the technology
is, and yet there is a tendency in some quarters to assume that the
technology cannot be wrong. But if the ‘input’ is wrong, then the
technology will simply process it efficiently.

There are at least
two more fundamental problems. Firstly, there is the possibility of
someone using as a test input what is in fact a hacked copy of the
stored template. (Some suggest a way around this is to technically
exclude any absolutely ‘perfect match’.) Secondly, an ID device does
not ‘know’ what it is looking at. For example, face
recognition systems can be fooled by a high-quality photograph of a face
instead of a real face, and so are unsuitable for unsupervised applications
such as door access. There is a similar problem with fingerprints and
iris patterns.
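The suggested safeguard of excluding an absolutely ‘perfect match’ can be sketched as follows; the bit-agreement similarity measure, threshold and tolerance here are illustrative assumptions. The idea is that a live capture always carries sensor noise, so a score identical to the stored template more likely indicates a replayed copy of the template itself.

```python
def similarity(a, b):
    """Fraction of template bits that agree (toy similarity measure)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def check_match(sample, template, accept_threshold=0.9, replay_epsilon=1e-9):
    """Return 'accept', 'reject', or 'suspected replay'.

    A score of (effectively) 1.0 -- bit-for-bit identical to the stored
    template -- suggests a hacked copy of the template, not a live
    capture, so it is flagged rather than accepted."""
    score = similarity(sample, template)
    if score >= 1.0 - replay_epsilon:
        return "suspected replay"
    if score >= accept_threshold:
        return "accept"
    return "reject"
```

Note the awkwardness the essay implies: the defence only works because genuine captures are imperfect, so it sits in tension with the drive to make sensors ever more accurate.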

There are genuine concerns about the security of storage of biometric data.
It should be obvious, but is often forgotten, that a security system is
only as trustworthy as the people operating it, from low-level
operatives to high-level authorities. Malicious verifiers may wish to
steal templates from the database (although it has been suggested this
could be discouraged with a ‘reverse engineering’ technique). Then there is
the possibility of the ‘secondary use’ of biometric data: a user who
accesses two systems with the same fingerprint may allow another person
to ‘impersonate’ him. Most of these problems, evidently, have to do
with human, not technological, weakness. Technology does not make people
trustworthy.

You may think that internal hacking is unlikely. Yet, to
give one example, in 2007 tens of millions of credit card users were
put at risk by financial-transactions company Heartland Payment Systems
(USA) when malicious software was installed inside the system.

As dependency on such systems grows, permanent identity loss is not
impossible. A system must retain the uniqueness of the trait template
unchanged (or changed only within a narrow range) over the lifetime of the
individual. This ‘life-time’ property brings a risk. If biometric data
are obtained by unauthorised users (e.g. compromised from a database),
then the owner loses control over the data and loses his identity. Lost
passwords can be changed, but if, say, someone’s face is compromised
from a database, they cannot cancel it or reissue it. A proposed
solution is the ‘cancellable biometrics’ technique which distorts the
biometric image before matching. But for every solution there is
another problem. A criminal employee could undistort the template with
knowledge of the distortion key. If we distrust the employees
sufficiently to require a distortion key, why would we trust them with
the distortion key?
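The ‘cancellable biometrics’ idea, and the key-holder weakness just raised, can be sketched with a toy keyed distortion. A keyed permutation of feature positions stands in here for the image distortion described above; real schemes are more elaborate, and everything below is an illustration under that assumption.

```python
import random

def distort(features, key):
    """Keyed, invertible distortion: permute feature positions using a
    PRNG seeded with the secret key. Only the distorted form is stored,
    and matching is done in the distorted domain."""
    rng = random.Random(key)
    order = list(range(len(features)))
    rng.shuffle(order)
    return [features[i] for i in order]

def undistort(distorted, key):
    """With knowledge of the key the original template is recoverable --
    the 'criminal employee' problem raised above."""
    rng = random.Random(key)
    order = list(range(len(distorted)))
    rng.shuffle(order)
    original = [None] * len(distorted)
    for pos, i in enumerate(order):
        original[i] = distorted[pos]
    return original
```

If the stored (distorted) template leaks, the user can be re-enrolled under a new key, ‘cancelling’ the old template; but, as the essay notes, anyone who also obtains the key can simply invert the distortion and recover the original trait.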

There is what I call a ‘Hydra Effect’ in
technology. In Greek mythology whenever the Hydra beast was decapitated
it grew two more heads. Similarly, every technical solution creates at
least one more problem, which is often trickier to solve. A technical
solution is eventually found at great cost, and then more problems
appear. There may well be diminishing returns on the resources being
put into this ceaseless round of technical innovations that ultimately
cannot overcome the fundamental issue of human weakness and failure.

Can we preserve our privacy?

We may take privacy to be the state of being free from unsanctioned
intrusion into one’s personal life. It is a value that is embodied in
human rights, national laws and diverse regulations. ID-technology
gives rise to the possibility of the misuse (real or perceived) of
personal biometric information for gainful intrusion. Known misuses
include surveillance videos of vehicle licence plates being used to
blackmail people, to stalk women and to track estranged spouses. In some
cases it has been police officers who have been guilty of these offences.

Fingerprint recognition for the
ignition of your car might seem like the latest desirable innovation in
hi-tech protection. But one may forget the human factor. In 2005
Malaysian car thieves cut off the finger of the driver of a Mercedes
S-Class car so that they could steal his car. If he had not had a
sophisticated biometric device in the ignition he would at least still
have his finger. In the USA and EU some fear that biometric information
can be ‘skimmed’ and sold to criminals to identify individuals for
ransom-kidnapping and the like. In even worse scenarios a racist or
totalitarian government (Hitler, Pol Pot, etc.) could use the data to
determine unwanted traits in humans for population control.

The Panopticon state?

A future scenario that does not receive enough serious attention is the
convergence of different ID-technologies into one (more or less)
interconnected system. Intelligence services world-wide are well on
their way. We could already be witnessing an information cascade, held
back only by lack of harmonisation, human incompetence and poor
communications. Public protest is not yet a major hindrance.

The utilitarian philosopher Jeremy Bentham conceived a plan in 1791 for a
new kind of prison, the Panopticon, the novelty of which was that any
prisoner could be seen from anywhere at any time. A variety of modern
technologies, including those based on biometrics, may be converging
towards the possibility of a Panopticon State, in which any citizen can
be tracked and a life-profile composed without their ever knowing. Body
scans, bank details, credit card trails, Google, RFID, fingerprints,
face and iris recognition, GPS, health records, mobile phone use, bus
and train cameras, spy satellites, street cameras and wire taps could,
in theory, be brought together in various configurations. Perhaps only
the political will stands in the way.

Biometric information may be shared, or different databases may be
networked; telebiometric systems, for example, join biometrics with
telecommunications. There is then the possibility of tracking
individuals: security cameras can be linked to a facial recognition
system, or to a public transport system using biometry. At the moment,
in most cases the information from different sensors generates
differently encrypted outcomes, so they cannot be compared; but this
can be overcome. The unification of different biometric outcomes by
means of data exposure or through global or regional standardisation is
not impossible. Already there are some public concerns about ‘leakage’
of fingerprint data from schools to health, insurance and other
agencies with a discriminatory effect on access to services.

Ken Macdonald QC, the UK’s Director of Public Prosecutions (2003-08), has said:
“We need to take very great care not to fall into a way of life in
which freedom’s back is broken by the relentless pressure of a security
State.” Richard Thomas, the Information Commissioner, is reported as
saying: “My anxiety is that we don’t sleepwalk into a surveillance
society.” He was thinking mainly of the UK’s National Identity Scheme. These two people are hardly radicals, and they know ‘from the inside’ what they are talking about.

One may think the main issue is National ID cards, but they have a lesser
role than the database they are linked to, i.e. the National Identity
Register. A new law specifies 50 categories of information that the
Register can hold on each citizen, including up to 10 fingerprints, a
digitised facial scan and iris scan, and current and past UK and overseas
places of residence throughout their lives, with indices to other
Government databases which would allow the records to be connected into a
personal profile. The legislation also says that any further
information can be added. The amount of data which can be recorded on
the scheme’s Register is unlimited. Still, the good news is that
fingerprints are not yet being taken, and plans to take iris scans have
been dropped, although not ruled out.

This is not the place to
go into the detail of the scheme but the Home Office forecasts that 265
government departments and as many as 48,000 accredited private sector
organisations would have access to the database, and that 163 million
identity verifications or more may take place each year. The cost of
the scheme is variously put at between 5 and 15 billion pounds over 10 years.

Naturally, the Commission for Racial Equality and
ethnic/religious minorities are expressing concerns about
discrimination. If one thinks this is far-fetched or alarmist one
should recall that in the USA not so long ago the FBI head J. Edgar
Hoover (and his vast fingerprint records) pursued not only  criminals,
but people he chose to classify as “security risks,” “subversives,”
“agitators,” “deviants,” “black nationalists,” and “peaceniks.”

Provisions for consent to biometric schemes

Public acceptance of the national ID scheme has been mixed and controversial
(but not controversial enough), with diminishing support after reports
of the loss of millions of items of public service information in
several quarters (see the NGO called “NO2ID”). Meanwhile, some UK parents have been protesting school fingerprinting since 2001. The
fingerprints are used for purposes of registration, truancy control, parental
payments, replacements of library or meal cards, and possibly for exams.
Protests sometimes take a more colourful form. The Chaos Computer Club of
hackers published a fingerprint of the German Minister of the Interior,
Wolfgang Schäuble, in its magazine Datenschleuder (March 2008). The
magazine included the fingerprint on a film for readers to give them
access to whatever the Minister had access to. If they can do it,
criminals can do it, and undemocratic governments can do it.

A particular focus for protest in the UK has been school fingerprinting
without consent. One surprising facet of this is that the Data
Protection Act does not explicitly require schools to gain consent. The
Act is, apparently, about information, not images. More research is also
needed into how the Human Rights Act and the Freedom of
Information Act relate to the storage and transmission of ‘data’ which
is perhaps not ‘information’ in the sense of text. A democratic future
depends on asking many questions that are currently not even being
conceived, let alone asked.

Professor Geoffrey Hunt teaches at
St Mary’s University College
in London. This
article by Professor Hunt was originally published on the website of
, a think-tank focusing on emerging technologies and their ethical,
social and political implications.