Have a Biometric ID System Coming Your Way? Key Questions to Ask and the Arguments to Make

Long Read
Photo By: Cpl. Joel Abshier

‘Biometrics’ describes the physiological and behavioural characteristics of individuals. This could be fingerprints, voice, face, retina and iris patterns, hand geometry, gait or DNA profiles. Because biometric data is particularly sensitive and revealing of an individual’s characteristics and identity, it can be applied in a massive number of ways – and has the potential to be gravely abused.

Identification systems across the world increasingly rely on biometric data. Not only do these systems pose a grave threat to people’s privacy and freedom, they also present severe risks to the security of individuals and societies because of their vulnerability to being breached.

Yet, despite these risks, international institutions and governments are pushing biometric ID systems on countries around the world; these systems are now being rolled out as a solution to a whole range of problems, from delivering aid to refugees, to registering people to vote, to providing everyday services to billions of people.

But not all identification systems are the same, and they pose different threats depending on their architecture and application. Below, Privacy International outlines some of the big things you need to consider when it comes to understanding any biometric identification system – and some of the questions their proponents need to answer first.

These considerations can be used by anyone, but may be particularly useful for activists, journalists, policy-makers, administrators, and lawyers.

Is it Even Needed?

Identities are extremely important: ensuring that everyone has a “legal identity” including having their birth registered is even one of the aims of the United Nations’ Sustainable Development Goals – the modern cornerstone of international efforts to eradicate poverty. But a smokescreen has been thrown up surrounding this term; it is used to justify any identity system, including biometric ID systems, when simpler and less invasive systems work just as well.

Security or crime-prevention concerns are also frequently given as a motivation for states to introduce biometric identity schemes for their populations; these concerns are often presented in the abstract. They can lead to a ‘security’ argument being used even when there is actually little or no security advantage, for example when it comes to biometric SIM card registration.

Policy makers and donors around the world are prone to suggesting high-tech solutions to a whole host of challenges when simpler ones would be more effective. Instead, they should first answer:

  • What is the problem that the digital identity system is designed to solve; what evidence is there for the extent of the problem; and why would digital identity be a solution to that problem?
  • What alternative solutions are possible? Are they less privacy invasive?

Who will actually benefit?

Any lucrative government contract shielded from public scrutiny is prone to massive levels of waste or corruption. Biometric systems are sold by a powerful industry, often with close ties to governments – and it is far from transparent. For example, a leading player was banned from World Bank contracts for “corrupt and collusive practices” in Bangladesh. Establishing the key companies and domestic actors involved – and their links – is therefore key.

Another thing to establish is whether the system is aimed at actually serving domestic needs – or whether it is designed to serve the interests of foreign powers. For example, the United States’ Biometric Identification Transnational Migration Alert Program (BITMAP) provides 14 countries with biometric systems – despite failing to require adequate privacy protections. The collected data is then shared with US biometric databases, including a new system known as HART developed by arms company Northrop Grumman, which according to a DHS presentation seen by Privacy International will scoop up a whopping 180 million new biometric transactions per year by 2022.

Similarly, European countries are spending billions of euros transferring surveillance and border control capabilities to foreign countries in an effort to stop people migrating to Europe. For example, the European Union’s Trust Fund for Africa provided €28 million to develop a universal nationwide biometric ID system in Senegal by funding a central biometric identity database, the enrolment of citizens, and the interior ministry in charge of the system.

Is there an adequate national legal framework?

Processing of biometric data – including its collection, analysis, storage, and sharing – must be prescribed by law and limited to what is strictly and demonstrably necessary to achieve a legitimate aim. That law must be accessible to the public and sufficiently clear and precise to enable people to foresee its application and the extent of the intrusion into their privacy.

Data protection law is a necessary but not sufficient safeguard against abuse. As of January 2019, over 120 countries around the world have enacted comprehensive data protection legislation. The most comprehensive data protection regulation in the world, the European Union General Data Protection Regulation (GDPR), treats biometric data used for identification purposes as “special category data”, meaning it is considered more sensitive and in need of more protection.

However, many of these laws contain significant exemptions. Many do not apply to the processing of data by intelligence agencies and law enforcement, and even when they do, they contain wide-reaching exemptions for purposes such as national security and the prevention or investigation of crime. Even countries with modern data protection legislation, such as the United Kingdom, do not adequately regulate the processing of biometric data, such as the use of facial recognition technology by the police in public places.

Privacy International believes that in most countries national laws do not adequately regulate the use and sharing of biometric data. They fall short of applicable international human rights law and they fail to effectively address the security risks arising from misuse of biometric data, especially at scale.
 

Has there been a necessity and proportionality assessment?

Under international law, any interference with the right to privacy needs to comply with the principles of necessity and proportionality.

The use of biometrics presents a unique set of concerns. These are neatly summarised in the UN High Commissioner for Human Rights report on the right to privacy in the digital age, as biometric

“data is particularly sensitive, as it is by definition inseparably linked to a particular person and that person’s life, and has the potential to be gravely abused. For example, identity theft on the basis of biometrics is extremely difficult to remedy and may seriously affect an individual’s rights. Moreover, biometric data may be used for different purposes from those for which it was collected, including the unlawful tracking and monitoring of individuals. Given those risks, particular attention should be paid to questions of necessity and proportionality in the collection of biometric data. Against that background, it is worrisome that some States are embarking on vast biometric data-base projects without having adequate legal and procedural safeguards in place”.

The report recommends that States, inter alia “Ensure that data-intensive systems, including those involving the collection and retention of biometric data, are only deployed when States can demonstrate that they are necessary and proportionate to achieve a legitimate aim”.

It should be noted that the creation of a national biometric identification system is not, in itself, a legitimate aim for the collection of biometric data at scale.

Modern standards of data protection recognise the need to afford extra protection to biometric data.[1]

Many national laws, however, do not mention biometric data, and do not explicitly characterise biometric data as personal and sensitive data.

In practice, applying the principles of necessity and proportionality means adopting the least intrusive means capable of achieving the relevant legitimate aim – in the counter-terrorism context, the prevention and investigation of acts of terrorism. It also requires that any measure is accompanied by legal, procedural and technical safeguards to minimise the interference with privacy.

Necessity and proportionality assessments should play a significant role in decisions to create centralised databases, and in the rules that govern the retention of, and access to, biometric data.

Is it a centralised database of biometric data?

Governments and industry often support the creation of large centralised databases containing biometrics information. For example, the Aadhaar biometric identification system in India contains the fingerprints, iris scans, and photographs of over 1.1 billion people.

However, large centralised databases of biometric data have often failed to pass a proportionality assessment under human rights law. As a London School of Economics report on the UK Identity Card stated, “There is an enormous difference in the implications for the human right to privacy between this type of system, and one where a biometric is only stored locally in a smartcard”.

That is because there is a significant difference between storing biometric data locally and storing it in a centralised database, with the latter being significantly more intrusive to privacy. For example, biometric passports used in the UK store an individual’s biometric details on a chip in the passport rather than in a centralised database. Storing biometric data locally allows biometrics to be used for authentication (being sure that the person holding the document is who they claim to be) but prevents their use for the far more intrusive process of identification (finding the identity of a person when it is not known).
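The difference between the two processes can be made concrete with a toy sketch. Everything below – the template format, the similarity measure, and the match threshold – is invented purely for illustration and does not describe any real biometric system: authentication is a 1:1 comparison against the single template stored on the document, while identification is a 1:N search across every template in a central database.

```python
# Toy illustration of 1:1 authentication vs 1:N identification.
# Templates, the similarity measure and the threshold are all hypothetical.

def similarity(a, b):
    """Fraction of matching positions between two equal-length templates."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

THRESHOLD = 0.9  # assumed match threshold, for illustration only

def authenticate(presented, stored_on_card):
    """1:1 verification: is the holder who the document says they are?"""
    return similarity(presented, stored_on_card) >= THRESHOLD

def identify(presented, central_db):
    """1:N identification: who, in the whole database, does this match?"""
    return [person for person, template in central_db.items()
            if similarity(presented, template) >= THRESHOLD]
```

Note that `identify` only works if every template sits in one searchable place – exactly the centralised database this section warns about – whereas `authenticate` needs nothing beyond the template on the card itself.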

Data protection authorities in Europe have raised grave reservations about the proportionality of proposals that would lead to the storage of biometric data on all non-nationals applying for a visa or residence permit in centralised databases for the purpose of carrying out subsequent checks on illegal immigrants (particularly those without documents).

Commenting on the Kuwaiti Law No. 78 (2015) on counter-terrorism, which requires nationwide compulsory DNA testing and the creation of a database under the control of the Minister of the Interior, the UN Human Rights Committee found that it imposes unnecessary and disproportionate restrictions on the right to privacy.

It is worth noting that the recommendations of the UN Human Rights Committee to the Kuwaiti governments included amending the law “with a view to limiting DNA collection to individuals suspected of having committed serious crimes and on the basis of a court decision; (b) ensure that individuals can challenge in court the lawfulness of a request for the collection of DNA samples; (c) set a time limit after which DNA samples are removed from the database; and (d) establish an oversight mechanism to monitor the collection and use of DNA samples, prevent abuses and ensure that individuals have access to effective remedies.”

How long is the data retained for?

Under international law, indiscriminate retention of personal data, including biometric data, is never proportionate and necessary, even when governments seek to justify it on grounds of national security, including the threat of terrorist acts.

In the case of S. and Marper v. the United Kingdom, the European Court of Human Rights found that there had been a violation of the right to privacy by the UK as a result of the blanket and indiscriminate nature of the powers to retain the fingerprints, cellular samples and DNA profiles of persons suspected but not convicted of offences, which failed to strike a fair balance between the competing public and private interests.[2]

One of the first things to establish is the time-frame during which the system is designed to be used, and to ensure that there exists a publicly accessible data retention policy.
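A published retention policy only matters if it is mechanically enforced. A minimal sketch of such enforcement – the one-year period and the record layout are assumed purely for illustration, not drawn from any real system – might look like:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical published retention period: one year from collection.
RETENTION_PERIOD = timedelta(days=365)

def purge_expired(records, now):
    """Keep only records still within the retention period.

    Each record is assumed to be a dict with a timezone-aware
    'collected_at' timestamp.
    """
    return [r for r in records if now - r["collected_at"] <= RETENTION_PERIOD]
```

Such a routine would need to run on a schedule, and its effects should themselves be auditable, so that an oversight body can verify that expired data was in fact deleted.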

What other purposes could the data be used for?

Closely linked to the necessity and proportionality assessment are concerns about the repurposing of biometric databases (often described as ‘mission creep’). The mere existence of biometric data in a centralised identification system can generate new justifications for its use and efforts to broaden the range of authorities with access to it.

Often in the name of national security and counter-terrorism, states have sought to allow law enforcement and security agencies access to databases designed for purposes unrelated to counter-terrorism and prevention or investigation of crimes.

For example, in 2004, the European Asylum Dactyloscopy Database (“EURODAC”) was established to facilitate the application of the Dublin Regulation, which determines the EU Member State responsible for examining an asylum application. In 2009, EU Member States decided that EURODAC should be made accessible for law enforcement purposes in order to fight terrorism, a purpose for which the data processed was never intended, as noted by the European Data Protection Supervisor (“EDPS”) in its Opinion on the matter. The EDPS also warned that the use of EURODAC for law enforcement purposes, and specifically for terrorism, means that a particularly vulnerable group in society, namely applicants for asylum, could be exposed to further risks of stigmatisation, even though they are “not suspected of any crime” and “are in need of higher protection because they flee from persecution.”

There have been some cases where privacy concerns about access to centralised databases by the police or security services have led to judgments limiting such access. For example, in India, Section 33(2) of the Aadhaar Act allowed access to the Aadhaar database (including biometrics), for the purpose of national security, if authorised by an intelligence officer of the rank of Joint Secretary or above. The Aadhaar judgment ensured that anybody whose data was accessed in this way would be entitled to a hearing.

A key question which should be raised is:

  • Does the design of the system meet the goals established in the purpose of the system, or does its design exceed those purposes to create additional privacy risks?

Could the data be connected to other databases?

Similar concerns apply to the trend among governments towards ‘interconnectivity’ of different biometric databases. This trend is generally presented as positive by security actors. For example, the United Nations Compendium of recommended practices for the responsible use and sharing of biometrics in counter-terrorism speaks positively of “the aggregation of disparate, single-mode databases” which “has evolved, in some countries and regions, into state-of-the-art, replacement networks that feature interconnected multi-modal databases designed to service a range of business needs across law enforcement, border management and other government functions at both a national and international level.”

However, any such interoperability needs to respect the limits that human rights law, and in particular data protection standards, impose on such measures: limits that are necessary to prevent mission creep and the accompanying human rights abuses.

Is it secure?

Unlike a password, an individual’s biometrics cannot be easily changed. As a result, remedying unauthorised access to biometric data is either impossible or comes at significant cost.
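The contrast with passwords can be made concrete. The salted-hash scheme below is a standard, generic pattern shown only to illustrate the point about revocability; it does not describe any particular system’s design:

```python
import hashlib
import secrets

def hash_credential(secret, salt=None):
    """Store a salted hash of a secret rather than the secret itself."""
    if salt is None:
        salt = secrets.token_hex(16)  # fresh random salt per credential
    digest = hashlib.sha256((salt + secret).encode()).hexdigest()
    return salt, digest

# After a breach, a password is simply rotated: the user picks a new
# secret, a new salted hash is stored, and the stolen one is worthless.
_, old_digest = hash_credential("hunter2")
_, new_digest = hash_credential("correct-horse-battery")

# A fingerprint has no equivalent reset path: the underlying trait is
# fixed for life, so a leaked template stays linked to its owner forever.
```

The asymmetry is the whole point: a credential system built on secrets you can replace fails gracefully after a breach; one built on traits you cannot replace does not.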

Biometric data breaches can seriously affect individuals in a number of ways, whether through identity theft, fraud, financial loss or other damage. The EU Fundamental Rights Agency found in relation to a central national database that, “due to its scale and the sensitive nature of the data which would be stored, the consequences of any data breach could seriously harm a potentially very large number of individuals. If such information ever falls into the wrong hands, the database could become a dangerous tool against fundamental rights.”

In January 2018, it was reported that access to the entire Aadhaar database – including the names, addresses, phone numbers, and photographs, but not fingerprint or iris scan data – was being sold for 500 rupees on a WhatsApp group.

A breach of the US government’s Office of Personnel Management – the agency that handles the security clearances of civilian workers – was announced in 2015. The records of 21.5 million people were stolen, including the fingerprints of 5.6 million federal employees. A security expert said that this put undercover operatives at risk: "A secret agent's name might be different. But they'll know who you are because your fingerprint is there. You'll be outed immediately." That a breach could hit one of the most sensitive biometric databases maintained by one of the best-resourced, most security-focused governments in the world, with advanced access control protocols, raises urgent questions about the ability of less well-resourced actors to defend against such breaches.

The risks associated with unauthorised access to biometric data also threaten the effectiveness of counter-terrorism measures, particularly when breaches are not promptly reported and notified to independent oversight bodies and to the individuals concerned.

While regular risk assessments of the end-to-end process of biometric applications should be fundamental, there is also a need to conduct a risk assessment prior to the implementation of identification systems based on biometric data, and to embed privacy and security in the design of such systems.

Questions which should be asked include:

  • Who holds the responsibility and so the obligations for the system (design, deployment, management, auditing, maintenance, etc.)?
  • How do new systems operate in relation to existing non-humanitarian identity systems, such as national ID systems?
  • What are the unintended consequences in the short, medium and long term?
  • Does the entity deploying the system have the expertise, tools and resources to undertake a well-informed risk assessment and to mitigate the risks identified?
  • What minimum IT security measures should be implemented, and is there financial and technical support for such measures?
  • What IT security measures will be provided in the future, for example if the providing company ceases support?
  • What will happen if any data in the database is breached by various actors?

Who will the data be shared with?

Privacy International recognises the importance and benefit of intelligence sharing in the context of preventing and investigating terrorism or other genuine, serious threats to national security. However, unregulated, unfettered and unwarranted intelligence sharing poses substantive risks to human rights and to the democratic rule of law.

Sharing personal data, such as biometric data, across jurisdictions can put people at high risk, and therefore must be regulated. Such sharing falls within the purview of international human rights law, and in particular data protection. While such sharing is often said to be in line with existing obligations under international human rights law, there is little actual detail or guidance on how this can be achieved.

The UN Compendium identifies some principles that should regulate such sharing of biometric data, focusing on the necessity of a clear legal framework and limits on the use of such data. However, its recommended practices clearly favour the maximum sharing of biometric data across borders.

Privacy International has published detailed recommendations on what adequate safeguards and oversight of intelligence sharing should look like.

Biometric data, because of its sensitivity, requires even stricter limitations and safeguards to ensure that its sharing across jurisdictions complies with international human rights law. Therefore, states must introduce additional minimum safeguards to ensure their intelligence-sharing laws and practices comply with applicable international human rights law (notably Article 12 of the Universal Declaration of Human Rights and Article 17 of the International Covenant on Civil and Political Rights).

 

 

[1] The Council of Europe Modernised Convention for the Protection of Individuals with Regard to the Processing of Personal Data (“Convention 108+”), Article 6, provides that processing of biometric data uniquely identifying a person shall only be allowed where appropriate safeguards are enshrined in law, complementing those of Convention 108+. The EU General Data Protection Regulation (“GDPR”), Article 9, prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, subject to limited exceptions. The Brazilian General Data Protection Law (“LGPD”), Federal Law no. 13,709/2018, Article 5, also provides special protections for biometric data.

[2] The Court emphasised: “...The need for such safeguards is all the greater where the protection of personal data undergoing automatic processing is concerned, not least when such data are used for police purposes. The domestic law should notably ensure that such data are relevant and not excessive in relation to the purposes for which they are stored; and preserved in a form which permits identification of the data subjects for no longer than is required for the purpose for which those data are stored ... The domestic law must also afford adequate guarantees that retained personal data was efficiently protected from misuse and abuse ...The above considerations are especially valid as regards the protection of special categories of more sensitive data ...and more particularly of DNA information, which contains the person's genetic make-up of great importance to both the person concerned and his or her family” (S. and Marper v. The United Kingdom, App. Nos. 30562/04 and 30566/04, European Court of Human Rights, Judgment (4 December 2008), para 103)