The rise of surveillance databases
Surveillance databases are on the rise all around us, and with them comes a wide array of issues. Here we begin to unpack these concerns and discuss some of the prominent global drivers of this trend.
1. What is the issue?
Governments and international organisations are developing and accessing databases to pursue a range of vague and ever-expanding aims, from countering terrorism and investigating crimes to border management and migration control.
These databases hold the personal data, including biometric data, of millions if not billions of people. Such data is processed by technologies, including Artificial Intelligence (AI), to surveil, profile, predict future behaviour, and ultimately make decisions affecting the lives of individuals and communities. Invariably these databases expand in scope and aims. This is not an accident: design features such as modularity and interoperability invite repurposing and mission creep.
Private companies are at the forefront of the development of these surveillance databases, offering governments data management and analytics tools and often running them on behalf of public authorities, in secret and without accountability.
Intergovernmental organisations, such as the United Nations and the European Union, have been supporting and facilitating the adoption of these surveillance measures in the name of counter-terrorism and border security, notably by providing financial and technical support for states to develop these databases, as well as developing their own, often disregarding their own human rights and due diligence policies.
PI and our partners have long documented the expansion of surveillance databases and have identified some common and recurring issues across the wide range of their uses. We have also long advocated for legal, policy, and technical measures, rooted in international human rights law, to mitigate the risks of human rights violations associated with the use of such databases.
2. What are the key common concerns?
We are concerned that the number of these surveillance databases keeps expanding without sufficient interrogation of the reasons for, and necessity of, introducing them. We are further concerned that these databases are introduced in the absence of appropriate regulatory frameworks, independent oversight and access to effective remedies.
2.1 Mass data harvesting
The quantity and type of personal data held in these surveillance databases keep expanding. Mass collection and processing of personal data are invariably disproportionate and untargeted, expanding the capability of governments to surveil individuals indiscriminately. The vast majority of individuals whose personal data is caught by these databases are unlikely to be threats to national security or to be suspected of having committed serious crimes. For example, the UK Security and Intelligence Agencies have been building massive, comprehensive datasets of information on each and every individual, collecting and combining passport information, social media activity, travel data and finance-related activity from multiple sources, on unclear legal bases and with minimal oversight.
Of particular concern is the expanding capacity to collect biometric data, most commonly in the form of fingerprints and facial images, but increasingly also iris scans and other identifiers.
The use of biometric data presents a unique set of human rights risks. These are neatly summarised in the UN High Commissioner for Human Rights' 2022 report on the right to privacy in the digital age: biometric data is particularly sensitive, as it is by definition inseparably linked to a particular person and that person’s life, and has the potential to be gravely abused. For example, identity theft on the basis of biometrics is extremely difficult to remedy and may seriously affect an individual’s rights. Moreover, biometric data may be used for different purposes than those for which it was collected, including the unlawful tracking and monitoring of individuals. Given those risks, particular attention should be paid to questions of necessity and proportionality in the collection of biometric data. Against that background, it is worrisome that some states are embarking on vast biometric database projects without having adequate legal and procedural safeguards in place.
Large centralised databases of biometric data have often failed to pass a proportionality assessment under human rights law and data protection standards. That is because there is a significant difference between storing biometric data locally and storing it in a centralised database, with the latter being significantly more intrusive to privacy, as noted, for example, in the European Data Protection Board’s Opinion 11/2024 on the use of facial recognition to streamline airport passengers’ flow.
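To make the architectural point concrete, here is a minimal, purely illustrative sketch in Python: in the local model a biometric template never leaves the holder's card or device and is only compared one-to-one at the point of verification, whereas a centralised database allows one-to-many searches across everyone enrolled. All names and the similarity logic are invented for illustration and do not describe any particular system.

```python
# Illustrative sketch only (not any real system): why a centralised biometric
# database is more intrusive than keeping the same template on the holder's
# own document or device.

from dataclasses import dataclass

@dataclass(frozen=True)
class Template:
    """A toy biometric template, e.g. a fingerprint or face feature vector."""
    features: tuple

def similarity(a: Template, b: Template) -> float:
    # Toy metric: fraction of positions where the two feature tuples agree.
    matches = sum(x == y for x, y in zip(a.features, b.features))
    return matches / max(len(a.features), 1)

# --- Local model: the template is stored only on the ID card or device ---
def verify_locally(presented: Template, stored_on_card: Template,
                   threshold: float = 0.9) -> bool:
    """One-to-one check at the point of use; no authority holds a searchable set."""
    return similarity(presented, stored_on_card) >= threshold

# --- Centralised model: every enrolled person becomes searchable ---
central_db: dict[str, Template] = {}  # identity -> template, held by the operator

def enrol(person_id: str, template: Template) -> None:
    central_db[person_id] = template

def identify_centrally(probe: Template, threshold: float = 0.9) -> list[str]:
    """One-to-many search across everyone enrolled, which is what makes
    repurposing (e.g. mass identification from a single probe) much easier."""
    return [pid for pid, tmpl in central_db.items()
            if similarity(probe, tmpl) >= threshold]
```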
While human rights experts and courts as well as data protection authorities continue to raise the alarm about the interference with the right to privacy and the scale of such processing of sensitive personal data, governments and industry tend to pay lip service to these concerns, agreeing in principle to the need to uphold human rights while failing in practice to conduct the human rights due diligence, including the human rights and data protection impact assessments, necessary to mitigate these risks.
2.2 Increased risk of security breaches
Surveillance databases have been subject to abuse and breaches: abuse occurs when data is collected, stored and accessed without proper safeguards. For example, the UK intelligence agency MI5 admitted it stored the public’s data improperly when it had no legal right to do so, and that it failed to disclose this to the UK Home Office and oversight bodies.
Because of the volume and type of data they contain, these surveillance databases are a prime target for malicious actors.
The lack of adequate security measures for databases containing personal information has resulted in unauthorised access to the data of millions of people in countries across the world. For example, in October 2021 the database of Argentina’s agency responsible for issuing citizens’ ID cards was breached: an anonymous attacker claimed to have accessed the National Registry of People (RENAPER) database and obtained private information about 45 million Argentinian citizens, including photos, full names and home addresses. This information was put on sale online. In August 2023, it emerged that a hostile cyber-attack targeting the full electoral register in the UK had resulted in unauthorised access to the data of 40 million voters, including their names and addresses.
Data breaches seriously affect individuals in several ways, whether through identity theft, fraud, financial loss or other damage. The EU Fundamental Rights Agency found in relation to a central national database that, “due to its scale and the sensitive nature of the data which would be stored, the consequences of any data breach could seriously harm a potentially very large number of individuals. If such information ever falls into the wrong hands, the database could become a dangerous tool against fundamental rights.”
2.3 Mission creep and interoperability
Often in the name of combating terrorism, states have sought to allow law enforcement and security agencies access to databases designed for purposes unrelated to counter-terrorism or the investigation of serious crimes, a phenomenon known as ‘mission creep’, whereby a database comes to be used for purposes other than those for which it was originally intended. This is particularly so in relation to databases holding information on travellers and people on the move crossing international borders, as well as asylum seekers and beneficiaries of international protection.
In Europe, for example, one key ongoing concern relates to the development of biometric data collection systems modelled on the European Asylum Dactyloscopy Database (EURODAC), allowing for seamless interoperability in the future. In 2004, EURODAC was established to facilitate the application of the Dublin Regulation, which determines the EU Member State responsible for examining an asylum application. EU Member States then proceeded to make EURODAC accessible for law enforcement purposes in order to fight terrorism, a purpose for which the data processed was never intended, as noted by the European Data Protection Supervisor (EDPS) in its Opinion on the matter. The EDPS’s opinion also made the point that the use of EURODAC for law enforcement purposes, and specifically for counter-terrorism, means that a particularly vulnerable group in society, namely applicants for asylum, could be exposed to further risks of stigmatisation, even though they are “not suspected of any crime” and “are in need of higher protection because they flee from persecution.”
Despite these warnings, the EU is going further in its quest to make databases interoperable. As noted in this report funded by PI, the EU has an ambitious and far-reaching plan to integrate and interconnect various EU databases, and in turn to connect those databases with other data sources maintained by Europol, Interpol and national institutions in third countries, creating a building block for more comprehensive systems of surveillance and control that will extend further into non-EU states.
Interoperability, interconnectivity, modular design: these neutral, reassuring-sounding terms are a sure sign of mission creep and raise red flags about the risk of personal data being processed for incompatible purposes.
A telling example of this concern is the planned integration of the UN’s goTravel software, developed under the auspices of the UN counter-terrorism strategy, with the International Organisation for Migration’s (IOM) Migration Information and Data Analysis System (MIDAS). The IOM describes MIDAS as a fully customisable border management information system, capable of capturing, inter alia, biographical and biometric data of travellers as well as Advanced Passenger Information (API) data. Its intended purpose is extremely wide and vague, covering border security as well as managing and monitoring migration flows.
States’ and international organisations’ cavalier attitude towards purpose limitation, one of the central tenets of data protection, is concerning because the necessity and proportionality considerations that may allow the use of certain privacy-intrusive surveillance measures for counter-terrorism purposes are entirely different from those applicable to other legitimate aims, such as monitoring migration flows.
2.4 Regulatory void
A further key concern when it comes to surveillance databases is that they are often rolled out in the absence of adequate legislation or regulation governing their use and safeguarding the human rights of the data subjects whose information they contain.
For example, it is well established that processing biometric data needs to be grounded in adequate legislation, limiting the interference with the right to privacy to what is strictly and demonstrably necessary to achieve a legitimate aim. Domestic law must be accessible to the public and sufficiently clear and precise to enable persons to foresee its application and the extent of the intrusion into their privacy. However, most states continue to have no or inadequate legislation in place. In 2018, the UN Security Council recognised that “biometric technology creates particular challenges because of the gap created by technological innovation and the introduction of legislation regulating such technologies.”
Since this assessment, the landscape has not improved. The UN-established expert body, the Counter-Terrorism Committee Executive Directorate (CTED), shows in its own assessments that many states “still lack sufficient legal and regulatory frameworks, data management and processing protocols, risk and impact assessment practices, and rigorous access controls and records for technology-based systems”. Many national data protection laws do not even mention biometric data, or do not explicitly characterise it as sensitive personal data. Many also contain significant and broad exemptions: they often do not apply to the processing of data by intelligence agencies and law enforcement, and even when they do, they contain wide-reaching exemptions for purposes such as national security and the prevention or investigation of crime.
2.5 Lack of human rights impact assessments
PI has found that surveillance databases are implemented without prior human rights and data protection impact assessments. For example, the US Department of Defense’s biometric programme in Afghanistan and Iraq, pursued under the guise of preventing terrorist acts, was developed and implemented without prior assessment of its human rights impact and without the safeguards necessary to prevent its abuse.
As noted by the Council of Europe Commissioner for Human Rights in their Positions on Counter-Terrorism and Human Rights Protection (5 June 2015), “an independent assessment of the use and impact of individual information databases must be carried out in order to ensure that they are necessary and proportionate.” The lack of human rights impact assessments has also been identified in the roll-out of nationwide digital ID systems, which increasingly require the processing of vast amounts of personal data. Courts around the world asked to rule on the human rights implications of identity systems, including in a recent Kenyan ruling, have taken the position that a human rights impact assessment, and specifically a privacy impact assessment, is a precondition and key element of the protection framework that should be in place prior to the deployment of an identity system.
3. Exporting surveillance database ‘solutions’ – the role of international organisations
Some states have bilaterally supported other states in developing their surveillance databases, often under the guise of fighting terrorism.
For example, the US National Strategy to Combat Terrorist Travel has a very strong focus on advancing the use of biometric technologies to detect and prevent suspected terrorists from travelling to the US. The strategy envisages supporting foreign governments to deploy biometric technologies, with the US government supporting the development of these technologies alongside the establishment of information-exchange arrangements. In particular, the US Department of Defense (DOD) has funded a biometric system for the Iraqi and Afghan national security forces, a programme which, as noted above, was developed and implemented without prior assessment of its human rights impact and without the safeguards necessary to prevent its abuse.
Beyond bilateral support, some international organisations, notably the EU and the UN, are providing assistance and funding to states to develop their surveillance databases. PI’s research suggests that this assistance is almost devoid of due diligence, including human rights impact assessments and data protection impact assessments.
3.1 The EU
The EU is both expanding its internal surveillance databases and exporting the surveillance model to third countries, often to serve the EU’s migration control objectives.
Firstly, some of its internal databases aim to integrate data and support the tracking of individuals before they enter the EU. The development of an EU “travel intelligence” architecture is notably propelled by Europol. More information on the travel intelligence plans can be found in the final report of the Europol-Frontex Future Group on Travel Intelligence, made public by Statewatch in May 2022. The report outlined the possibility of a “European System for Traveller Screening” that would incorporate data from as many sources as possible to generate information on an individual across the “EU Border and Travel Continuum” – from an individual planning to travel to the EU, to crossing the border, staying within the EU, and then departing.
Secondly, the EU and some European member states are spending billions of euros transferring surveillance and border control capabilities to foreign countries to ensure those countries stop people from migrating to Europe.
Under the EU Trust Fund for Africa, for example, which is being used to manage migration from Africa to Europe, millions have been allocated to provide countries with digital tools to collect data from devices and to build mass-scale biometric ID systems. The Trust Fund provided €28 million to develop a universal nationwide biometric ID system in Senegal, funding a central biometric identity database, the enrolment of citizens, and the interior ministry in charge of the system. Other EU funds have been used to train police in North Africa on wiretapping, monitoring social media users, and decrypting intercepted internet content. In the Balkans, similar funds have been allocated to provide authorities with wiretapping equipment and to build biometric ID systems.
Following a complaint by Privacy International and five other CSOs about EU support for projects across Africa aimed at bolstering surveillance and tracking powers, a European Ombudsman inquiry found that “the Commission was not able to demonstrate that the measures in place ensured a coherent and structured approach to assessing the human rights impacts”. It recommended that the European Commission now require that an “assessment of the potential human rights impact of projects be presented together with corresponding mitigation measures.” The lack of such protections, which the Ombudsman called a “serious shortcoming”, poses a clear risk that these surveillance transfers might cause serious violations of, or interferences with, fundamental rights.
3.2 The United Nations
The UN is becoming an increasingly important actor in promoting the adoption of surveillance databases by UN Member States as part of the UN counter-terrorism strategy.
In particular, responding to a series of UN Security Council resolutions calling on states “to collect and analyze Advance Passenger Information (API) and develop the ability to collect, process and analyse […] Passenger Name Record (PNR) data”, the UN Countering Terrorist Travel Programme (CTTP) plays a key role in promoting the surveillance of travellers. In fact, it is unique within the UN in offering Member States purpose-built software, goTravel, that enables government authorities to process travellers’ personal data. PI believes that the UN CTTP has not put in place the necessary tools to support Member States’ processing of travellers’ data in accordance with international human rights law, and that it has not demonstrated its compliance with the UN Due Diligence Policy.
Similarly, leading UN entities implementing UN counter-terrorism policies are promoting the adoption of biometric systems at borders. For example, it is telling how the UN Compendium of Recommended Practices for the Responsible Use & Sharing of Biometrics in Counter-Terrorism talks positively of the integration of all national law enforcement biometric databases and of “interconnected multi-modal databases designed to service a range of business needs across law enforcement, border management and other government functions at both a national and international level.” In fact, the UN Compendium suggests the integration of biometric databases as a solution to predict terrorist activities: “The traditional biometric databases […] were designed to be reactive and pose investigative questions based on identity and current or past activity such as “Are you known to us, who are your associates and what have you done?” Integrated biometric databases can obviously answer the same questions but they may also be used pro-actively to infer and predict potential future actions and associations i.e. “What are you and your associates planning or likely to do and when, where?” A comprehensive and careful analysis of all outputs across the network is therefore essential and can be a critical success factor in evaluating and anticipating terrorist activity when coupled with other intelligence.”
While there is some recognition of the need to develop a legal framework to allow such interoperability of databases, there is no reference to the limits that human rights law, and in particular data protection standards, impose on such measures: limits that are necessary to prevent mission creep and the accompanying human rights abuses.
3.3 Interpol
The International Criminal Police Organisation (INTERPOL) is an actor promoting the adoption of data-driven policing methods around the world, including the development of sensitive databases that could be used for surveillance purposes and the tracking of populations across borders.
INTERPOL has 196 member countries, making it the world’s largest police organisation, and maintains 19 databases containing personal data such as fingerprints and facial images.
PI has researched INTERPOL’s EU-funded West African Police Information System (WAPIS) programme, a project whose objectives include counter-terrorism and which aims to facilitate the establishment of digitised national police databases across West Africa, while ensuring these databases can connect with each other at the regional level as well as internationally via INTERPOL’s own global police communications system, i-24/7, through which a host of other databases can be accessed. In addition, the programme facilitates the use of biometric technology and biometric data, and forms part of the EU’s migration management efforts.
Our report found that the lack of transparency surrounding INTERPOL’s support for increasingly interoperable databases, and around which actors have access to them, is a point of concern, given the risk that such data may be exploited or misused. It further found that, by facilitating the processing of sensitive personal data under WAPIS without effective legal frameworks or functioning independent data protection authorities, INTERPOL fails to uphold its responsibilities to protect human rights and mitigate risks of data protection abuses.
More broadly, INTERPOL promotes the use of biometric personal data on a global scale, particularly at border locations through its Biometric Hub, and holds its own biometric databases.
4. The role of companies
Private companies are major proponents of these surveillance databases.
For example, the French company Civipol (or Civi.Pol Conseil) is heavily involved in the development of biometric identity systems in West Africa. In Senegal, it is the company which conducted the entire diagnostic evaluation and the formulation of the management plan, and it will now also be involved in implementation alongside the Belgian development agency ENABEL. In Côte d’Ivoire, it will likewise implement the project by providing technical assistance.
The company Palantir sells data integration and analytics platforms. Its two primary products, Gotham and Foundry, aggregate the disconnected information storage systems that house an organisation’s disparate information sources (e.g. log files, spreadsheets, tables, etc.) into ‘a single, coherent data asset’. The company reportedly built and helped to deploy a data analysis platform used by US immigration authorities.
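To give a rough sense of what aggregating disparate sources into ‘a single, coherent data asset’ can mean in practice, the sketch below merges toy records from three invented, disconnected sources into one consolidated profile keyed on a shared identifier. The sources, field names and join logic are hypothetical and do not describe Gotham, Foundry or any other real product.

```python
# Purely illustrative sketch: a toy version of consolidating disparate data
# sources into a single profile per person, keyed on a shared identifier.
from collections import defaultdict

# Invented records standing in for separate, previously disconnected systems.
travel_records = [
    {"person_id": "A123", "flight": "XY100", "date": "2024-03-02"},
]
portal_logs = [
    {"person_id": "A123", "service": "visa-portal", "timestamp": "2024-02-28T10:15"},
]
contact_list = [
    {"person_id": "A123", "name": "Jane Doe", "phone": "+00 0000 0000"},
]

def build_profiles(*sources):
    """Fold every record from every named source into one dossier per identifier."""
    profiles = defaultdict(lambda: defaultdict(list))
    for source_name, records in sources:
        for record in records:
            key = record["person_id"]
            profiles[key][source_name].append(
                {k: v for k, v in record.items() if k != "person_id"}
            )
    return profiles

profiles = build_profiles(
    ("travel", travel_records),
    ("logins", portal_logs),
    ("contacts", contact_list),
)
# One consolidated view assembled from three previously separate silos.
print({source: entries for source, entries in profiles["A123"].items()})
```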
These Public-Private Partnerships (PPPs), whereby private companies are contracted by states to deliver services traditionally provided by governments, include the development of surveillance databases without the safeguards required to ensure human rights are not abused. PPPs are taking on a new form, diverging from traditional public procurement relationships: the state may be developing new systems or processes entirely reliant on the services of one company, and the company may be receiving access to data for use in developing its own services. Beyond a simple “one-off” commercial relationship, these partnerships are often predicated on ever more private access to data, often circumventing public procurement rules and impinging on fundamental rights in the process.
The privatisation of public responsibilities can be deeply problematic when it takes place without the safeguards required to ensure human rights are not quietly abused. This is particularly true when the systems deployed are used for surveillance and the mass processing of personal data. As our research has consistently shown, common concerns include a lack of transparency, accountability, independent oversight and access to remedies and redress.
5. What is PI doing about it?
- Research and expose the development of surveillance databases around the world.
- File legal complaints to challenge the unregulated development and use of surveillance databases.
- Advocate with states to be transparent and to put in place safeguards to respect and protect human rights.
- Call on international organisations to uphold the rule of law and act with due diligence to avoid supporting surveillance systems that abuse human rights.
- Provide resources, including on freedom of information and data protection, to help other organisations unmask the abusive practices surrounding the roll-out of surveillance databases.