Artificial Intelligence and its applications are a part of everyday life: from social media newsfeeds to mediating traffic flow in cities, from autonomous cars to connected consumer devices like smart assistants, spam filters, voice recognition systems, and search engines.

AI has the potential to revolutionise societies in many ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights, including the right to privacy.

AI-driven consumer products and autonomous systems are frequently equipped with sensors that generate and collect vast amounts of data without the knowledge or consent of those in their proximity. AI methods are being used to identify people who wish to remain anonymous; to infer and generate sensitive information about people from non-sensitive data; to profile people based upon population-scale data; and to make consequential decisions using this data, some of which profoundly affect people’s lives.

What is the problem

The range and diversity of AI applications means that the problems and risks are manifold. These include:

  • Re-identification and de-anonymisation: AI applications can be used to identify and thereby track individuals across different devices, in their homes, at work, and in public spaces. For example, while personal data is routinely (pseudo-)anonymised within datasets, AI can be employed to de-anonymise this data (a toy sketch of how such linkage works follows this list). Facial recognition is another means by which individuals can be tracked and identified, which has the potential to transform expectations of anonymity in public space.
  • Discrimination, unfairness, inaccuracies, bias: AI-driven identification, profiling, and automated decision-making may also lead to unfair, discriminatory, or biased outcomes. People can be misclassified, misidentified, or judged negatively, and such errors or biases may disproportionately affect certain groups of people (the second sketch after this list illustrates how).
  • Opacity and secrecy of profiling: Some applications of AI can be opaque to individuals, regulators, or even the designers of the system themselves, making it difficult to challenge or interrogate outcomes. While there are technical approaches to improving the interpretability and/or auditability of some systems for different stakeholders, a key challenge remains where this is not possible and the outcome has significant impacts on people’s lives.
  • Data exploitation: People are often unable to fully understand what kinds and how much data their devices, networks, and platforms generate, process, or share. As we bring smart and connected devices into our homes, workplaces, public spaces, and even bodies, the need to enforce limits on data exploitation becomes increasingly pressing. In this landscape, uses of AI for purposes like profiling, or to track and identify people across devices and even in public spaces, amplify this asymmetry.
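
To make the first risk above concrete, below is a minimal sketch of a classic ‘linkage attack’ of the kind long documented in the privacy literature (for example, Latanya Sweeney’s re-identification of medical records by joining them to voter rolls). Everything in it is hypothetical: the datasets, column names, and records are invented purely for illustration.

```python
# A toy linkage attack: re-identify an "anonymised" dataset by joining
# it to a public dataset on shared quasi-identifiers.
# All data here is invented for illustration.
import pandas as pd

# "Anonymised" health records: names removed, but quasi-identifiers
# (postcode, birth date, sex) retained.
health = pd.DataFrame({
    "postcode":   ["N1 9GU", "SW1A 1AA", "M1 1AE"],
    "birth_date": ["1980-02-14", "1975-07-01", "1990-11-30"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["depression", "diabetes", "asthma"],
})

# A public register (e.g. an electoral roll) that pairs names with the
# same quasi-identifiers.
register = pd.DataFrame({
    "name":       ["Alice Example", "Bob Example", "Carol Example"],
    "postcode":   ["N1 9GU", "SW1A 1AA", "M1 1AE"],
    "birth_date": ["1980-02-14", "1975-07-01", "1990-11-30"],
    "sex":        ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
reidentified = health.merge(register, on=["postcode", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

The second risk, disproportionate error, can be shown with equally simple arithmetic: a system can look accurate overall while its mistakes cluster in one group. The numbers below are again made up, solely to illustrate comparing false positive rates across groups.

```python
# Toy comparison of false positive rates across two groups.
# "flagged" is the system's decision; "actual" is the ground truth.
# All numbers are invented for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":   ["A"] * 6 + ["B"] * 6,
    "flagged": [1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0],
    "actual":  [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0],
})

for group, rows in decisions.groupby("group"):
    negatives = rows[rows["actual"] == 0]     # people who should not be flagged
    fpr = (negatives["flagged"] == 1).mean()  # share of them wrongly flagged
    print(f"group {group}: false positive rate = {fpr:.0%}")

# With these invented numbers, group B is wrongly flagged far more
# often (50%) than group A (20%), despite both facing the same system.
```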

What is the solution

The research, development, and use of AI must be subject to the minimum requirement of respecting, promoting, and protecting international human rights standards.

Different types of AI and different domains of application raise specific ethical, regulatory, and human rights issues. To ensure that individuals are protected from the risks posed by AI, and that potential collective and societal harms are addressed, existing laws must be reviewed and, if necessary, strengthened to address the effects of new and emerging threats to rights, including by establishing clear limits, safeguards, and oversight and accountability mechanisms.

What PI is doing

Many areas of PI’s work touch on AI applications in different contexts, including advertising, welfare, and migration.

Across these areas and more, we are working with policy makers, regulators, and other civil society organisations to ensure that adequate safeguards are in place, accompanied by oversight and accountability mechanisms.

Here are some examples:

  • We investigate the creeping use of facial recognition technology across the world; work with community groups, activists, and others to raise awareness about the technology and what they can do about it; and push national and international bodies to listen to people’s concerns and take steps to protect rights.
  • We have launched a legal challenge against the UK intelligence agency MI5 over their handling of vast troves of personal data in an opaque ‘technical environment’.
  • We scrutinise invasive and often hidden profiling practices, whether by data brokers, on mental health websites, or by law enforcement, and how these can be used to target people. We challenge these practices and provide recommendations to policy-makers.
  • We demand transparency on the use of AI applications, whether by companies such as Palantir in response to the Covid-19 crisis, by those developing digital identity ‘solutions’, by political parties in their digital campaigns, or by public authorities such as the UK National Health Service as part of its contract with Amazon.

News & Analysis

On Friday, 31 January 2020, Privacy International (PI) and Liberty filed a complaint with the Investigatory Powers Tribunal (IPT), the judicial body that oversees the intelligence agencies, against MI5 in relation to how they handle vast troves of personal data.

Advocacy

On November 1, 2019, we submitted evidence to an inquiry carried out by the Scottish Parliament into the use of Facial Recognition Technology (FRT) for policing purposes.

The US Department of Homeland Security awarded a $113 million contract to General Dynamics to carry out the Visa Lifecycle Vetting Initiative (VLVI), a renamed version of the Extreme Vetting Initiative and part of a larger effort called the National Vetting Enterprise.

VeriPol, a system developed at the UK's Cardiff University, analyses the wording of victim statements in order to help police identify fake reports. By January 2019, VeriPol was in use by Spanish police, who said it had helped them identify 64 false reports in one week.

In October 2018, the Singapore-based startup LenddoEFL was one of a group of microfinance startups aimed at the developing world that used non-traditional types of data, such as behavioural traits and smartphone habits, for credit scoring.

In November 2018, tests began of the €4.5 million iBorderCtrl project, which saw AI-powered lie detectors installed at airports in Hungary, Latvia, and Greece to question passengers travelling from outside the EU.

In November 2018, worried American parents wishing to check out prospective babysitters, and dissatisfied with criminal background checks, began paying $24.99 for a scan from the online service Predictim, which claimed to use "advanced artificial intelligence" to offer an automated risk rating.

In November 2018, researchers at Sweden's University of Lund, the US's Worcester Polytechnic Institute, and the UK's Oxford University announced that in August the US State Department had begun using a software program they had designed that uses AI to find the best match for a refugee's needs.

Advocacy

During its 98th session, from 23 April to 10 May 2019, the UN Committee on the Elimination of Racial Discrimination (CERD) initiated the drafting process of general recommendation n° 36 on preventing and combatting racial profiling.

16 Nov 2017

In 2017, US Immigration & Customs Enforcement (ICE) announced that it would seek to use artificial intelligence to automatically evaluate the probability of a prospective immigrant “becoming a positively contributing member of society.”

News & Analysis

This piece was originally posted on the UNRISD site under the title ‘How AI is affecting our human rights’. Artificial Intelligence (AI) is part of our daily lives. Its many applications inform almost all sectors of society.

News & Analysis

Tech competition is being used to push a dangerous corporate agenda. High-tech industries have become the new battlefield as the United States and China clash over tariffs and trade deficits.

Report

Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified.

23 May 2016

Computer programs that perform risk assessments of crime suspects are increasingly common in American courtrooms, and are used at every stage of the criminal justice system to determine who may be set free or granted parole, and the size of the bond they must pay.

03 May 2016

In 2012, London's Royal Free, Barnet, and Chase Farm hospitals agreed to provide Google's DeepMind subsidiary with access to an estimated 1.6 million NHS patient records, including full names and medical histories.

Press release

ARTICLE 19 and Privacy International’s report provides an overview of the impact of AI technologies on freedom of expression and privacy. It calls for further study and monitoring of how AI tools impact human rights.