Artificial Intelligence and its applications are part of everyday life: from social media newsfeeds to mediating traffic flow in cities, from autonomous cars to connected consumer devices such as smart assistants, and in spam filters, voice recognition systems, and search engines.

AI has the potential to revolutionise societies in many ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights, including the right to privacy.

AI-driven consumer products and autonomous systems are frequently equipped with sensors that generate and collect vast amounts of data without the knowledge or consent of those in their proximity. AI methods are being used to identify people who wish to remain anonymous; to infer and generate sensitive information about people from non-sensitive data; to profile people based upon population-scale data; and to make consequential decisions using this data, some of which profoundly affect people’s lives.

What is the problem?

The range and diversity of AI applications mean that the problems and risks are manifold. These include:

  • Re-identification and de-anonymisation: AI applications can be used to identify and thereby track individuals across different devices, in their homes, at work, and in public spaces. For example, while personal data is routinely (pseudo-)anonymised within datasets, AI can be employed to de-anonymise this data. Facial recognition is another means by which individuals can be tracked and identified, which has the potential to transform expectations of anonymity in public space.
  • Discrimination, unfairness, inaccuracies, bias: AI-driven identification, profiling, and automated decision-making may also lead to unfair, discriminatory, or biased outcomes. People can be misclassified, misidentified, or judged negatively, and such errors or biases may disproportionately affect certain groups of people.
  • Opacity and secrecy of profiling: Some applications of AI can be opaque to individuals, regulators, or even the designers of the system themselves, making it difficult to challenge or interrogate outcomes. While there are technical solutions that improve the interpretability and/or auditability of some systems for different stakeholders, a key challenge remains where this is not possible and the outcome has significant impacts on people’s lives.
  • Data exploitation: People are often unable to fully understand what kinds and how much data their devices, networks, and platforms generate, process, or share. As we bring smart and connected devices into our homes, workplaces, public spaces, and even bodies, the need to enforce limits on data exploitation becomes increasingly pressing. In this landscape, uses of AI for purposes like profiling, or to track and identify people across devices and even in public spaces, amplify this information asymmetry.

What is the solution?

The research, development, and use of AI must be subject to the minimum requirement of respecting, promoting, and protecting international human rights standards.

Different types of AI and different domains of application raise specific ethical, regulatory, and human rights issues. To ensure that individuals are protected from the risks posed by AI, and that potential collective and societal harms are addressed, existing laws must be reviewed and, if necessary, strengthened to address the effects of new and emerging threats to rights, including by establishing clear limits, safeguards, and oversight and accountability mechanisms.

What PI is doing

Many areas of PI’s work touch on AI applications in different contexts, including in advertising, welfare, and migration.

Across these areas and more, we work with policy makers, regulators, and other civil society organisations to ensure that adequate safeguards are in place, accompanied by oversight and accountability mechanisms.

Here are some examples:

  • We investigate the creeping use of facial recognition technology across the world; work with community groups, activists, and others to raise awareness about the technology and what they can do about it; and push national and international bodies to listen to people’s concerns and take steps to protect rights.
  • We have launched a legal challenge against the UK intelligence agency MI5 over its handling of vast troves of personal data in an opaque ‘technical environment’.
  • We scrutinise invasive and often hidden profiling practices, whether by data brokers, on mental health websites, or by law enforcement, and how these can be used to target people. We challenge these practices and provide recommendations to policy-makers.
  • We demand transparency on the use of AI applications, whether by companies such as Palantir in response to the Covid-19 crisis, by those developing digital identity ‘solutions’, by political parties in their digital campaigns, or by public authorities such as the UK National Health Service as part of its contract with Amazon.
News & Analysis

Following PI’s submissions before the UK Information Commissioner’s Office (ICO), as well as other European regulators, the ICO has announced its provisional intent to fine facial recognition company Clearview AI. But this is more than just a regulatory action.

News & Analysis

The World Health Organisation tasked Privacy International with reviewing its guidance on Ethics and Governance of Artificial Intelligence for Health. Here is our analysis of the final report.

Advocacy

Privacy International submitted its input to the forthcoming report by the UN High Commissioner for Human Rights (HCHR) on the right to privacy and artificial intelligence (AI).

In our submission we identify key concerns about AI applications and the right to privacy. In particular, we highlight concerns about facial recognition technologies and the use of AI for social media monitoring (SOCMINT). We document sectors where the use of AI applications has negatively affected the most vulnerable groups in society, such as the use of AI in welfare and in immigration and border control.

The briefing also argues for the adoption of adequate and effective laws accompanied by safeguards to ensure AI applications comply with human rights.

18 Jun 2020
France has been testing AI tools with security cameras supplied by the French technology company Datakalab in the Paris Metro system and buses in Cannes to detect the percentage of passengers who are wearing face masks. The system does not store or disseminate images and is intended to help
28 Jul 2020
A growing number of companies - for example, San Mateo start-up Camio and AI startup Actuate, which uses machine learning to identify objects and events in surveillance footage - are repositioning themselves as providers of AI software that can track workplace compliance with covid safety rules such
19 May 2020
After governments in many parts of the world began mandating wearing masks when out in public, researchers in China and the US published datasets of images of masked faces scraped from social media sites to use as training data for AI facial recognition models. Researchers from the startup
19 May 2020
Researchers are scraping social media posts for images of mask-covered faces to use to improve facial recognition algorithms. In April, researchers published to Github the COVID19 Mask Image Dataset, which contains more than 1,200 images taken from Instagram; in March, Wuhan researchers compiled the
21 Apr 2020
Many of the steps suggested in a draft programme for China-style mass surveillance in the US are being promoted and implemented as part of the government’s response to the pandemic, perhaps due to the overlap of membership between the National Security Commission on Artificial Intelligence, the body
Long Read

This article presents some of the tools and techniques deployed as part of surveillance practices and data-driven immigration policies that routinely lead to discriminatory treatment of people and undermine people’s dignity, with a particular focus on the UK.

Long Read

No Tech For Tyrants, a UK grassroots organisation, explains Palantir’s involvement with the UK government, including its partnership with the NHS. They explore the concerns that public-private partnerships between Palantir and governments raise, and what this means for our rights.

Long Read

Surveillance partnerships between Amazon Ring and law enforcement around the world create an interconnected surveillance network that poses a serious threat to our privacy and other freedoms.

02 Jun 2020
The AI firm Faculty, which worked on the Vote Leave campaign, was given a £400,000 UK government contract to analyse social media data, utility bills, and credit ratings, as well as government data, to help in the fight against the coronavirus. This is at least the ninth contract awarded to Faculty
News & Analysis

Amazon announced that it will be putting a one-year suspension on sales of its facial recognition tech to law enforcement. Here is why we think there is still a long way to go.

Long Read

Palantir, the US data giant which works with intelligence and immigration enforcement agencies, has responded to our questions about its work on a highly sensitive National Health Service (NHS) project, providing some assurances, passing the buck to the NHS, and raising additional questions.

Press release

Today Privacy International and four other UK privacy organisations have sent Palantir 10 questions about their work with the UK’s National Health Service (NHS) during the Covid-19 public health crisis.

17 Mar 2020
Russia has set up a coronavirus information centre to monitor social media for misinformation about the coronavirus and spot empty supermarket shelves using a combination of surveillance cameras and AI. The centre also has a database of contacts and places of work for 95% of those under mandatory