Artificial Intelligence and its applications are a part of everyday life: from social media newsfeeds to mediating traffic flow in cities, from autonomous cars to connected consumer devices like smart assistants, spam filters, voice recognition systems, and search engines.

AI has the potential to revolutionise societies in many ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights, including the right to privacy.

AI-driven consumer products and autonomous systems are frequently equipped with sensors that generate and collect vast amounts of data without the knowledge or consent of those in their proximity. AI methods are being used to identify people who wish to remain anonymous; to infer and generate sensitive information about people from non-sensitive data; to profile people based upon population-scale data; and to make consequential decisions using this data, some of which profoundly affect people’s lives.
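
The kind of inference described above can be built with ordinary machine-learning tooling. Below is a minimal, self-contained sketch of what inferring sensitive information from non-sensitive data can look like in practice; all data is synthetic, and the feature names (night-time app usage, pharmacy-adjacent purchases) are purely hypothetical.

```python
# A minimal sketch of sensitive-attribute inference: a model trained on
# innocuous behavioural signals learns to predict something sensitive.
# All data here is synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Non-sensitive features: e.g. hours of night-time app usage and number
# of pharmacy-adjacent purchases per month (purely illustrative).
night_usage = rng.normal(2.0, 1.0, n)
pharmacy_visits = rng.poisson(3.0, n).astype(float)

# Hypothetical sensitive attribute that happens to correlate with the
# non-sensitive signals above.
logits = 0.8 * night_usage + 0.5 * pharmacy_visits - 3.5
has_condition = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

X = np.column_stack([night_usage, pharmacy_visits])
model = LogisticRegression().fit(X, has_condition)

# The model now infers the sensitive attribute from non-sensitive inputs.
print("inference accuracy:", model.score(X, has_condition))
```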

What is the problem?

The range and diversity of AI applications means that the problems and risks are manifold. These include:

  • Re-identification and de-anonymisation: AI applications can be used to identify and thereby track individuals across different devices, in their homes, at work, and in public spaces. For example, while personal data is routinely (pseudo-)anonymised within datasets, AI can be employed to de-anonymise this data (a minimal sketch of such a linkage attack follows this list). Facial recognition is another means by which individuals can be tracked and identified, which has the potential to transform expectations of anonymity in public space.
  • Discrimination, unfairness, inaccuracies, bias: AI-driven identification, profiling, and automated decision-making may also lead to unfair, discriminatory, or biased outcomes. People can be misclassified, misidentified, or judged negatively, and such errors or biases may disproportionately affect certain groups of people.
  • Opacity and secrecy of profiling: Some applications of AI can be opaque to individuals, regulators, or even the designers of the system themselves, making it difficult to challenge or interrogate outcomes. While there are technical solutions to improve the interpretability and/or auditability of some systems for different stakeholders, a key challenge remains where this is not possible and the outcome has significant impacts on people’s lives.
  • Data exploitation: People are often unable to fully understand what kinds and how much data their devices, networks, and platforms generate, process, or share. As we bring smart and connected devices into our homes, workplaces, public spaces, and even bodies, the need to enforce limits on data exploitation becomes increasingly pressing. In this landscape, uses of AI for purposes like profiling, or to track and identify people across devices and even in public spaces, amplify this asymmetry.
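
To make the first bullet concrete, here is a minimal sketch of a classic linkage attack, in which “anonymised” records are re-identified by joining them with a public dataset on shared quasi-identifiers. All records, names, and columns are fabricated for illustration; real attacks, including AI-assisted ones, work on far larger and messier data.

```python
# Minimal linkage-attack sketch; all data below is fabricated.
import pandas as pd

# "Anonymised" records: direct identifiers removed, quasi-identifiers kept.
pseudonymised = pd.DataFrame({
    "record_id": ["r1", "r2", "r3"],
    "postcode": ["N1 9XX", "SE1 2YY", "N1 9XX"],
    "birth_date": ["1980-03-14", "1975-07-02", "1991-11-30"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "depression"],
})

# A public dataset (e.g. an electoral roll) containing names alongside
# the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["Alice Smith", "Bob Jones"],
    "postcode": ["N1 9XX", "SE1 2YY"],
    "birth_date": ["1980-03-14", "1975-07-02"],
    "sex": ["F", "M"],
})

# Joining on quasi-identifiers alone re-attaches names to "anonymous" records.
reidentified = pseudonymised.merge(public, on=["postcode", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Even this naive exact join re-attaches names to records; machine-learning approaches extend the same idea to noisy or partial matches across much larger datasets.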

What is the solution?

The research, development, and use of AI must be subject to the minimum requirement of respecting, promoting, and protecting international human rights standards.

Different types of AI and different domains of application raise specific ethical, regulatory, and human rights issues. To ensure that existing laws protect individuals from the risks posed by AI and address its potential collective and societal harms, they must be reviewed and, if necessary, strengthened to address the effects of new and emerging threats to rights, including by establishing clear limits, safeguards, and oversight and accountability mechanisms.

What PI is doing

Many areas of PI’s work touch on AI applications in different contexts, including advertising, welfare, and migration.

Across these areas and more, we are working with policy-makers, regulators, and other civil society organisations to ensure that there are adequate safeguards, accompanied by oversight and accountability mechanisms.

Here are some examples:

  • We investigate the creeping use of facial recognition technology across the world; work with community groups, activists, and others to raise awareness about the technology and what they can do about it; and push national and international bodies to listen to people’s concerns and take steps to protect rights.
  • We have launched a legal challenge against the UK intelligence agency MI5 over its handling of vast troves of personal data in an opaque ‘technical environment’.
  • We scrutinise invasive and often hidden profiling practices, whether by data brokers, on mental health websites, or by law enforcement, and how these can be used to target people. We challenge these practices as well as providing recommendations to policy-makers.
  • We demand transparency on the use of AI applications, whether by companies such as Palantir in response to the Covid-19 crisis, by those developing digital identity ‘solutions’, by political parties in their digital campaigns, or by public authorities such as the UK National Health Service as part of its contract with Amazon.

Advocacy

The European Data Protection Board (EDPB) is preparing an opinion on AI models, following a request from the Irish Data Protection Commission. This opinion is expected to cover how personal data is processed at various stages of the training and operation of an AI model and what legal basis can be relied on for that processing. 

PI submitted its views to the Board ahead of the release of its opinion.

13 Aug 2024
Almost half of all job seekers are using AI tools such as ChatGPT and Gemini to help them write CVs and cover letters and complete assessments, flooding employers and recruiters with applications in an already-tight market. Managers have said they can spot giveaways that applicants used AI, such as…

Advocacy

In this piece, we unpack the responses received from UK Members of Parliament between November 2023 and June 2024 following the initial launch of our campaign “The End of Privacy in Public”, and discuss the current state of regulation of facial recognition technology (FRT) in the UK. In doing so, we reiterate our call for FRT’s use to be effectively regulated.

22 May 2024
The UK's Department for Education intends to appoint a project team to test edtech against set criteria to choose the highest-quality and most useful products. Extra training will be offered to help teachers develop enhanced skills. Critics suggest it would be better to run a consultation first to…

28 Aug 2024
The UK's new Labour government is giving AI models special access to the Department for Education's bank of resources in order to encourage technology companies to create better AI tools to reduce teachers' workloads. A competition for the best ideas will award an additional £1 million in…

02 Mar 2024
The Utah State Board of Education has approved a $3 million contract with Utah-based AEGIX Global that will let K-12 schools in the state apply for funding for AI gun detection software from ZeroEyes for up to four cameras per school. The software will work with the schools' existing camera systems.
Long Read

A glimpse into what you can find in the new version of PI’s Guide to International Law and Surveillance. From surveillance of public spaces to spyware and encryption, it’s got everything!

Explainer

Who are the workers behind the training datasets powering the biggest LLMs on the market? In this explainer, we delve into data labelling as part of the AI supply chain, the labourers behind it, and how this exploitative labour ecosystem functions, aided by algorithms and larger systemic governance issues that exploit microworkers in the gig economy.

Explainer

This explainer takes a look at the main ways in which large language models (LLMs) threaten your privacy and data protection rights.

Advocacy

PI responded to the ICO consultation on engineering individual rights into generative AI models such as LLMs. Our overall assessment is that the major generative AI models are unable to uphold individuals’ rights under the UK GDPR. New technologies designed in a way that cannot uphold people’s rights cannot be permitted just for the sake of innovation.

News & Analysis

A year and a half after OpenAI first released ChatGPT, generative AI still makes the headlines and gathers attention and capital. But what is the future of this technology in an online economy dominated by surveillance capitalism? And how can we expect it to impact our online lives and privacy?

Advocacy

Privacy International submitted its input to the UN Special Rapporteur on racism for their upcoming report which will examine and analyse the relationship between artificial intelligence (AI) and non-discrimination and racial equality, as well as other international human rights standards.

Advocacy

One of the sectors integrating AI-powered tools into its day-to-day operations is the employment and recruitment sector. PI has responded to the ICO's recent consultation on its draft guidance for employers and recruiters on deploying AI in recruitment. Our response focuses on the processor/controller designation of recruiters and of the third-party LLMs they outsource to, and on candidates' employment rights that may be undermined by algorithmic decision-making (ADM).

Advocacy

The final text of the EU AI Act, adopted by the European Parliament on 13 March 2024, fails to prevent tech-enabled harm to migrants or to provide protection for people on the move.

Advocacy

PI responded to the ICO consultation on the legality of web scraping by AI developers when producing generative AI models such as LLMs. Developers are known to scrape enormous amounts of data from the web in order to train their models on different types of human-generated content. But data collection by AI web-scrapers can be indiscriminate and the outputs of generative AI models can be unpredictable and potentially harmful.
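
To illustrate one of the mechanics at issue: sites can publish opt-out signals such as robots.txt, but honouring them is voluntary for a scraper. Below is a minimal sketch using only Python's standard library; the URL is a placeholder, and “GPTBot” is used simply as an example of a crawler user-agent string associated with AI training.

```python
# A minimal sketch of a crawler checking a site's robots.txt before
# fetching a page. The URL is a placeholder; "GPTBot" is an example
# user-agent string associated with AI-training crawlers.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.org/robots.txt")
rp.read()

# A crawler identifying itself as an AI-training bot checks whether the
# site owner has opted out of this kind of collection.
if rp.can_fetch("GPTBot", "https://example.org/blog/post-1"):
    print("robots.txt permits fetching this page for this user agent")
else:
    print("site owner has signalled this page should not be scraped")
```

Nothing in this mechanism compels compliance, which is part of why the legal basis for scraping, rather than voluntary etiquette alone, matters.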

Press release

Privacy International (PI) has just published new research into UK Members of Parliament’s (startling lack of) knowledge on the use of Facial Recognition Technology (FRT) in public spaces, even within their own constituencies.