Content type: Advocacy
In our submission, we argue that the EDPB's opinion must take a firm approach to prevent people's rights being undermined by AI. We focus on the following issues in particular: the fundamentally general nature of AI models creates problems for the legitimate interest test; the risks of an overly permissive approach to the legitimate interests test; web scraping as ‘invisible processing’ and the consequent need for transparency; innovative technology and people’s fundamental rights; the (in)…
Content type: Examples
Almost half of all job seekers are using AI tools such as ChatGPT and Gemini to help them write CVs and cover letters and complete assessments, flooding employers and recruiters with applications in an already-tight market. Managers have said they can spot giveaways that applicants used AI, such as US-style grammar and bland, impersonal language. The applicants whose AI-padded applications perform best are those who have paid for ChatGPT, a group that is overwhelmingly from higher socio-economic backgrounds, male, and…
Content type: Advocacy
In the wake of Privacy International’s (PI) campaign against the unfettered use of Facial Recognition Technology in the UK, MPs gave inadequate responses to concerns raised by members of the public about the roll-out of this pernicious mass-surveillance technology in public spaces. Their responses also sidestep calls on them to take action. The UK is sleepwalking towards the end of privacy in public. The spread of insidious Facial Recognition Technology (FRT) in public spaces across the country…
Content type: Examples
The UK's Department for Education intends to appoint a project team to test edtech against set criteria to choose the highest-quality and most useful products. Extra training will be offered to help teachers develop enhanced skills. Critics suggest it would be better to run a consultation first to work out what schools and teachers want.
Link to article
Publication: Schools Week
Writer: Lucas Cumiskey
Content type: Examples
The UK's new Labour government is giving AI models special access to the Department for Education's bank of resources in order to encourage technology companies to create better AI tools to reduce teachers' workloads. A competition for the best ideas will award an additional £1 million in development funds.
Link to article
Publication: Guardian
Writer: Richard Adams
Content type: Examples
The Utah State Board of Education has approved a $3 million contract with Utah-based AEGIX Global that will let K-12 schools in the state apply for funding for ZeroEyes' AI gun detection software for up to four cameras per school. The software will work with schools' existing camera systems and will notify police when a firearm detection is verified at the ZeroEyes control centre. The legislature will consider additional funding if the early implementation is successful. The…
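The verification step described here is the notable design choice: the model alone never triggers a police alert. Below is a minimal sketch of such a verify-before-alert pipeline; the function names, threshold, and control-centre interface are all hypothetical stand-ins, not ZeroEyes' actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    camera_id: str
    confidence: float  # model's confidence that a firearm is visible

def detect_firearm(frame, camera_id: str) -> Optional[Detection]:
    """Hypothetical stand-in for the vision model's inference step."""
    return Detection(camera_id=camera_id, confidence=0.93)  # placeholder

def human_verifies(detection: Detection) -> bool:
    """Hypothetical stand-in for review by a control-centre operator."""
    return True  # placeholder: an operator would inspect the footage

def notify_police(detection: Detection) -> None:
    print(f"Verified firearm alert from camera {detection.camera_id}")

CONFIDENCE_THRESHOLD = 0.8  # illustrative value only

def process_frame(frame, camera_id: str) -> None:
    detection = detect_firearm(frame, camera_id)
    if detection is None or detection.confidence < CONFIDENCE_THRESHOLD:
        return  # nothing plausible: no human review, no alert
    if human_verifies(detection):  # human-in-the-loop gate
        notify_police(detection)

process_frame(frame=b"...", camera_id="school-42-cam-1")
```

Gating alerts on human confirmation trades response speed for fewer false alarms, which is presumably why verification happens at a staffed control centre rather than on the camera.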
Content type: Long Read
The fourth edition of PI’s Guide to International Law and Surveillance collects the most hard-hitting past and recent developments in international human rights law that reinforce core human rights principles and standards on surveillance. We hope that it will continue helping researchers, activists, journalists, policymakers, and anyone else working on these issues. The new edition includes, among others, entries on (extra)territorial jurisdiction in surveillance, surveillance of public…
Content type: Explainer
Behind every machine is a human person who makes the cogs in that machine turn: there's the developer who builds (codes) the machine, the human evaluators who assess the basic machine's performance, even the people who build its physical parts. In the case of the large language models (LLMs) powering your AI systems, these 'human persons' are the invisible data labellers from all over the world who manually annotate the datasets that train the machine to recognise what is the colour…
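To make the labelling work concrete, here is a minimal sketch, with invented field names and values, of how several labellers' answers for one item are commonly aggregated into a single training label by majority vote.

```python
from collections import Counter

# Invented example records: three labellers independently tag one image.
annotations = [
    {"item_id": "img_001", "labeller": "worker_a", "label": "red"},
    {"item_id": "img_001", "labeller": "worker_b", "label": "red"},
    {"item_id": "img_001", "labeller": "worker_c", "label": "orange"},
]

def majority_label(records, item_id):
    """Aggregate labellers' answers for one item into a training label."""
    votes = Counter(r["label"] for r in records if r["item_id"] == item_id)
    label, count = votes.most_common(1)[0]
    agreement = count / sum(votes.values())  # crude agreement measure
    return label, agreement

print(majority_label(annotations, "img_001"))  # ('red', 0.666...)
```

Every record in such a dataset is a unit of human labour; the aggregation step is also where the individual labeller's judgement disappears from view.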
Content type: Explainer
Introduction
The emergence of large language models (LLMs) in late 2022 has changed people’s understanding of, and interaction with, artificial intelligence (AI). New tools and products that use, or claim to use, AI can be found for almost every purpose – they can write you a novel, pretend to be your girlfriend, help you brush your teeth, take down criminals or predict the future. But LLMs and other similar forms of generative AI create risks – not just big theoretical existential ones – but…
Content type: Advocacy
Generative AI models cannot rely on untested technology to uphold people's rights
The development of generative AI has been dependent on secretive scraping and processing of publicly available data, including personal data. However, AI companies have to date had an unacceptably poor approach towards transparency and have sought to rely on unproven ways to fulfil people's rights, such as the rights to access, rectify, and request deletion of their data. Our view is that the ICO should adopt a stronger…
Content type: News & Analysis
Is the AI hype fading? Consumer products with AI assistants are disappointing across the board, and tech CEOs are struggling to name use cases that justify spending billions on Graphics Processing Units (GPUs) and model training. Meanwhile, data protection concerns are still a far cry from being addressed.
Yet the believers remain. OpenAI's presentation of ChatGPT was reminiscent of the movie Her (with Scarlett Johansson's voice even being replicated a la the movie), Google…
Content type: Advocacy
Privacy International (PI) welcomes the opportunity to provide input to the forthcoming report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance to the 56th session of the Human Rights Council, which will examine and analyse the relationship between artificial intelligence (AI) and non-discrimination and racial equality, as well as other international human rights standards. AI applications are becoming a part of everyday life:…
Content type: Advocacy
AI-powered employment practices: PI's response to the ICO's draft recruitment and selection guidance
The volume of data collected and the methods used to automate recruitment with AI pose challenges for the privacy and data protection rights of candidates going through the recruitment process. Recruitment is a complex and multi-layered process, and so is the AI technology intended to service it at one or all of its stages. For instance, an AI-powered CV-screening tool using natural language processing (NLP) methods might collect keyword data on candidates, while an AI-powered video…
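As a rough illustration of the keyword-collection step mentioned above, here is a minimal sketch of a keyword-matching screener; the skill list and matching rule are invented for this example, and real NLP tooling is considerably more elaborate.

```python
import re

# Invented skill vocabulary; a real screener's list would be job-specific.
SKILL_TERMS = ["python", "sql", "project management", "negotiation"]

def extract_keywords(cv_text: str) -> dict:
    """Reduce a free-text CV to counts of matched skill terms."""
    text = cv_text.lower()
    counts = {}
    for term in SKILL_TERMS:
        # word-boundary match, so "sql" does not match inside "sqlite"
        hits = re.findall(rf"\b{re.escape(term)}\b", text)
        if hits:
            counts[term] = len(hits)
    return counts

cv = "Led a project management team; built Python and SQL data pipelines."
print(extract_keywords(cv))
# {'python': 1, 'sql': 1, 'project management': 1}
```

Even this toy version shows the data protection point: the candidate is reduced to whatever keyword data the tool's designers chose to collect.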
Content type: Advocacy
Why the EU AI Act fails migration
The EU AI Act seeks to provide a regulatory framework for the development and use of the most ‘risky’ AI within the European Union. The legislation outlines prohibitions on ‘unacceptable’ uses of AI, and sets out a framework of technical, oversight and accountability requirements for ‘high-risk’ AI when deployed or placed on the EU market.
Whilst the AI Act takes positive steps in other areas, the legislation is weak and even enables dangerous systems in the…
Content type: Advocacy
Generative AI models are based on indiscriminate and potentially harmful data scraping
Existing and emergent practices of web scraping for AI are rife with problems. We are not convinced they stand up to the scrutiny and standards expected by existing law. If the balance is struck wrongly here, then people stand to have their right to privacy further violated by new technologies. The approach taken by the ICO towards web scraping for generative AI models may therefore have important downstream…
Content type: Press release
9 November 2023 - Privacy International (PI) has just published new research into UK Members of Parliament’s (startling lack of) knowledge of the use of Facial Recognition Technology (FRT) in public spaces, even within their own constituencies. Read the research, published in full here: "MPs Asleep at the Wheel as Facial Recognition Technology Spells The End of Privacy in Public". PI recently conducted a survey of 114 UK MPs through YouGov. Published this morning, the results are seriously…
Content type: Advocacy
Our submission focussed on the evolving impacts of (i) automated decision-making, (ii) the digitisation of social protection programmes, (iii) sensitive data-processing and (iv) assistive technologies on the experiences and rights of people with disabilities. We called on the OHCHR to: examine the impact that growing digitisation and the use of new and emerging technologies across sectors has upon the rights of persons with disabilities; urge states to ensure that the deployment of digital…
Content type: Advocacy
We submitted a report to the Commission of Jurists on the Brazilian Artificial Intelligence Bill, focussed on highlighting the potential harms associated with the use of AI within schools and the additional safeguards and precautions that should be taken when implementing AI in educational technology. The use of AI in education technology and schools has the potential to interfere with the child’s right to education and the right to privacy, which are upheld by international human rights standards…
Content type: News & Analysis
What if we told you that every photo of you, your family, and your friends posted on your social media or even your blog could be copied and saved indefinitely in a database with billions of images of other people, by a company you've never heard of? And what if we told you that this mass surveillance database was pitched to law enforcement and private companies across the world?
This is more or less the business model and aspiration of Clearview AI, a company that only received worldwide…
Content type: News & Analysis
Last month, the World Health Organization published its guidance on Ethics and Governance of Artificial Intelligence for Health. Privacy International was one of the organisations tasked with reviewing the report. We want to start by acknowledging that this report is a very thorough one that does not shy away from addressing the risks and limitations of the use of AI in healthcare. As is often the case with guidance notes of this kind, its effectiveness will depend on the…
Content type: Examples
France has been testing AI tools with security cameras supplied by the French technology company Datakalab in the Paris Metro system and on buses in Cannes to detect the percentage of passengers who are wearing face masks. The system does not store or disseminate images and is intended to help authorities anticipate future outbreaks.
https://www.theguardian.com/world/2020/jun/18/coronavirus-mass-surveillance-could-be-here-to-stay-tracking
Writer: Oliver Holmes, Justin McCurry, and Michael Safi…
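The claim that the system "does not store or disseminate images" suggests an aggregate-only design. Below is a hedged sketch of that property, with a placeholder detector standing in for Datakalab's actual (unpublished, as far as we know) model; all names and values are invented.

```python
def count_faces_and_masks(frame):
    """Placeholder for the vision model: returns (faces_seen, masked_faces)."""
    return 10, 8  # invented values for illustration

def mask_wearing_rate(frames) -> float:
    """Keep only running counts; every frame is discarded after analysis."""
    total_faces = total_masked = 0
    for frame in frames:
        faces, masked = count_faces_and_masks(frame)
        total_faces += faces
        total_masked += masked
        # the frame itself is never written anywhere; only counts survive
    return total_masked / total_faces if total_faces else 0.0

print(f"{mask_wearing_rate([object()] * 5):.0%} of passengers masked")  # 80%
```

The privacy property rests entirely on the loop never persisting a frame; whether a deployed system actually honours that is an operational question, not a technical guarantee.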
Content type: Examples
A growing number of companies - for example, San Mateo start-up Camio and AI startup Actuate, which uses machine learning to identify objects and events in surveillance footage - are repositioning themselves as providers of AI software that can track workplace compliance with covid safety rules such as social distancing and wearing masks. Amazon developed its own social distancing tracking technology for internal use in its warehouses and other buildings, and is offering it as a free tool to…
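For the social-distancing case, the underlying computation can be as simple as pairwise distances between detected people. A minimal sketch follows, assuming the detector's pixel coordinates have already been mapped to floor positions in metres; the positions and threshold are invented.

```python
import math
from itertools import combinations

# Invented ground-plane positions (metres) for three detected people.
people = {"p1": (0.0, 0.0), "p2": (1.2, 0.5), "p3": (5.0, 4.0)}

def too_close(positions: dict, min_distance_m: float = 2.0) -> list:
    """Flag every pair of people standing closer than the threshold."""
    violations = []
    for (a, pos_a), (b, pos_b) in combinations(positions.items(), 2):
        if math.dist(pos_a, pos_b) < min_distance_m:
            violations.append((a, b))
    return violations

print(too_close(people))  # [('p1', 'p2')]
```

The hard part in practice is not this arithmetic but the camera-to-floor calibration and the person detector feeding it, which is where the surveillance vendors named above position their products.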
Content type: Examples
After governments in many parts of the world began mandating wearing masks when out in public, researchers in China and the US published datasets of images of masked faces scraped from social media sites to use as training data for AI facial recognition models. Researchers from the startup Workaround, who published the COVID19 Mask Image Dataset to Github in April 2020, claimed the images were not private because they were posted on Instagram and therefore permission from the posters was not…
Content type: Examples
Researchers are scraping social media posts for images of mask-covered faces to use to improve facial recognition algorithms. In April, researchers published to Github the COVID19 Mask Image Dataset, which contains more than 1,200 images taken from Instagram; in March, Wuhan researchers compiled the Real World Masked Face Dataset, a database of more than 5,000 photos of 525 people they found online. The researchers have justified the appropriation by saying images posted to Instagram are public…
Content type: Examples
Many of the steps suggested in a draft programme for China-style mass surveillance in the US are being promoted and implemented as part of the government’s response to the pandemic, perhaps due to the overlap of membership between the National Security Commission on Artificial Intelligence, the body that drafted the programme, and the advisory task forces charged with guiding the government’s plans to reopen the economy. The draft, obtained by EPIC in a FOIA request, is aimed at ensuring that…
Content type: Long Read
Over the last two decades we have seen an array of digital technologies being deployed in the context of border controls and immigration enforcement, with surveillance practices and data-driven immigration policies routinely leading to discriminatory treatment of people and undermining people’s dignity.
And yet this is happening with little public scrutiny, often in a regulatory or legal void, and without understanding of, or consideration for, the impact on migrant communities at the border and…
Content type: Long Read
What Do We Know?
Palantir & the NHS
What You Don’t Know About Palantir in the UK
Steps We’re Taking
The Way Forward
This article was written by No Tech For Tyrants - an organisation that works on severing links between higher education, violent tech & hostile immigration environments.
Content type: Long Read
In April 2018, Amazon acquired “Ring”, a smart security device company best known for its video doorbell, which allows Ring users to see, talk to, and record people who come to their doorsteps.
What started out as a company pitch on Shark Tank in 2013 led to the $839 million deal, which has been crucial to Amazon's expansion of its concept of the 21st-century smart home. It’s not just about convenience anymore: interconnected sensors and algorithms promise protection and provide a feeling of…
Content type: Examples
The AI firm Faculty, which worked on the Vote Leave campaign, was given a £400,000 UK government contract to analyse social media data, utility bills, and credit ratings, as well as government data, to help in the fight against the coronavirus. This is at least the ninth contract awarded to Faculty since 2018, for a total of at least £1.6 million. No other firm was asked to bid on the contract, as public bodies’ normal requirements for competitive procurement have been waived in the interests…