Content type: Advocacy
In our submission, we argue that the EDPB's opinion must take a firm approach to prevent people's rights from being undermined by AI. We focus on the following issues in particular: the fundamentally general nature of AI models creates problems for the legitimate interest test; the risks of an overly permissive approach to the legitimate interests test; web scraping as ‘invisible processing’ and the consequent need for transparency; innovative technology and people’s fundamental rights; the (in)…
Content type: Examples
Almost half of all job seekers are using AI tools such as ChatGPT and Gemini to help them write CVs and cover letters and complete assessments, flooding employers and recruiters with applications in an already tight market. Managers say they can spot giveaways that applicants used AI, such as US-style grammar and bland, impersonal language. The AI-padded applications that perform best come from those who have paid for ChatGPT: overwhelmingly people from higher socio-economic backgrounds, male, and…
Content type: Advocacy
In the wake of Privacy International’s (PI) campaign against the unfettered use of Facial Recognition Technology in the UK, MPs gave inadequate responses to concerns raised by members of the public about the roll-out of this pernicious mass-surveillance technology in public spaces. Their responses also sidestepped calls on them to take action. The UK is sleepwalking towards the end of privacy in public. The spread of insidious Facial Recognition Technology (FRT) in public spaces across the country…
Content type: Examples
The UK's Department for Education intends to appoint a project team to test edtech against set criteria to choose the highest-quality and most useful products. Extra training will be offered to help teachers develop enhanced skills. Critics suggest it would be better to run a consultation first to work out what schools and teachers want.
Link to article
Publication: Schools Week
Writer: Lucas Cumiskey
Content type: Examples
The UK's new Labour government is giving AI models special access to the Department for Education's bank of resources in order to encourage technology companies to create better AI tools to reduce teachers' workloads. A competition for the best ideas will award an additional £1 million in development funds.
Link to article
Publication: Guardian
Writer: Richard Adams
Content type: Examples
The Utah State Board of Education has approved a $3 million contract with Utah-based AEGIX Global that will let K-12 schools in the state apply for funding for ZeroEyes AI gun detection software on up to four cameras per school. The software works with schools' existing camera systems and notifies police once a firearm detection is verified at the ZeroEyes control centre. The legislature will consider additional funding if the early implementation is successful. The…
Content type: Long Read
The fourth edition of PI’s Guide to International Law and Surveillance collects the most hard-hitting past and recent developments in international human rights law that reinforce the core human rights principles and standards on surveillance. We hope that it will continue helping researchers, activists, journalists, policymakers, and anyone else working on these issues. The new edition includes, among others, entries on (extra)territorial jurisdiction in surveillance, surveillance of public…
Content type: Explainer
Behind every machine is a human who makes the cogs in that machine turn: there's the developer who builds (codes) the machine, the human evaluators who assess the basic machine's performance, even the people who build the physical parts for the machine. In the case of the large language models (LLMs) powering your AI systems, these humans are the invisible data labellers from all over the world who manually annotate the datasets that train the machine to recognise what is the colour…
Content type: Explainer
Introduction
The emergence of large language models (LLMs) in late 2022 has changed people’s understanding of, and interaction with, artificial intelligence (AI). New tools and products that use, or claim to use, AI can be found for almost every purpose – they can write you a novel, pretend to be your girlfriend, help you brush your teeth, take down criminals or predict the future. But LLMs and other similar forms of generative AI create risks – not just big theoretical existential ones – but…
Content type: Advocacy
Generative AI models cannot rely on untested technology to uphold people's rights
The development of generative AI has been dependent on the secretive scraping and processing of publicly available data, including personal data. However, AI companies have to date taken an unacceptably poor approach to transparency and have sought to rely on unproven ways to fulfil people's rights, such as the rights to access, rectify, and request deletion of their data. Our view is that the ICO should adopt a stronger…
Content type: News & Analysis
Is the AI hype fading? Consumer products with AI assistants are disappointing across the board, and Tech CEOs are struggling to give examples of use cases that justify spending billions on Graphics Processing Units (GPUs) and model training. Meanwhile, data protection concerns are still far from being addressed.
Yet the believers remain. OpenAI's presentation of ChatGPT was reminiscent of the movie Her (with Scarlett Johansson's voice even being replicated à la the movie), Google…
Content type: Advocacy
Privacy International (PI) welcomes the opportunity to provide input to the forthcoming report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance to the 56th session of the Human Rights Council, which will examine and analyse the relationship between artificial intelligence (AI) and non-discrimination and racial equality, as well as other international human rights standards. AI applications are becoming a part of everyday life:…
Content type: Advocacy
AI-powered employment practices: PI's response to the ICO's draft recruitment and selection guidance
The volume of data collected and the methods used to automate recruitment with AI pose challenges for the privacy and data protection rights of candidates going through the recruitment process. Recruitment is a complex and multi-layered process, and so is the AI technology intended to serve that process at one or all of its stages. For instance, an AI-powered CV-screening tool using natural language processing (NLP) methods might collect keyword data on candidates, while an AI-powered video…
Content type: Advocacy
Why the EU AI Act fails migration
The EU AI Act seeks to provide a regulatory framework for the development and use of the most ‘risky’ AI within the European Union. The legislation outlines prohibitions for ‘unacceptable’ uses of AI, and sets out a framework of technical, oversight and accountability requirements for ‘high-risk’ AI when deployed or placed on the EU market.
Whilst the AI Act takes positive steps in other areas, the legislation is weak and even enables dangerous systems in the…
Content type: Advocacy
Generative AI models are based on indiscriminate and potentially harmful data scraping
Existing and emergent practices of web scraping for AI are rife with problems. We are not convinced they stand up to the scrutiny and standards expected by existing law. If the balance is struck wrongly here, then people stand to have their right to privacy further violated by new technologies. The approach taken by the ICO towards web scraping for generative AI models may therefore have important downstream…