Content type: Advocacy
In our submission, we argue that the EDPB's opinion must take a firm approach to prevent people's rights being undermined by AI. We focus on the following issues in particular: the fundamentally general nature of AI models creates problems for the legitimate interest test; the risks of an overly permissive approach to the legitimate interests test; web scraping as ‘invisible processing’ and the consequent need for transparency; innovative technology and people’s fundamental rights; the (in)…
Content type: Advocacy
In the wake of Privacy International’s (PI) campaign against the unfettered use of Facial Recognition Technology in the UK, MPs gave inadequate responses to concerns raised by members of the public about the roll-out of this pernicious mass-surveillance technology in public spaces. Their responses also sidestepped calls for them to take action. The UK is sleepwalking towards the end of privacy in public. The spread of insidious Facial Recognition Technology (FRT) in public spaces across the country…
Content type: Advocacy
Generative AI models cannot rely on untested technology to uphold people's rights
The development of generative AI has depended on secretive scraping and processing of publicly available data, including personal data. However, AI companies have to date had an unacceptably poor approach towards transparency and have sought to rely on unproven ways to fulfil people's rights, such as to access, rectify, and request deletion of their data. Our view is that the ICO should adopt a stronger…
Content type: Advocacy
Privacy International (PI) welcomes the opportunity to provide input to the forthcoming report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance to the 56th session of the Human Rights Council, which will examine and analyse the relationship between artificial intelligence (AI) and non-discrimination and racial equality, as well as other international human rights standards. AI applications are becoming a part of everyday life:…
Content type: Advocacy
AI-powered employment practices: PI's response to the ICO's draft recruitment and selection guidance
The volume of data collected and the methods used to automate recruitment with AI pose challenges for the privacy and data protection rights of candidates going through the recruitment process. Recruitment is a complex and multi-layered process, and so is the AI technology intended to serve this process at one or all of its stages. For instance, an AI-powered CV-screening tool using natural language processing (NLP) methods might collect keyword data on candidates, while an AI-powered video…
Content type: Advocacy
Why the EU AI Act fails migration
The EU AI Act seeks to provide a regulatory framework for the development and use of the most ‘risky’ AI within the European Union. The legislation outlines prohibitions for ‘unacceptable’ uses of AI, and sets out a framework of technical, oversight and accountability requirements for ‘high-risk’ AI when deployed or placed on the EU market.
Whilst the AI Act takes positive steps in other areas, the legislation is weak and even enables dangerous systems in the…
Content type: Advocacy
Generative AI models are based on indiscriminate and potentially harmful data scraping
Existing and emergent practices of web scraping for AI are rife with problems. We are not convinced they stand up to the scrutiny and standards expected by existing law. If the balance is struck wrongly here, then people stand to have their right to privacy further violated by new technologies. The approach taken by the ICO towards web scraping for generative AI models may therefore have important downstream…
Content type: Advocacy
Our submission focussed on the evolving impacts of (i) automated decision-making, (ii) the digitisation of social protection programmes, (iii) sensitive data-processing and (iv) assistive technologies on the experiences and rights of people with disabilities. We called on the OHCHR to: examine the impact that growing digitisation and the use of new and emerging technologies across sectors has upon the rights of persons with disabilities; urge states to ensure that the deployment of digital…
Content type: Advocacy
We submitted a report to the Commission of Jurists on the Brazilian Artificial Intelligence Bill, focussed on highlighting the potential harms associated with the use of AI within schools and the additional safeguards and precautions that should be taken when implementing AI in educational technology. The use of AI in education technology and schools has the potential to interfere with the child’s right to education and the right to privacy, which are upheld by international human rights standards…
Content type: Long Read
On 12 April 2020, citing confidential documents, the Guardian reported Palantir would be involved in a Covid-19 data project which "includes large volumes of data pertaining to individuals, including protected health information, Covid-19 test results, the contents of people’s calls to the NHS health advice line 111 and clinical information about those in intensive care".
It cited a Whitehall source "alarmed at the 'unprecedented' amounts of confidential health information being swept up in the…
Content type: Advocacy
On November 1, 2019, we submitted evidence to an inquiry carried out by the Scottish Parliament into the use of Facial Recognition Technology (FRT) for policing purposes.
In our submissions, we noted that the rapid advances in the field of artificial intelligence and machine learning, and the deployment by police of new technologies that seek to analyse, identify, profile and predict, have had and will continue to have a seismic impact on the way society is policed.
The implications come not…
Content type: Advocacy
During its 98th session, from 23 April to 10 May 2019, the UN Committee on the Elimination of Racial Discrimination (CERD) initiated the drafting process of general recommendation No. 36 on preventing and combating racial profiling.
As part of this process, CERD invited stakeholders, including States, UN and regional human rights mechanisms, UN organisations or specialised agencies, National Human Rights Institutions, Non-Governmental Organisations (NGOs), research…
Content type: Advocacy
The feedback in this document was submitted as part of an open Request for Information (RFI) process regarding the document created by The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems ("The IEEE Global Initiative"), titled Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems.
Content type: Advocacy
Privacy International's response to the inquiry by the House of Lords Select Committee on Artificial Intelligence.