Content type: Advocacy
In the wake of Privacy International’s (PI) campaign against the unfettered use of Facial Recognition Technology in the UK, MPs gave inadequate responses to concerns raised by members of the public about the roll-out of this pernicious mass-surveillance technology in public spaces. Their responses also sidestepped calls on them to take action. The UK is sleepwalking towards the end of privacy in public. The spread of insidious Facial Recognition Technology (FRT) in public spaces across the country…
Content type: Long Read
The fourth edition of PI’s Guide to International Law and Surveillance brings together the most hard-hitting past and recent developments in international human rights law that reinforce the core human rights principles and standards on surveillance. We hope that it will continue helping researchers, activists, journalists, policymakers, and anyone else working on these issues. The new edition includes, among others, entries on (extra)territorial jurisdiction in surveillance, surveillance of public…
Content type: Advocacy
Privacy International (PI) welcomes the opportunity to provide input to the forthcoming report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance to the 56th session of the Human Rights Council, which will examine and analyse the relationship between artificial intelligence (AI) and non-discrimination and racial equality, as well as other international human rights standards. AI applications are becoming a part of everyday life:…
Content type: Press release
9 November 2023 - Privacy International (PI) has just published new research into UK Members of Parliament’s (startling lack of) knowledge of the use of Facial Recognition Technology (FRT) in public spaces, even within their own constituencies. Read the research published here in full: "MPs Asleep at the Wheel as Facial Recognition Technology Spells The End of Privacy in Public". PI has recently conducted a survey of 114 UK MPs through YouGov. Published this morning, the results are seriously…
Content type: Advocacy
We submitted a report to the Commission of Jurists on the Brazilian Artificial Intelligence Bill, focussed on highlighting the potential harms associated with the use of AI within schools and the additional safeguards and precautions that should be taken when implementing AI in educational technology. The use of AI in education technology and schools has the potential to interfere with the child’s right to education and the right to privacy, which are upheld by international human rights standards…
Content type: News & Analysis
What if we told you that every photo of you, your family, and your friends posted on your social media or even your blog could be copied and saved indefinitely in a database with billions of images of other people, by a company you've never heard of? And what if we told you that this mass surveillance database was pitched to law enforcement and private companies across the world?
This is more or less the business model and aspiration of Clearview AI, a company that only received worldwide…
Content type: Examples
France has been testing AI tools with security cameras supplied by the French technology company Datakalab in the Paris Metro system and buses in Cannes to detect the percentage of passengers who are wearing face masks. The system does not store or disseminate images and is intended to help authorities anticipate future outbreaks.
https://www.theguardian.com/world/2020/jun/18/coronavirus-mass-surveillance-could-be-here-to-stay-tracking
Writer: Oliver Holmes, Justin McCurry, and Michael Safi…
Content type: Examples
After governments in many parts of the world began mandating wearing masks when out in public, researchers in China and the US published datasets of images of masked faces scraped from social media sites to use as training data for AI facial recognition models. Researchers from the startup Workaround, who published the COVID19 Mask Image Dataset to GitHub in April 2020, claimed the images were not private because they were posted on Instagram and therefore permission from the posters was not…
Content type: Long Read
Over the last two decades we have seen an array of digital technologies being deployed in the context of border controls and immigration enforcement, with surveillance practices and data-driven immigration policies routinely leading to discriminatory treatment of people and undermining people’s dignity.
And yet this is happening with little public scrutiny, often in a regulatory or legal void, and without understanding of, or consideration for, the impact on migrant communities at the border and…
Content type: Long Read
In April 2018, Amazon acquired “Ring”, a smart security device company best known for its video doorbell, which allows Ring users to see, talk to, and record people who come to their doorsteps.
What started out as a company pitch on Shark Tank in 2013 led to the $839 million deal, which has been crucial for Amazon’s expansion of its concept of the 21st-century smart home. It’s not just about convenience anymore: interconnected sensors and algorithms promise protection and provide a feeling of…
Content type: News & Analysis
Yesterday, Amazon announced that they will be putting a one-year suspension on sales of its facial recognition software Rekognition to law enforcement. While Amazon’s move should be welcomed as a step towards sanctioning company opportunism at the expense of our fundamental freedoms, there is still a lot to be done.
The announcement speaks of just a one-year ban. What exactly is Amazon expecting to change within that one year? Is one year enough to make the technology not discriminate…
Content type: Press release
Photo by Ashkan Forouzani on Unsplash
Today Privacy International, Big Brother Watch, medConfidential, Foxglove, and Open Rights Group have sent Palantir 10 questions about their work with the UK’s National Health Service (NHS) during the Covid-19 public health crisis and have requested that the contract be disclosed.
On its website Palantir says that the company has a “culture of open and critical discussion around the implications of [their] technology” but the company have so far…
Content type: Advocacy
On November 1, 2019, we submitted evidence to an inquiry carried out by the Scottish Parliament into the use of Facial Recognition Technology (FRT) for policing purposes.
In our submissions, we noted that rapid advances in the field of artificial intelligence and machine learning, and the deployment by police of new technologies that seek to analyse, identify, profile and predict, have had and will continue to have a seismic impact on the way society is policed.
The implications come not…
Content type: Examples
Few people realise how many databases may include images of their face; these may be owned by data brokers, social media companies such as Facebook and Snapchat, and governments. The systems in use by Snap and the Chinese start-up Face++ don't save facial images, but map detailed points on faces and store that data instead. The FBI's latest system, as of 2017, gave it the ability to scan the images of millions of ordinary Americans collected from millions of mugshots and the driver's licence…
Content type: Examples
By 2017, facial recognition was developing quickly in China and was beginning to become embedded in payment and other systems. The Chinese startup Face++, valued at roughly $1 billion, supplies facial recognition software to Alipay, a mobile payment app used by more than 120 million people; the dominant Chinese ride-hailing service, Didi; and several other popular apps. The Chinese search engine Baidu is working with the government of popular tourist destination Wuzhen to enable visitors to…