Content type: News & Analysis
Yesterday, Amazon announced that it will put a one-year suspension on sales of its facial recognition software Rekognition to law enforcement. While Amazon's move should be welcomed as a step towards curbing company opportunism at the expense of our fundamental freedoms, there is still a lot to be done.
The announcement speaks of just a one-year ban. What exactly is Amazon expecting to change within that year? Is one year enough to make the technology non-discriminatory…
Content type: Long Read
On 12 April 2020, citing confidential documents, the Guardian reported Palantir would be involved in a Covid-19 data project which "includes large volumes of data pertaining to individuals, including protected health information, Covid-19 test results, the contents of people’s calls to the NHS health advice line 111 and clinical information about those in intensive care".
It cited a Whitehall source "alarmed at the 'unprecedented' amounts of confidential health information being swept up in the…
Content type: Press release
Today Privacy International, Big Brother Watch, medConfidential, Foxglove, and Open Rights Group have sent Palantir 10 questions about their work with the UK's National Health Service (NHS) during the Covid-19 public health crisis and have requested that the contract be disclosed.
On its website Palantir says that the company has a "culture of open and critical discussion around the implications of [their] technology", but the company has so far…
Content type: Examples
Russia has set up a coronavirus information centre to monitor social media for misinformation about the coronavirus and spot empty supermarket shelves using a combination of surveillance cameras and AI. The centre also has a database of contacts and places of work for 95% of those under mandatory quarantine after returning from countries where the virus is active. Sberbank, Russia's biggest bank, has agreed to pay for an app that will provide free telemedicine consultations.
Source:…
Content type: News & Analysis
In mid-2019, MI5 admitted, during a case brought by Liberty, that personal data was being held in "ungoverned spaces". Much about these "ungoverned spaces", and how they would effectively be "governed" in the future, remains unclear. At present, they are understood to be a "technical environment" in which the personal data of an unknown number of individuals was being "handled". The use of "technical environment" suggests something more than simply a compilation of a few datasets or databases.
The…
Content type: Advocacy
On November 1, 2019, we submitted evidence to an inquiry carried out by the Scottish Parliament into the use of Facial Recognition Technology (FRT) for policing purposes.
In our submissions, we noted that rapid advances in the field of artificial intelligence and machine learning, and the deployment by police of new technologies that seek to analyse, identify, profile, and predict, have had and will continue to have a seismic impact on the way society is policed.
The implications come not…
Content type: Examples
The US Department of Homeland Security awarded a $113 million contract to General Dynamics to carry out the Visa Lifecycle Vetting Initiative (VLVI), a renamed version of the Extreme Vetting Initiative and part of a larger effort called the National Vetting Enterprise. In May 2018, public outrage led the DHS to back away from a machine learning system that would monitor immigrants continuously; however, the reason it gave was that the technology to automate vetting did not yet exist. These…
Content type: Examples
VeriPol, a system developed at the UK's Cardiff University, analyses the wording of victim statements in order to help police identify fake reports. By January 2019, VeriPol was in use by Spanish police, who said it helped them identify 64 false reports in one week and was successful in more than 80% of cases. The basic claim is that AI can find patterns that are common to false statements; among the giveaways, experts say, is that false statements are likely to be shorter than genuine ones, focus…
Content type: Examples
In October 2018, the Singapore-based startup LenddoEFL was one of a group of microfinance startups aimed at the developing world that used non-traditional types of data such as behavioural traits and smartphone habits for credit scoring. Lenddo's algorithm uses numerous data points, including the number of words a person uses in email subject lines, the percentage of photos in a smartphone's library that were taken with a front-facing camera, and whether they regularly use financial apps on…
Content type: Examples
In November 2018, tests began of the €4.5 million iBorderCtrl project, which saw AI-powered lie detectors installed at airports in Hungary, Latvia, and Greece to question passengers travelling from outside the EU. The AI questioner was set to ask each passenger to confirm their name, age, and date of birth, and then query them about the purpose of their trip and who is paying for it. If the AI believes the person is lying, it is designed to change its tone of voice to become "more skeptical"…
Content type: Examples
In November 2018, worried American parents wishing to check out prospective babysitters and dissatisfied with criminal background checks began paying $24.99 for a scan from the online service Predictim, which claimed to use "advanced artificial intelligence" to offer an automated risk rating. Predictim based its scores in part on Facebook, Twitter, and Instagram posts - applicants were required to share broad access to their accounts - and offered no explanation of how it reached its risk…
Content type: Examples
In November 2018, researchers at Sweden's University of Lund, the US's Worcester Polytechnic Institute, and the UK's Oxford University announced that in August the US State Department had begun using a software program they had designed that uses AI to find the best match for a refugee's needs, including access to jobs, medical facilities, schools, and nearby migrants who speak the same language. Refugees matched by the program, known as "Annie MOORE", were finding jobs within 90 days about a…
Content type: Advocacy
During its 98th session, from 23 April to 10 May 2019, the UN Committee on the Elimination of Racial Discrimination (CERD) initiated the drafting process of General Recommendation No. 36 on preventing and combating racial profiling.
As part of this process, CERD invited stakeholders, including States, UN and regional human rights mechanisms, UN organisations or specialised agencies, National Human Rights Institutions, Non-Governmental Organisations (NGOs), research…
Content type: Examples
In 2017, US Immigration & Customs Enforcement (ICE) announced that it would seek to use artificial intelligence to automatically evaluate the probability of a prospective immigrant “becoming a positively contributing member of society.” In a letter to acting Department of Homeland Security Secretary Elaine Duke, a group of 54 concerned computer scientists, engineers, mathematicians, and researchers objected to ICE’s proposal and demanded that ICE abandon this approach because it would be…
Content type: News & Analysis
Profiling and Automated Decision Making: Is Artificial Intelligence Violating Your Right to Privacy?
The piece below was originally posted on the UNRISD site here.
How AI is affecting our human rights
Artificial Intelligence (AI) is part of our daily lives. Its many applications inform almost all sectors of society: from the way we interact on social media, to the way traffic flows are managed in cities; from access to credit and social services, to the functioning of the ever-expanding number of devices connected to the internet.
AI is affecting our human…
Content type: News & Analysis
This piece originally appeared here.
Tech competition is being used to push a dangerous corporate agenda.
High-tech industries have become the new battlefield as the United States and China clash over tariffs and trade deficits. It’s a new truism that the two countries are locked in a race for dominance in artificial intelligence and that data could drive the outcome.
In this purported race for technological high ground, the argument often goes, China…
Content type: Report
Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data.
AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological…
Content type: Examples
Computer programs that perform risk assessments of crime suspects are increasingly common in American courtrooms, and are used at every stage of the criminal justice system to determine who may be set free or granted parole and to set the size of the bond they must pay. By 2016, the results of these assessments were given to judges during criminal sentencing, and a sentencing reform bill was proposed in Congress to mandate the use of such assessments in federal prisons. In a study of the risk scores…
Content type: Examples
In 2012, London's Royal Free, Barnet, and Chase Farm hospitals agreed to provide Google's DeepMind subsidiary with access to an estimated 1.6 million NHS patient records, including full names and medical histories. The company claimed the information, which would remain encrypted so that employees could not identify individual patients, would be used to develop a system for flagging patients at risk of acute kidney injuries, a major reason why people need emergency care. Privacy campaigners…
Content type: Press release
Full report is available here
ARTICLE 19 and Privacy International’s report provides an overview of the impact of AI technologies on freedom of expression and privacy. It calls for further study and monitoring of how AI tools impact human rights.
Specifically, we call on states and companies to:
Ensure protection of international human rights standards. All AI tools must be subject to laws, regulations, and ethical codes which meet the threshold set by international standards.…
Content type: Examples
Few people realise how many databases may include images of their face; these may be owned by data brokers, social media companies such as Facebook and Snapchat, and governments. The systems in use by Snap and the Chinese start-up Face++ don't save facial images, but map detailed points on faces and store that data instead. The FBI's latest system, as of 2017, gave it the ability to scan the images of millions of ordinary Americans collected from millions of mugshots and the driver's licence…
Content type: Examples
By 2017, facial recognition was developing quickly in China and was beginning to become embedded in payment and other systems. The Chinese startup Face++, valued at roughly $1 billion, supplies facial recognition software to Alipay, a mobile payment app used by more than 120 million people; the dominant Chinese ride-hailing service, Didi; and several other popular apps. The Chinese search engine Baidu is working with the government of popular tourist destination Wuzhen to enable visitors to…
Content type: Advocacy
The feedback in this document was submitted as part of an open Request for Information (RFI) process regarding the document created by The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems ("The IEEE Global Initiative") titled, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems.
Content type: News & Analysis
Privacy International’s Data Exploitation Programme Lead was invited to give evidence to the Select Committee on Artificial Intelligence at the U.K. House of Lords. The session sought to establish how personal data should be owned, managed, valued, and used for the benefit of society.
Link to the announcement of the session: https://www.parliament.uk/business/committees/committees-a-z/lords-select/ai-committee/news-parliament-2017/data-evidence-session/
Full video recording of the session:…
Content type: Advocacy
Privacy International's response to the inquiry by the House of Lords Select Committee on Artificial Intelligence.
Content type: Long Read
Tech firms and governments are keen to use algorithms and AI everywhere. We urgently need to understand what algorithms, intelligence, and machine learning actually are so that we can disentangle the optimism from the hype. Doing so will also ensure that we come up with meaningful responses and, ultimately, protections and safeguards.
Many technologists emerge from university, college, or graduate courses with the impression that technology is neutral and believe that all systems they apply their…