Examples of Abuse

Almost every day, a company or government abuses your data. Whether these abuses are intentional or the result of error, we must learn from them so that we can build better policies and technologies tomorrow. This resource is an opportunity to learn that all of this has happened before, as well as a tool for querying these abuses.

Please contact us if you think we are missing some key stories.

11 Jun 2020
Nearly six months after the emergence of the coronavirus, only 7.1% of research on COVID-19 references AI, compared with 12% of research on other topics. AI is being used to make predictive analyses of patient data, especially medical scans, to analyse social media data, and to predict the spread of the virus…
08 May 2020
As part of preparations to ease the lockdown, French authorities have added AI tools to the CCTV cameras in the Paris Metro to count how many passengers are wearing face masks. The system is also being used in outdoor markets and on buses in Cannes. Although it is mandatory to wear a mask…
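The system as described is an aggregate counter rather than an identification tool: per camera frame, it tallies how many detected faces are wearing masks. Below is a minimal sketch of that counting logic only; detect_faces and has_mask are invented stand-ins for the actual (non-public) vision models.

    # Hypothetical sketch of per-frame mask-rate counting.
    # detect_faces() and has_mask() are invented stand-ins for real
    # face-detection and mask-classification models, which are not public.
    import random

    def detect_faces(frame):
        # Stand-in: pretend ten faces were found in the frame.
        return list(range(10))

    def has_mask(face):
        # Stand-in: a coin flip instead of a trained classifier.
        return random.random() < 0.8

    def mask_rate(frame):
        faces = detect_faces(frame)
        if not faces:
            return None
        return sum(has_mask(f) for f in faces) / len(faces)

    print(f"Estimated share of mask wearers: {mask_rate(frame=None):.0%}")

Reporting only the aggregate rate, as here, is what would distinguish a statistics tool from individual tracking; the sketch makes no claim about what the deployed system actually stores.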
15 Jun 2018
In June 2018, a panel set up to examine the partnerships between Alphabet's DeepMind and the UK's NHS expressed concern that the revenue-less AI subsidiary would eventually have to prove its value to its parent. Panel chair Julian Huppert said DeepMind should commit to a business model, either non-profit…
05 Nov 2018
Shortly before the November 2018 US midterm elections, the Center for Media and Democracy uncovered documents showing that the multi-billionaire Koch brothers had developed detailed personality profiles of 89 percent of the US population, with the goal of using them to launch a private propaganda…
21 Sep 2018
In 2018, a report from the Royal United Services Institute found that UK police were testing automated facial recognition, crime location prediction, and decision-making systems while offering little transparency about how they were evaluated. An automated facial recognition system trialled by the South Wales Police…
30 Apr 2018
In 2018, industry insiders revealed that the gambling industry was increasingly turning to data analytics and AI to personalise its services and to predict and manipulate consumer responses in order to keep gamblers hooked. Based on profiles assembled by examining every click, page view, and…
14 May 2018
Three months after the 2018 discovery that Google was working on Project Maven, a military pilot program intended to speed up analysis of drone footage by automating classification of images of people and objects, dozens of Google employees resigned in protest. Among their complaints: Google…
10 Sep 2018
In September 2018, AI Now co-founder Meredith Whittaker sounded the alarm about the potential for abuse as neuroscience, human enhancement, and AI converge in brain-computer interfaces. Part of Whittaker's concern was that the only companies with the computational power…
15 Oct 2018
In March 2018, the Palo Alto startup Mindstrong Health, founded by three doctors, began clinical tests of an app that uses patients' interactions with their smartphones to monitor their mental state. The app, which is being tested on people with serious illness, measures the way patients swipe, tap…
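To make concrete the kind of passive measurement described here, the sketch below derives simple timing statistics from screen-tap timestamps. It is a minimal illustration only: the tap_features function and its feature names are invented, and Mindstrong's actual measures are proprietary.

    # Hypothetical sketch of "digital phenotyping": summary statistics
    # over the intervals between successive screen taps.
    from statistics import mean, stdev

    def tap_features(tap_times_s):
        """Summarise inter-tap intervals; expects at least two timestamps (seconds)."""
        intervals = [b - a for a, b in zip(tap_times_s, tap_times_s[1:])]
        return {
            "mean_interval_s": mean(intervals),
            "interval_sd_s": stdev(intervals) if len(intervals) > 1 else 0.0,
            "tap_count": len(tap_times_s),
        }

    # Example: five taps over two seconds.
    print(tap_features([0.0, 0.4, 0.9, 1.1, 2.0]))

A deployed system would collect features like these continuously and compare them against a per-patient baseline, rather than interpret any single reading in isolation.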
26 Jul 2018
In 2018, the chair of the London Assembly's police and crime committee called on London's mayor to cut the budget of the Mayor's Office for Policing and Crime, which provides oversight, in order to pay for AI systems. The intention was that the efficiencies of adopting AI would free up officers' time…
21 Sep 2018
In 2017, the head of China’s security and intelligence systems, Meng Jianzhu, called on security forces to break down barriers to data sharing in order to use AI and cloud computing to find patterns that could predict and prevent terrorist attacks. Meng also called for increased integration of the…
27 Sep 2018
In 2014, Canada began experimenting with automated decision-making algorithms in its immigration systems to support the evaluation of some of the country's immigrant and visitor applications. In a 2018 study, Citizen Lab and NewsDeeply found that the use of AI was expanding despite concerns about bias.
17 May 2018
In May 2018, US Immigration and Customs Enforcement abandoned the development of machine learning software intended to mine Facebook, Twitter, and the open Internet to identify terrorists. The software, announced in the summer of 2017, had been a key element of President Donald Trump's "extreme vetting" initiative…
15 May 2018
In 2011, the US Department of Homeland Security funded research into a virtual border agent kiosk called AVATAR, for Automated Virtual Agent for Truth Assessments in Real-Time, and tested it at the US-Mexico border on low-risk travellers who volunteered to participate. In the following years, the…
31 Oct 2018
In 2018, the EU announced iBorderCtrl, a six-month pilot led by the Hungarian National Police to install an automated lie detection test at four border crossing points in Hungary, Latvia, and Greece. The system uses an animated AI border agent that records travellers' faces while asking questions…
18 May 2018
In May 2018, Google announced Duplex, an AI system that carries out tasks such as scheduling appointments over the phone using natural language. A Duplex user wanting to make a restaurant booking, for example, could hand the task off to Duplex, which would make the phone call and negotiate times and numbers.
28 Mar 2018
In March 2018, Facebook announced it was scrapping plans to show off new home products at its developer conference in May, in part because revelations about the use of internal advertising tools by Cambridge Analytica had angered the public. The new products were expected to include connected…
Designed for use by border agencies, Unisys' LineSight software uses advanced data analytics and machine learning to help border guards decide whether to inspect travellers more closely before admitting them into the country. Unisys says the software assesses each traveller's risk beginning with the…
In 2012, Durham Constabulary, in partnership with computer science academics at Cambridge University, began developing the Harm Assessment Risk Tool (HART), an artificial intelligence system designed to predict whether suspects are at low, moderate, or high risk of committing further crimes in the next two years.
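HART has been publicly described as a random forest classifier that sorts individuals into three risk bands. The sketch below shows that general approach using scikit-learn; the features, data, and output here are all invented for illustration and bear no relation to the real model's inputs or training data.

    # Minimal sketch of a three-band risk classifier of the general kind
    # HART is reported to be (a random forest). All data is fabricated.
    from sklearn.ensemble import RandomForestClassifier

    # Invented features: [age, prior arrests, years since last offence]
    X_train = [
        [19, 4, 0.5],
        [45, 0, 10.0],
        [31, 2, 1.0],
        [52, 1, 8.0],
    ]
    y_train = ["high", "low", "moderate", "low"]  # risk bands

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    print(model.predict([[27, 3, 0.8]]))  # e.g. ['high']

Even this toy makes the oversight problem visible: such a model can only reproduce whatever patterns, biased or otherwise, are present in its training data.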
27 Feb 2018
Under a secret deal beginning in 2012, the data mining company Palantir provided software to a New Orleans Police Department programme that used a variety of data, such as ties to gang members, criminal histories, and social media, to predict the likelihood that individuals would commit acts of violence…