Examples of Abuse

Almost every day, a company or government abuses your data. Whether these abuses are intentional or the result of error, we must learn from them so that we can build better policies and technologies tomorrow. This resource is an opportunity to learn that this has all happened before, as well as a tool for querying these abuses.

Please contact us if you think we are missing some key stories.


04 Oct 2017
In 2017, after protests from children's health and privacy advocates, Mattel cancelled its planned child-focused "Aristotle" smart hub. Aristotle was designed to adapt to and learn about the child as they grew while controlling devices from night lights to homework aids. However, Aristotle was only…
17 Oct 2017
A mistake in Facebook's machine translation service led to the arrest and questioning of a Palestinian man by Israeli police. The man, a construction worker in the West Bank, posted a picture of himself leaning against a bulldozer like those that have been used in hit-and-run terrorist attacks, with…
02 Jan 2018
In February 2018 the Canadian government announced a three-month pilot partnership with the artificial intelligence company Advanced Symbolics to monitor social media posts with a view to predicting rises in regional suicide risk. Advanced Symbolics will look for trends by analysing posts from 160…
02 Jan 2018
EU antitrust regulators are studying how companies gather and use big data with a view to understanding how access to data may close off the market to smaller, newer competitors. Among the companies being scrutinised are the obvious technology companies, such as Google and Facebook, and less obvious…
11 Jan 2018
In 2017, a study claimed to have shown that artificial intelligence can infer sexual orientation from facial images, reviving the kinds of claims made in the 19th century about inferring character from outer appearance. Despite widespread complaints and criticisms, the study, by Michal Kosinski and…
20 Feb 2018
In 2018, pending agreement from its Institutional Review Board, the University of St Thomas in Minnesota will trial sentiment analysis software in the classroom. The software relies on analysing the expressions on students' faces captured by a high-resolution webcam…
27 Feb 2018
Under a secret deal beginning in 2012, the data mining company Palantir provided software to a New Orleans Police Department programme that used a variety of data such as ties to gang members, criminal histories, and social media to predict the likelihood that individuals would commit acts of…
28 Mar 2018
In March 2018, Facebook announced it was scrapping plans to show off new home products at its developer conference in May, in part because revelations about Cambridge Analytica's use of its internal advertising tools had angered the public. The new products were expected to include connected…
03 Apr 2018
In 2016, researchers discovered that the personalisation built into online advertising platforms such as Facebook makes it easy to invisibly bypass anti-discrimination laws regarding housing and employment. Under the US Fair Housing Act, it would be illegal for ads to explicitly state a…
30 Apr 2018
In 2018 industry insiders revealed that the gambling industry was increasingly turning to data analytics and AI to personalise its services and to predict and manipulate consumer response in order to keep gamblers hooked. Based on profiles assembled by examining every click, page view, and…
14 May 2018
Three months after the 2018 discovery that Google was working on Project Maven, a military pilot program intended to speed up analysis of drone footage by automating classification of images of people and objects, dozens of Google employees resigned in protest. Among their complaints: Google…
15 May 2018
In 2011, the US Department of Homeland Security funded research into a virtual border agent kiosk called AVATAR, for Automated Virtual Agent for Truth Assessments in Real-Time, and tested it at the US-Mexico border on low-risk travellers who volunteered to participate. In the following years, the…
17 May 2018
In May 2018, US Immigration and Customs Enforcement abandoned the development of machine learning software intended to mine Facebook, Twitter, and the open Internet to identify terrorists. The software, announced in the summer of 2017, had been a key element of President Donald Trump's "extreme…
18 May 2018
In May 2018, Google announced an AI system to carry out tasks such as scheduling appointments over the phone using natural language. A Duplex user wanting to make a restaurant booking, for example, could hand the task off to Duplex, which would make the phone call and negotiate times and numbers. In…
15 Jun 2018
In June 2018, a panel set up to examine the partnerships between Alphabet's DeepMind and the UK's NHS expressed concern that the revenue-less AI subsidiary would eventually have to prove its value to its parent. Panel chair Julian Huppert said DeepMind should commit to a business model, either non…
26 Jul 2018
In 2018, the chair of the London Assembly's police and crime committee called on London's mayor to cut the budget of the Mayor's Office for Policing and Crime, which provides oversight, in order to pay for AI systems. The intention was that the efficiencies of adopting AI would free up officers'…
10 Sep 2018
In September 2018, AI Now co-founder Meredith Whittaker sounded the alarm about the potential for abuse of the convergence of neuroscience, human enhancement, and AI in the form of brain-computer interfaces. Part of Whittaker's concern was that the only companies with the computational power…
21 Sep 2018
In 2017, the head of China’s security and intelligence systems, Meng Jianzhu, called on security forces to break down barriers to data sharing in order to use AI and cloud computing to find patterns that could predict and prevent terrorist attacks. Meng also called for increased integration of the…
21 Sep 2018
In 2018 a report from the Royal United Services Institute found that UK police were testing automated facial recognition, crime location prediction, and decision-making systems, but were offering little transparency in evaluating them. An automated facial recognition system trialled by the South Wales…
27 Sep 2018
In 2014, Canada began experiments introducing automated decision-making algorithms into its immigration systems to support evaluation of some of the country's immigrant and visitor applications. In a 2018 study, Citizen Lab and NewsDeeply found that the use of AI was expanding despite concerns about bias…