AI

In 2012, Durham Constabulary, in partnership with academics at Cambridge University, began developing the Harm Assessment Risk Tool (HART), an artificial intelligence system designed to predict whether suspects are at low, moderate, or high risk of committing further crimes in the next two years.
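HART is reported to be built on a random forest classifier. The sketch below shows the general shape of a three-class risk model of that kind; the features, labels, and data are hypothetical illustrations, not Durham's actual model.

```python
# Minimal sketch of a three-class risk classifier in the spirit of HART,
# which is publicly described as a random forest. All feature names,
# labels, and data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(18, 70, n),   # age (hypothetical feature)
    rng.poisson(2, n),         # number of prior offences (hypothetical)
    rng.integers(0, 10, n),    # years since last offence (hypothetical)
])
y = rng.choice(["low", "moderate", "high"], n)  # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.predict(X_test[:5]))
```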
27 Feb 2018
Under a secret deal beginning in 2012, the data mining company Palantir provided software to a New Orleans Police Department programme that used a variety of data, such as ties to gang members, criminal histories, and social media activity, to predict the likelihood that individuals would commit acts of violence or become victims of it.
20 Feb 2018
In 2018, pending agreement from its Institutional Review Board, the University of St Thomas in Minnesota will trial sentiment analysis software in the classroom. The software analyses the expressions on students' faces, captured by a high-resolution webcam, to estimate each student's emotional state and engagement.
In a study of COMPAS, an algorithmic tool used in the US criminal justice system, Dartmouth College researchers Julia Dressel and Hany Farid found that the algorithm predicted reoffending no better than volunteers recruited via a crowdsourcing site. COMPAS, a proprietary risk assessment algorithm developed by Northpointe (now Equivant), is widely used to estimate a defendant's likelihood of recidivism.
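Dressel and Farid also reported that a linear classifier given only two features, age and total number of prior convictions, matched COMPAS's accuracy. A minimal sketch of that kind of two-feature baseline, using synthetic data rather than the Broward County records the study analysed:

```python
# Two-feature logistic regression baseline of the kind Dressel and Farid
# found to match COMPAS's accuracy. The data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5000
age = rng.integers(18, 70, n)
priors = rng.poisson(3, n)
# Synthetic labels loosely correlated with youth and prior record
p = 1 / (1 + np.exp(-(0.15 * priors - 0.03 * (age - 40))))
reoffended = rng.random(n) < p

X = np.column_stack([age, priors])
clf = LogisticRegression()
print(cross_val_score(clf, X, reoffended, cv=5).mean())
```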
The first signs of the combination of AI and surveillance are beginning to emerge. In December 2017, the digital surveillance manufacturer IC Realtime launched a web and app platform named Ella that uses AI to analyse video feeds and make them instantly searchable - like a Google for CCTV.
02 Jan 2018
In February 2018 the Canadian government announced a three-month pilot partnership with the artificial intelligence company Advanced Symbolics to monitor social media posts with a view to predicting rises in regional suicide risk. Advanced Symbolics will look for trends by analysing posts from a sample of 160,000 social media accounts across Canada.
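Advanced Symbolics has not published its method, but regional trend detection of this general kind can be illustrated with a simple baseline-deviation check; the counts and threshold below are hypothetical.

```python
# Illustrative trend flagging: alert when the latest weekly count of
# risk-related posts rises well above its recent baseline.
# Counts and the z-score threshold are hypothetical.
from statistics import mean, stdev

weekly_counts = [12, 15, 11, 14, 13, 16, 12, 29]

baseline, latest = weekly_counts[:-1], weekly_counts[-1]
z = (latest - mean(baseline)) / stdev(baseline)
if z > 3:
    print(f"possible rise in regional risk (z = {z:.1f})")
```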
02 Jan 2018
EU antitrust regulators are studying how companies gather and use big data with a view to understanding how access to data may close off the market to smaller, newer competitors. Among the companies being scrutinised are the obvious technology companies, such as Google and Facebook, as well as less obvious players outside the technology sector.
11 Jan 2018
In 2017, a study claimed to have shown that artificial intelligence can infer sexual orientation from facial images, reviving the kinds of claims made in the 19th century about inferring character from outer appearance. Despite widespread complaints and criticisms, the study, by Michal Kosinski and Yilun Wang of Stanford University, went on to be published in the Journal of Personality and Social Psychology.
04 Sep 2017
The UK Information Commissioner's Office has published policy guidelines for big data, artificial intelligence, machine learning, and their interaction with data protection law. Applying data protection principles becomes more complex when using these techniques: the volume of data, the ways it is collected and combined, and the opacity of the resulting processing all raise new compliance questions.
17 Oct 2017
A mistake in Facebook's machine translation service led to the arrest and questioning of a Palestinian man by Israeli police. The man, a construction worker on the West Bank, posted a picture of himself leaning against a bulldozer like those that have been used in hit-and-run terrorist attacks, with a caption reading "good morning" in Arabic. Facebook's software mistranslated the caption as "attack them" in Hebrew and "hurt them" in English, and the man was released once the error was discovered.
A paper by Michael Veale (UCL) and Reuben Binns (Oxford), "Fairer Machine Learning in the Real World: Mitigating Discrimination Without Collecting Sensitive Data", proposes three potential approaches to dealing with hidden bias and unfairness in machine learning systems. Often, the cause is training data that encode past discrimination, yet detecting this requires the very sensitive attributes - such as race - that organisations are often unable or unwilling to collect.
04 Oct 2017
In 2017, after protests from children's health and privacy advocates, Mattel cancelled its planned child-focused "Aristotle" smart hub. Aristotle was designed to adapt to and learn about the child as they grew, while controlling devices from night lights to homework aids.
20 May 2015
In 2015, a newly launched image recognition function built into Yahoo's Flickr image hosting site automatically tagged images of black people with tags such as "ape" and "animal", and also tagged images of concentration camps with "sport" or "jungle gym". The company responded to user complaints by deleting the offending tags and acknowledging that its new automated tagging system would sometimes make mistakes.
04 Feb 2013
In 2013, Harvard professor Latanya Sweeney found that racial discrimination pervades online advertising delivery. In a study, she found that searches on black-identifying names such as Trevon, Lakisha, and Darnell are 25% more likely to be served with an ad from Instant Checkmate offering a criminal background check and suggesting that the person searched for may have an arrest record.
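The "25% more likely" figure is a relative rate: the proportion of searches served such an ad for one group of names divided by the proportion for the other. A worked example with hypothetical counts:

```python
# Relative-rate arithmetic behind a claim like "25% more likely".
# The counts below are hypothetical, not Sweeney's data.
black_names_ads, black_names_searches = 500, 1000   # 50% served the ad
white_names_ads, white_names_searches = 400, 1000   # 40% served the ad

rate_black = black_names_ads / black_names_searches
rate_white = white_names_ads / white_names_searches
print(f"relative rate: {rate_black / rate_white:.2f}")  # 1.25 -> 25% more likely
```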
24 Jan 2014
In 2014, DataKind sent two volunteers to work with GiveDirectly, an organisation that makes cash donations to poor households in Kenya and Uganda. In order to better identify villages with households in need, the volunteers developed an algorithm that classified village roofs in satellite images as thatched or metal, using the proportion of thatched roofs as a proxy for poverty.
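Metal roofs are more reflective and so read brighter in satellite imagery than thatched ones, which is what makes roof material recoverable from pixels at all. A toy heuristic along those lines; the brightness threshold and the use of raw pixel means are illustrative assumptions, not DataKind's method.

```python
# Toy roof classifier: treat a bright roof crop as metal, a dark one as
# thatched, then use the thatched share as a poverty proxy. Threshold
# and features are illustrative assumptions.
import numpy as np

def classify_roof(crop: np.ndarray, threshold: float = 0.6) -> str:
    """crop: HxWx3 array of pixel values scaled to [0, 1]."""
    return "metal" if crop.mean() > threshold else "thatched"

crops = [
    np.random.rand(16, 16, 3) * 0.5,        # dark crop -> "thatched"
    0.5 + np.random.rand(16, 16, 3) * 0.5,  # bright crop -> "metal"
]
labels = [classify_roof(c) for c in crops]
thatched_share = labels.count("thatched") / len(labels)
print(labels, thatched_share)
```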
27 Jun 2015
A 2015 study by The Learning Curve found that although 71% of parents believe technology has improved their child's education, 79% were worried about the privacy and security of their child's data, and 75% were worried that advertisers had access to that data. At issue is the privacy and security of the growing volumes of data that schools and education technology companies collect about students.