In 2012, Durham Constabulary, in partnership with computer science academics at Cambridge University, began developing the Harm Assessment Risk Tool (HART), an artificial intelligence system designed to predict whether suspects are at low, moderate, or high risk of committing further crimes over the following two years.
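HART has been widely reported to be built on a random-forest model trained on historical custody records. Purely as an illustrative sketch (the features, data, and labels below are invented, not Durham's), a three-band risk classifier of that general kind might look like this in Python with scikit-learn:

```python
# Illustrative sketch only: a three-band (low / moderate / high) risk
# classifier of the general kind reported for HART, built as a random forest.
# All feature names, data, and labels below are invented, not Durham's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000

# Hypothetical custody-record features: age at arrest, prior custody events,
# and a flag for whether the index offence was violent.
X = np.column_stack([
    rng.integers(18, 70, n),
    rng.poisson(2.0, n),
    rng.integers(0, 2, n),
])
# Hypothetical labels: 0 = low, 1 = moderate, 2 = high risk of reoffending
y = rng.integers(0, 3, n)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Predict a risk band for a new custody record
print(model.predict([[25, 3, 1]]))   # e.g. array([2]) -> "high"
```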
A paper by Michael Veale (UCL) and Reuben Binns (Oxford), "Fairer Machine Learning in the Real World: Mitigating Discrimination Without Collecting Sensitive Data", proposes three potential approaches to dealing with hidden bias and unfairness in machine learning systems. Often, the cause is bias embedded in the data used to train these systems, which organisations struggle to detect or correct because they do not collect the sensitive attributes, such as race or gender, needed to test for it.
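One of the approaches the paper discusses, as summarised here, is to place sensitive attributes with a trusted third party, so the organisation building the model never holds them and receives back only aggregate fairness figures. A minimal sketch of the disparity computation such a party might run (all identifiers and data are hypothetical):

```python
# Minimal sketch: a trusted third party holds sensitive attributes and
# returns only aggregate disparity figures to the model owner.
# All identifiers and data below are hypothetical.
from collections import defaultdict

# Held only by the third party: user_id -> protected group
protected = {"u1": "A", "u2": "B", "u3": "A", "u4": "B", "u5": "B"}

# Sent by the model owner: user_id -> model decision (1 = favourable)
decisions = {"u1": 1, "u2": 0, "u3": 1, "u4": 0, "u5": 1}

def disparity_report(decisions, protected):
    """Return the favourable-outcome rate per protected group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for uid, outcome in decisions.items():
        group = protected[uid]
        totals[group] += 1
        favourable[group] += outcome
    return {g: round(favourable[g] / totals[g], 2) for g in totals}

print(disparity_report(decisions, protected))
# e.g. {'A': 1.0, 'B': 0.33} -> only aggregates leave the third party
```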
The first signs of the combination of AI and surveillance are beginning to emerge. In December 2017, the digital surveillance manufacturer IC Realtime launched a web and app platform named Ella that uses AI to analyse video feeds and make them instantly searchable - like a Google for CCTV.
Unisys' LineSight software uses advanced data analytics and machine learning to help border guards decide whether to inspect travellers more closely before admitting them into the country. Unisys says the software assesses each traveller's risk beginning with their initial intent to travel and refines that assessment as new information becomes available along the journey.
In a study of COMPAS, an algorithmic tool used in the US criminal justice system, Dartmouth College researchers Julia Dressel and Hany Farid found that the algorithm did no better than volunteers recruited via a crowdsourcing site. COMPAS, a proprietary risk assessment algorithm developed by Northpointe (now Equivant), is used to predict the likelihood that a defendant will reoffend.
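Dressel and Farid also reported that a simple linear classifier given just two features, age and number of prior convictions, matched COMPAS's accuracy. A rough sketch of that style of baseline on simulated data (not the Broward County records used in the study):

```python
# Sketch of the Dressel & Farid style baseline: a two-feature logistic
# regression for recidivism prediction. Data here are simulated, not the
# Broward County records used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
age = rng.integers(18, 65, n)
priors = rng.poisson(2.5, n)

# Simulated two-year recidivism labels loosely tied to the two features
logit = 0.08 * priors - 0.03 * (age - 40)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, priors])
clf = LogisticRegression()
print("mean cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```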
A new examination of documents detailing the US National Security Agency's SKYNET programme shows that SKYNET carries out mass surveillance of Pakistan's mobile phone network and then uses a machine learning algorithm to score each of its 55 million users on their likelihood of being a terrorist.
04 Feb 2013
In 2013, Harvard professor Latanya Sweeney found that racial discrimination pervades online advertising delivery. In a study, she found that searches on black-identifying names such as Trevon, Lakisha, and Darnell were 25% more likely than searches on white-identifying names to be served an ad from Instant Checkmate offering a criminal background check, suggestive of an arrest record.
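The "25% more likely" figure is a relative rate: the share of searches on black-identifying names that returned an arrest-suggestive ad, divided by the corresponding share for white-identifying names. A small sketch of that calculation on invented counts (not Sweeney's actual data):

```python
# Sketch of the relative-rate calculation behind a "25% more likely" claim.
# Counts below are invented for illustration, not Sweeney's actual data.
served_black, total_black = 600, 1000   # arrest-suggestive ads / searches
served_white, total_white = 480, 1000

rate_black = served_black / total_black     # 0.60
rate_white = served_white / total_white     # 0.48
print("relative likelihood:", rate_black / rate_white)   # 1.25 -> "25% more likely"
```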
24 Jan 2014
In 2014, DataKind sent two volunteers to work with GiveDirectly, an organisation that makes cash donations to poor households in Kenya and Uganda. In order to better identify villages with households in need, the volunteers developed an algorithm that classified village roofs in satellite images as thatched or metal, using the proportion of thatched roofs as a proxy for poverty.
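As reported, the idea was to distinguish thatched from metal roofs in satellite imagery and treat the share of thatched roofs as a poverty signal. A minimal sketch of that kind of pipeline, using invented colour and texture features rather than the volunteers' actual model:

```python
# Illustrative sketch: classify roof image patches as thatched vs metal from
# simple colour statistics, then rank a village by its thatched-roof share.
# Features, data, and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)

def fake_patch_features(n, thatched):
    """Mean RGB plus texture variance for simulated roof patches."""
    base = [110, 90, 60] if thatched else [170, 170, 175]   # brown vs grey
    rgb = rng.normal(base, 15, size=(n, 3))
    texture = rng.normal(30 if thatched else 8, 4, size=(n, 1))
    return np.hstack([rgb, texture])

X = np.vstack([fake_patch_features(500, True), fake_patch_features(500, False)])
y = np.array([1] * 500 + [0] * 500)          # 1 = thatched, 0 = metal

clf = GradientBoostingClassifier().fit(X, y)

# Score a "village": share of its roof patches predicted to be thatched
village = fake_patch_features(40, thatched=True)
print("thatched share:", clf.predict(village).mean())
```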
20 May 2015
In 2015, a newly launched image recognition function built into Yahoo's Flickr image hosting site automatically tagged images of black people with tags such as "ape" and "animal", and also tagged images of concentration camps with "sport" or "jungle gym". The company responded to user complaints by removing the offending tags and saying it would work to improve the accuracy of its automatic tagging.
22 Jun 2015
In 2015, Facebook's AI lab announced that its researchers had devised an experimental algorithm that could recognise people in photographs even when their faces are hidden or turned away. The researchers trained a sophisticated neural network on a dataset of 40,000 photographs taken from Flickr, some showing people with their faces fully visible and others not, so that the system learned to identify individuals from cues such as hair, clothing, body shape, and pose.
27 Jun 2015
A 2015 study by The Learning Curve found that although 71% of parents believed technology had improved their child's education, 79% were worried about the privacy and security of their child's data, and 75% were worried that advertisers had access to that data. At issue is the privacy and security of the growing volumes of data that schools and education technology companies collect about students.
23 Mar 2016
In 2016, the Big Data lab at the Chinese search engine company Baidu published a study of an algorithm it had developed that it claimed could predict crowd formation, and suggested it could be used to warn authorities and individuals of public safety threats stemming from unusually large crowds. The approach used the volume of Baidu Maps route queries for a location, together with positioning data, to forecast how large a crowd would gather there hours in advance.
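As described in coverage of the study, the method treats the volume of map route queries for a venue as a leading indicator of how many people will actually arrive there. A toy sketch of that idea, regressing near-future crowd size on lagged query counts (all numbers simulated):

```python
# Toy sketch: use lagged map-query volume for a venue to forecast crowd size
# one hour ahead, in the spirit of the Baidu study. All data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
hours = 500
queries = rng.poisson(200, hours).astype(float)
# Simulated ground truth: crowd size lags query volume by roughly one hour
crowd = 2.5 * np.roll(queries, 1) + rng.normal(0, 20, hours)

# Features: query volume in each of the previous three hours
X = np.column_stack([np.roll(queries, k) for k in (1, 2, 3)])[3:]
y = crowd[3:]

model = LinearRegression().fit(X[:-50], y[:-50])
forecast = model.predict(X[-50:])
print("example forecast vs actual:", forecast[0].round(1), y[-50].round(1))
```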
03 May 2016
In 2015, London's Royal Free, Barnet, and Chase Farm hospitals agreed to provide Google's DeepMind subsidiary with access to an estimated 1.6 million NHS patient records, including full names and medical histories. The company claimed the information, which would remain encrypted so that Google employees could not identify individual patients, would be used to develop an app, Streams, to help clinicians detect acute kidney injury.
23 May 2016
Computer programs that perform risk assessments of crime suspects are increasingly common in American courtrooms, and are used at every stage of the criminal justice system to determine who may be set free or granted parole and how large a bond they must pay. By 2016, the results of these assessments were also being given to judges to inform criminal sentencing in several states.
01 Jun 2016
The price of using voice search is that Google records many of the conversations that take place in the presence of its devices. Users wishing to understand what Google has captured can do so by accessing the portal the company introduced in 2015. Their personal history pages on the site include both a page of web activity and a separate page listing the audio recordings Google has stored, which users can review and delete.
05 Sep 2016
In September 2016, an algorithm assigned to pick the winners of a beauty contest examined selfies sent in by 600,000 entrants from India, China, the US, and across Africa, and selected 44 finalists, almost all of whom were white. Of the six non-white finalists, all were Asian and only one had visibly dark skin.
21 Sep 2016
In 2016, researchers at MIT's Computer Science and Artificial Intelligence Laboratory developed a new device that measures heartbeats by bouncing wireless signals off a person's body. The researchers claim the system is 87% accurate in recognising joy, pleasure, sadness, or anger from changes in a person's heartbeat and breathing.
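The emotion-recognition stage of such a system amounts to a classifier over features derived from the recovered heartbeat and breathing signals. The sketch below covers that stage only, with invented feature values; the radio-frequency processing that extracts the beats is not reproduced:

```python
# Sketch of the emotion-classification stage of a system of this kind:
# a classifier over heartbeat and breathing features. All data are simulated
# with invented per-emotion means; no RF signal processing is included.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
emotions = ["joy", "pleasure", "sadness", "anger"]

def simulate(emotion, n=300):
    """Invented means for inter-beat interval (s), its variability (s),
    and breathing rate (breaths/min)."""
    means = {
        "joy":      (0.75, 0.06, 16),
        "pleasure": (0.85, 0.08, 13),
        "sadness":  (0.95, 0.04, 11),
        "anger":    (0.70, 0.03, 18),
    }[emotion]
    return rng.normal(means, (0.05, 0.01, 1.5), size=(n, 3))

X = np.vstack([simulate(e) for e in emotions])
y = np.repeat(emotions, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```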
27 Sep 2016
In 2016, researchers at the University of Texas at Austin and Cornell University demonstrated that a neural network trained on image datasets can successfully identify faces and objects that have been blurred, pixellated, or obscured by the P3 privacy system. In some cases, the algorithm identified the obscured content far more often than chance would allow.
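The underlying technique is straightforward: rather than trying to reverse the obfuscation, a classifier is trained directly on obfuscated examples of each known identity or object. A self-contained sketch of the same principle, with scikit-learn's digits dataset standing in for the face datasets used in the study:

```python
# Sketch of the "defeating obfuscation" idea: train on pixellated images and
# classify them directly, rather than reconstructing the originals.
# The digits dataset stands in for the face datasets used in the study.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()

def pixellate(img, block=2):
    """Crude mosaic: replace each block x block cell with its average."""
    out = img.copy()
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            out[i:i + block, j:j + block] = img[i:i + block, j:j + block].mean()
    return out

X = np.array([pixellate(im).ravel() for im in digits.images])
y = digits.target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print("accuracy on pixellated images:", clf.score(X_te, y_te))
# Recognition stays well above the 10% chance level despite the mosaic.
```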
24 Nov 2016
In 2016, researchers in China claimed an experimental algorithm could correctly identify criminals from images of their faces 89% of the time. The research involved training an algorithm on 90% of a dataset of 1,856 photos of Chinese males between 18 and 55 with no facial hair or markings. Among the photos were ID images of convicted criminals alongside images of men with no criminal record; the remaining 10% of the dataset was held back to test the classifier.
04 Sep 2017
The UK Information Commissioner's Office has published policy guidelines on big data, artificial intelligence, machine learning, and their interaction with data protection law. Applying data protection principles becomes more complex when using these techniques. The volume of data involved, the ways it is collected and combined, and the opacity of algorithmic processing all make it harder to meet requirements such as transparency, consent, and purpose limitation.