Discrimination

27 Sep 2018
In 2014, Canada began experimenting with automated decision-making algorithms in its immigration system to support the evaluation of some of the country's immigrant and visitor applications. A 2018 study by Citizen Lab and NewsDeeply found that the use of AI was expanding despite concerns about bias.
In 2012, Durham Constabulary, in partnership with academics at Cambridge University, began developing the Harm Assessment Risk Tool (HART), an artificial intelligence system designed to predict whether suspects are at low, moderate, or high risk of committing further crimes over the following two years.
In a draft January 2018 report obtained by Foreign Policy and produced at the request of US Customs and Border Protection Commissioner Kevin McAleenan, the Department of Homeland Security called for continuous vetting of Sunni Muslim immigrants deemed to have "at-risk" profiles.
30 Nov 2017
A paper published in the Proceedings of the National Academy of Sciences explains the methods used by a team of computer scientists to derive accurate, neighbourhood-level estimates of the racial, economic, and political characteristics of 200 US cities from images collected by Google Street View in 2013 and 2014.
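The core idea described in the paper is to detect and classify vehicles in millions of street images and then relate per-neighbourhood vehicle statistics to known census and voting data, so that demographics can be predicted where survey data are sparse. The sketch below illustrates only that final regression step under hypothetical data; the detection stage, variable names, and numbers are all assumptions for illustration, not taken from the study.

    # Illustrative sketch only: the published study detected and classified cars
    # in millions of Street View images with deep learning, then related per-area
    # vehicle statistics to census and voting data. Here the detection step is
    # assumed to have already produced per-neighbourhood vehicle-category counts,
    # and all names and numbers (vehicle_counts, median_income, ...) are hypothetical.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical data: rows = neighbourhoods, columns = counts of detected
    # vehicle categories (e.g. pickups, sedans, luxury cars, minivans).
    vehicle_counts = rng.poisson(lam=20, size=(500, 4)).astype(float)

    # Hypothetical "ground truth" from census data, e.g. median household
    # income in thousands of dollars, loosely related to the vehicle mix.
    median_income = (
        40 + 1.5 * vehicle_counts[:, 2] - 0.8 * vehicle_counts[:, 0]
        + rng.normal(scale=5, size=500)
    )

    # Fit on neighbourhoods that have census labels, predict for the rest.
    X_train, X_test, y_train, y_test = train_test_split(
        vehicle_counts, median_income, test_size=0.5, random_state=0
    )
    model = Ridge(alpha=1.0).fit(X_train, y_train)
    print("Held-out R^2:", round(model.score(X_test, y_test), 2))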
Mothers of black male teenagers in Chicago fear their children will be added to the Chicago Police Department's gang database. As of the end of 2017, the database contained the names of 130,000 people suspected of being gang members, 90% of them black or Latino.
20 Dec 2017
Research from ProPublica in December 2017 found that dozens of companies, including Verizon, Amazon, and Target, were using Facebook to target job ads in ways that exclude older workers. Excluding older workers from recruitment is illegal under US law, but Facebook's system allows advertisers to specify precisely who should see their ads.
A paper by Michael Veale (UCL) and Reuben Binns (Oxford), "Fairer Machine Learning in the Real World: Mitigating Discrimination Without Collecting Sensitive Data", proposes three potential approaches to deal with hidden bias and unfairness in algorithmic machine learning systems. Often, such bias cannot even be detected or measured because the organisations deploying the systems do not collect the sensitive attributes, such as race or gender, that an audit would require.
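As background to why the absence of sensitive data is a problem in the first place (this is not one of the paper's three proposals), a standard fairness audit such as a disparate-impact ratio cannot be computed without knowing each person's group membership. The sketch below uses hypothetical data and names.

    # Minimal sketch of why sensitive attributes matter for fairness auditing:
    # computing a disparate-impact ratio (a standard metric, not the paper's
    # method) requires knowing each person's group membership. Data are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical model decisions (1 = favourable outcome) and sensitive labels.
    decisions = rng.integers(0, 2, size=1000)
    group = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])

    def disparate_impact_ratio(decisions, group, protected="B", reference="A"):
        """Ratio of favourable-outcome rates: protected group vs reference group."""
        rate_protected = decisions[group == protected].mean()
        rate_reference = decisions[group == reference].mean()
        return rate_protected / rate_reference

    ratio = disparate_impact_ratio(decisions, group)
    # A common rule of thumb treats a ratio below 0.8 as evidence of adverse impact.
    print(f"Disparate impact ratio: {ratio:.2f}")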
In the remote western region of Xinjiang, the Chinese government is using new technology and human monitors to track every aspect of citizens' lives. The government, which has gradually increased restrictions in the region over the last ten years in response to unrest and violent attacks, justifies the surveillance as a counter-terrorism measure.
03 Sep 2016
In May 2014 the Polish Ministry of Labor and Social Policy (MLSP) introduced a scoring system to distribute unemployment assistance. Citizens are divided into three categories based on their "readiness" to work, where they live, disabilities, and other data. Assignment to a given category determines the kind of assistance an unemployed person is offered.
18 Oct 2016
A 2016 report, "The Perpetual Line-Up", from the Center on Privacy & Technology at Georgetown University's law school, based on records from dozens of US police departments, found that African-Americans are more likely than others to have their images captured, analysed, and reviewed during computerised facial recognition searches.
20 May 2015
In 2015, a newly launched image recognition function built into Yahoo's Flickr image hosting site automatically tagged images of black people with tags such as "ape" and "animal", and also tagged images of concentration camps with "sport" or "jungle gym". The company responded to user complaints by removing the offending tags.
04 Feb 2013
In 2013, Harvard professor Latanya Sweeney found that racial discrimination pervades online advertising delivery. In a study, she found that searches on black-identifying names such as Revon, Lakisha, and Darnell were 25% more likely to be served an ad from Instant Checkmate suggestive of an arrest record.
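For readers wondering how a "25% more likely" difference in ad delivery can be checked for statistical significance, a two-proportion z-test is one standard approach. The sketch below is illustrative only; the counts are hypothetical and this is not Sweeney's actual data or analysis.

    # Hedged illustration, not Sweeney's analysis: a two-proportion z-test of
    # whether arrest-record ads are served at different rates for two groups of
    # name searches. The counts below are hypothetical.
    from math import sqrt, erfc

    # Hypothetical counts: (searches, searches that returned an arrest-record ad).
    black_identifying = (1000, 600)
    white_identifying = (1000, 480)

    def two_proportion_ztest(a, b):
        """Two-sided z-test for a difference between two proportions."""
        (n1, k1), (n2, k2) = a, b
        p1, p2 = k1 / n1, k2 / n2
        pooled = (k1 + k2) / (n1 + n2)
        z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        p_value = erfc(abs(z) / sqrt(2))  # two-sided, normal approximation
        return p1 / p2, z, p_value

    relative_rate, z, p = two_proportion_ztest(black_identifying, white_identifying)
    print(f"Relative rate: {relative_rate:.2f}x, z = {z:.2f}, p = {p:.3g}")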
08 Apr 2016
In September 2016, the US Federal Trade Commission hosted a workshop to study the impact of big data analysis on poor people, whose efforts to escape poverty may be hindered by the extensive amounts of data being gathered about them.
10 May 2016
In 2016, VICE News discovered that the confidential and "shadowy" World-Check database, which has wrongly linked individuals to terrorist activity, was being widely used by British police and intelligence agencies. The Charity Commission is also a customer, using the database to screen charities and aid organisations.
21 Mar 2016
By 2016, numerous examples had surfaced of bias in facial recognition systems: they failed to recognise non-white faces, labelled non-white people as "gorillas", "animals", or "apes" (Google, Flickr), told Asian users their eyes were closed when taking photographs (Nikon), and tracked white faces but not black ones (HP).
05 Sep 2016
In September 2016, an algorithm assigned to pick the winners of a beauty contest examined selfies sent in by 600,000 entrants from India, China, the US, and across Africa, and selected 44 finalists, almost all of whom were white. Of the six non-white finalists, all were Asian and only one had dark skin.