Content type: Examples
In October 2018, British home secretary Sajid Javid apologised to more than 400 migrants, among them Gurkha soldiers and Afghans who had worked for the British armed forces, who had been forced to provide DNA samples when applying to live and work in the UK. Applicants sometimes provide DNA samples voluntarily to prove their relationship to someone already in the UK, but the samples are not supposed to be mandatory. An internal review indicated that more people than the initially estimated 449 had received DNA…
In December 2018, Florida citizen Peter Sean Brown filed a federal lawsuit against the Monroe County Sheriff's Office for arresting and detaining him for three weeks on the claim that he was an illegal alien from Jamaica. Even though Brown offered to show the sheriff his birth certificate, explained that he had been wrongfully detained 20 years before, and the jail's own records listed his birthplace as Philadelphia, PA, the sheriff relied on a form sent by Immigration and Customs Enforcement. Brown…
In December 2018, in the wake of the Windrush scandal, the National Police Chiefs' Council, which represents police chiefs across England and Wales, agreed to stop passing information about people suspected of being in the country illegally to deportation authorities. The measures also ban officers from checking the police national computer solely to establish immigration status. Police said they believed that their too-close relationship with immigration authorities in aid of the government's "…
In a report released in December 2018, the UK's National Audit Office examined the management of information and immigration casework at the Home Office that led to the refusal of services, detention, and removal of Commonwealth citizens who came to the UK and were granted indefinite leave to remain between 1948 and 1973 (the so-called "Windrush generation") but were never given documentation to prove their status. The NAO concluded that the Home Office failed to adequately consider its duty of care in…
In a November 2018 report based on a year's study of the use of data scores, Data Justice Lab provided a comprehensive look at the use of data-driven citizen scoring in government, focusing on six case studies drawn from local councils in the UK. The report noted that there is no systematic information about where and how these systems are being deployed, that there are no standard practices or common approaches, and that local government transparency varies widely. While some councils develop…
In November 2018, worried American parents wishing to check out prospective babysitters and dissatisfied with criminal background checks began paying $24.99 for a scan from the online service Predictim, which claimed to use "advanced artificial intelligence" to offer an automated risk rating. Predictim based its scores in part on Facebook, Twitter, and Instagram posts - applicants were required to share broad access to their accounts - and offered no explanation of how it reached its risk…
In November 2018 reports emerged that immigrants heading north from Central America to the US border increasingly ensure they are accompanied by children, because "family units" are known to be less likely to be deported, at least temporarily, and smugglers charge less than half as much when a minor is in the group because their own risk is lower. Some parents have given their children - sometimes for cash - to other adults such as a relative, godparent, or, sometimes, an unrelated person.…
In November 2018, researchers at Sweden's Lund University, the US's Worcester Polytechnic Institute, and the UK's Oxford University announced that in August the US State Department had begun using a software program they had designed that uses AI to find the best match for a refugee's needs, including access to jobs, medical facilities, schools, and nearby migrants who speak the same language. Refugees matched by the program, known as "Annie MOORE", were finding jobs within 90 days about a…
In November 2018 the UK Information Commissioner's Office issued an enforcement notice against London's Metropolitan Police, finding that there had been multiple and serious breaches of data protection law in the organisation's use of the Gangs Violence Matrix, which it had operated since 2012. The ICO documented failures of oversight and coherent guidance, and an absence of basic data protection practices such as encryption and agreements covering data sharing. Individuals whose details are…
As early as 2008, the Chinese telecommunications giant ZTE began helping Venezuela develop a system similar to the identity system used in China to track social, political, and economic behaviour. By 2018, Venezuela was rolling out its "carnet de la patria", a smart-card "fatherland" ID card that was being increasingly linked to the government-subsidised health, food, and other social programmes most Venezuelans relied on for survival. In 2017, Venezuela hired ZTE to build a comprehensive…
After an 18-month investigation involving interviews with 160 life insurance companies, in January 2019 the New York Department of Financial Services, the state's top financial regulator, announced it would allow life insurers to use data from social media and other non-traditional sources to set premium rates for their customers. Insurers will be required to demonstrate that their use of the information does not unfairly discriminate against specific customers. New York is the first state to issue specific…
A study published in February 2019 found that 95% of the predictive accuracy for an individual can be achieved solely by analysing their social ties; the person's data is not needed. Given as few as eight or nine of an individual's contacts, it should be possible to profile the individual even if they don't use the platform. This has profound privacy implications for protesters, elections, and misinformation campaigns.
https://www.nature.com/articles/s41562-018-0510-5.epdf
Writer: James P.…
A study published in January 2019 found that a form of facial recognition technology that interprets emotions in facial expressions assigns more negative emotions to black men's faces than white men's faces. The problem is the latest in a series of ways that facial recognition has failed for non-white subjects. The study used the official photographs of 400 professional NBA basketball players and found two types of bias: black faces were consistently scored as angrier than white faces for every…
In 2018, technology companies and medical providers were experimenting with machine learning and AI to mine health records and online posts to identify patterns linked to suicide, hoping to be able to predict, and therefore prevent, such attempts. On the academic side, a pilot programme conducted by the US Department of Veterans Affairs, REACH.VET, attempted to identify veterans at high risk for self-harm. On the commercial side, companies such as Facebook began experimenting with suicide…
In December 2018, Facebook provided an update on the civil rights audit it had asked civil rights leader Laura Murphy to undertake in May. Based on advice Murphy gathered from 90 civil society organisations, Facebook said it had expanded its policy prohibiting voter suppression, updated its policy to ban misrepresentation about how to vote, begun sending information about voting to third-party fact checkers for review, and was ramping up efforts to encourage voter registration and engagement.
https…
In December 2018 Walmart was granted a patent for a new listening system for capturing and analysing sounds in shopping facilities. The system would be able to compare rustling shopping bags and cash register beeps to detect theft, monitor employee interactions with customers, and even listen to what customers are saying about products. The company said it had no plans to deploy the system in its retail stores. However, the patent shows that, like the systems in use in Amazon's cashier-less Go…
The pregnancy apps many women were using in December 2018 proved incapable of handling miscarriages, even though up to 20% of all known pregnancies end that way. Users had only two choices: allow the apps to continue sending alerts celebrating the pregnancy's progress, or delete the pregnancy entirely, losing all the records they'd saved - information that doctors routinely request. Many menstruation apps similarly lack the ability to adapt to long breaks and disrupted cycles, and many…
In February 2019, the World Food Programme, a United Nations aid agency, announced a five-year, $45 million partnership with the data analytics company Palantir. WFP, the world's largest humanitarian organisation focusing on hunger and food security, hoped that Palantir, better known for partnering with police and surveillance agencies, could help analyse large amounts of data to create new insights from the data WFP collects from the 90 million people in 80 countries to whom it distributes 3…
The Lumi by Pampers nappy tracks a child's urine (not bowel movements) and comes with an app that helps parents "Track just about everything". The activity sensor placed on the nappy also tracks a baby's sleep.
Concerns over security and privacy have been raised, given baby monitors can be susceptible to hackers and any app that holds personal information could potentially expose that information.
Experts say the concept could be helpful to some parents but that there…
Le Monde exposed anti-IVG (anti-abortion) advertising on Facebook as part of a broader campaign led by the anti-abortion website IVG.net. The advertisements relied on stock photos and fake testimonies posted in public Facebook groups and promoted to young women. Most of the posts attempt to promote the idea that abortion leads to mental health issues, a claim that has been shown to be fallacious.
https://www.lemonde.fr/les-decodeurs/article/2018/07/11/les-anti-ivg-ciblent-les-jeunes-femmes-grace-aux…
The French website IVG.net, the first Google result when typing IVG (Interruption Volontaire de Grossesse, French for abortion), has been exposed as an anti-abortion website spreading misinformation. Offering an official-looking "Numero vert" (a free-to-call phone number), IVG.net attempts to convince pregnant women calling the service that abortion is a high-risk operation that will have a terrible impact on their health and personal life, pressuring women not to undergo the operation. The…
In 2009, Amazon Kindle readers were surprised to find that their copies of George Orwell’s 1984 were missing from their devices. Amazon had remotely deleted these copies after it found out from the publisher that the third-party vendor selling them did not own the rights to the books. Amazon refunded the cost of the books but told the affected readers that they could no longer read the books and that the titles were “no longer available for purchase.” This was not the first time that…
From 2014 to early 2017, Amazon used an artificial intelligence (AI) hiring tool to review prospective employees’ resumes and select qualified candidates, based on Amazon’s hiring decisions over the previous ten years; however, the tool was much more effective at selecting male candidates than at selecting the most qualified ones, because Amazon had hired predominantly male candidates in the past. The hiring tool learned to discriminate against resumes that included the word “women’…
The American Civil Liberties Union (ACLU) used Rekognition, Amazon’s facial recognition software, to compare images of US lawmakers to a publicly available database of 25,000 mugshot photos. The ACLU’s study validated research that has shown that facial recognition technology is more likely to produce false matches for women and people with darker skin. Amazon’s software misidentified 28 lawmakers as being the people in mugshots, and these false matches were disproportionately of members of…
More than 450 Amazon employees delivered a letter to Jeff Bezos and other Amazon executives, demanding that the company immediately stop selling facial recognition software to law enforcement, sever connections to companies like Palantir that help immigration authorities track and deport immigrants, and provide greater employee oversight when making ethical decisions. The employees protested against Amazon allowing its technologies to be used for mass surveillance and abused by those in power,…
Amazon shareholders rejected two non-binding proposals governing its facial recognition software, Rekognition: one would have limited sales of Rekognition to governments unless a board determined that such sales would not violate people’s rights, and the other called for a study of the extent to which Rekognition infringed people’s privacy and other rights. These two proposals were presented to shareholders despite Amazon’s effort to stop the votes, an effort thwarted by the US Securities and Exchange…
Millions of people own smart home devices like the Amazon Echo and Echo Dot—equipped with the Alexa cloud-based artificial intelligence service—which have concerning implications for privacy rights. While Amazon’s own policies promise that only the user and Amazon will listen to what those devices record, it was recently reported that Amazon failed to follow its own policy when it erroneously shared one user’s information with a total stranger.
In August 2018, a German customer exercised his…
A 19-year-old medical student was raped and drowned in the River Dreisam in October 2016. The police identified the accused by a hair found at the scene of the crime. The data recorded by the health app on his phone helped identify his location and recorded his activities throughout the day. A portion of his activity was recorded as “climbing stairs”, which authorities were able to correlate with the time he would have dragged his victim down the river embankment, and then climbed…
The body of a 57-year-old woman was found in the laundry room of her home in Valley View, Adelaide, in September 2016. Her daughter-in-law, who was in the house at the time of the murder, claimed that she had been tied up by a group of men who entered the house and that she managed to escape when they left. However, the data from the victim's smartwatch did not corroborate her story. The prosecution alleged that the watch had recorded data consistent with a person going into shock and losing consciousness. "The…
The 90-year-old suspect went to his stepdaughter's house in San Jose, California, for a brief visit. Five days later, the body of his stepdaughter, Karen, was discovered by a co-worker in her house, with fatal lacerations on her head and neck. The police used the data recorded by the victim's Fitbit fitness tracker to determine the time of the murder. It was reported that the Fitbit data showed that her heart rate had spiked significantly around 3:20 p.m. on September 8, when her stepfather was…