Advanced Search
Content Type: Examples
In December 2018, in the wake of the Windrush scandal, the National Police Chiefs' Council, which represents police chiefs across England and Wales, agreed to stop passing information about people suspected of being in the country illegally to deportation authorities. The measures also bar officers from checking the police national computer solely to establish immigration status. Police said they believed that their too-close relationship with immigration authorities in aid of the government's "…
Content Type: Examples
In a report released in December 2018, the UK's National Audit Office examined the management of information and immigrant casework at the Home Office that led to the refusal of services, detention, and removal of Commonwealth citizens who came to the UK and were granted indefinite leave to remain between 1948 and 1973 - the so-called "Windrush generation" - but were never given documentation to prove their status. The NAO concluded that the Home Office failed to adequately consider its duty of care in…
Content Type: Examples
In a November 2018 report based on a year's study of the use of data scores, Data Justice Lab provided a comprehensive look at the use of data-driven citizen scoring in government, particularly focusing on six case studies drawn from local councils in the UK. The report noted there is no systematic information about where and how these systems are being deployed, there are no standard practices or common approaches, and local government transparency varied widely. While some councils develop…
Content Type: Examples
In November 2018, worried American parents wishing to check out prospective babysitters and dissatisfied with criminal background checks began paying $24.99 for a scan from the online service Predictim, which claimed to use "advanced artificial intelligence" to offer an automated risk rating. Predictim based its scores in part on Facebook, Twitter, and Instagram posts - applicants were required to share broad access to their accounts - and offered no explanation of how it reached its risk…
Content Type: Examples
In November 2018 reports emerged that immigrants heading north from Central America to the US border are more and more often ensuring they are accompanied by children because "family units" are known to be less likely to be deported, at least temporarily, and smugglers charge less than half as much when a minor is in the group because their risk is less. Some parents have given their children - sometimes for cash - to other adults such as a relative, godparent, or, sometimes, unrelated person.…
Content Type: Examples
In November 2018, researchers at Sweden's University of Lund, the US's Worcester Polytechnic Institute, and the UK's Oxford University announced that in August the US State Department had begun using a software program they had designed that uses AI to find the best match for a refugee's needs, including access to jobs, medical facilities, schools, and nearby migrants who speak the same language. Known as "Annie MOORE", the program matched refugees who were finding jobs within 90 days about a…
Content Type: Examples
In November 2018 the UK Information Commissioner's Office issued an enforcement notice against London's Metropolitan Police, finding that there had been multiple and serious breaches of data protection law in the organisation's use of the Gangs Violence Matrix, which it had operated since 2012. The ICO documented failures of oversight and coherent guidance, and an absence of basic data protection practices such as encryption and agreements covering data sharing. Individuals whose details are…
Content Type: Examples
As early as 2008, the Chinese telecommunications giant ZTE began helping Venezuela develop a system similar to the identity system used in China to track social, political, and economic behaviour. By 2018, Venezuela was rolling out its "carnet de la patria", a smart-card "fatherland" ID card that was being increasingly linked to the government-subsidised health, food, and other social programmes most Venezuelans relied on for survival. In 2017, Venezuela hired ZTE to build a comprehensive…
Content Type: Examples
After an 18-month investigation involving interviews with 160 life insurance companies, in January 2019 the New York Department of Financial Services, the state's top financial regulator, announced it would allow life insurers to use data from social media and other non-traditional sources to set premium rates for their customers. Insurers will be required to demonstrate that their use of the information doesn't unfairly discriminate against specific customers. New York is the first state to issue specific…
Content Type: Examples
A study published in February 2019 found that 95% of the predictive accuracy for an individual can be achieved solely by analysing their social ties; the person's data is not needed. Given as few as eight or nine of an individual's contacts, it should be possible to profile the individual even if they don't use the platform. This has profound privacy implications for protesters, elections, and misinformation campaigns.
https://www.nature.com/articles/s41562-018-0510-5.epdf
Writer: James P.…
Content Type: Examples
A study published in January 2019 found that a form of facial recognition technology that interprets emotions in facial expressions assigns more negative emotions to black men's faces than white men's faces. The problem is the latest in a series of ways that facial recognition has failed for non-white subjects. The study used the official photographs of 400 professional NBA basketball players and found two types of bias: black faces were consistently scored as angrier than white faces for every…
Content Type: Examples
In 2018, technology companies and medical providers were experimenting with machine learning and AI to mine health records and online posts to identify patterns linked to suicide, hoping to be able to predict, and therefore prevent, such attempts. On the academic side, a pilot programme conducted by the US Department of Veterans Affairs, REACH.VET, attempted to identify veterans at high risk for self-harm. On the commercial side, companies such as Facebook began experimenting with suicide…
Content Type: Examples
In December 2018, Facebook provided an update on the civil rights audit it asked civil rights leader Laura Murphy to undertake in May. Based on advice Murphy culled from 90 civil society organisations, Facebook said it had expanded its policy prohibiting voter suppression, updated its policy to ban misrepresentation about how to vote, begun sending information about voting to third-party fact checkers for review, and was ramping up efforts to encourage voter registration and engagement.
https…
Content Type: Examples
In December 2018 Walmart was granted a patent for a new listening system for capturing and analysing sounds in shopping facilities. The system would be able to compare rustling shopping bags and cash register beeps to detect theft, monitor employee interactions with customers, and even listen to what customers are saying about products. The company said it had no plans to deploy the system in its retail stores. However, the patent shows that, like the systems in use in Amazon's cashier-less Go…
Content Type: Examples
The pregnancy apps many women were using in December 2018 proved incapable of handling miscarriages, even though up to 20% of all known pregnancies end this way. Users face only two choices: allow the apps to continue sending alerts celebrating the pregnancy's progress, or delete the pregnancy entirely, losing all the records they'd saved - information that doctors routinely request. Many menstruation apps similarly lack the ability to adapt to long breaks and disrupted cycles, and many…
Content Type: Examples
In February 2019, the World Food Programme, a United Nations aid agency, announced a five-year, $45 million partnership with the data analytics company Palantir. WFP, the world's largest humanitarian organisation focusing on hunger and food security, hoped that Palantir, better known for partnering with police and surveillance agencies, could help analyse large amounts of data to create new insights from the data WFP collects from the 90 million people in 80 countries to whom it distributes 3…
Content Type: People
Caitlin is a Senior Campaigns Officer at Privacy International. She works to develop our public campaigning and works with our international partners. Caitlin is also responsible for our volunteer programme.
Content Type: News & Analysis
Picture: XoMEoX CC BY 2.0
1. Definitions of ‘fraud’ lack transparency and are often deceptive. States often define ‘fraud’ in vague and overbroad terms, which creates a seemingly compelling catch-all justification for denying or terminating benefits. The general public will often support this political narrative unless they have a greater understanding of the realities facing social benefit claimants and their experience navigating confusing and complex social benefits systems.…
Content Type: News & Analysis
Picture: Antti T. Nissinen, CC BY 2.0
In addition to the issues we highlighted in stage 1, where intrusive personal information is required in order to apply for social benefits, recipients who seek to maintain their social benefits are required to regularly disclose similar information and are also subjected to the numerous forms of surveillance described above.
1. Social benefits systems use monitoring to exert control over recipients. These systems are imbued with the…
Content Type: News & Analysis
Picture: Christian Schnettelker
1. The process of applying for social benefits subjects people to humiliating and punishing scrutiny. It is gruelling and harmful in and of itself. It requires people to invest significant time and resources, and to disclose vast amounts of personal information. For example, people may be required to turn over troves of personal documents (such as documents that show people’s financial status, housing, income, family structure, and identity), provide biometric…
Content Type: News & Analysis
On Tuesday, Twitter disclosed that it may have shared users' data with advertising partners even if they had opted out of personalised ads, and may have shown people ads, without their permission, based on inferences made about the devices they use. According to Twitter, the issue was fixed on Monday, though it is not yet clear how many users were affected.
This is not the first time that Twitter has had to admit that it leaked user data to advertisers. In May 2019, the social…
Content Type: Long Read
Image credit: Emil Sjöblom [ShareAlike 2.0 Generic (CC BY-SA 2.0)]
Prepaid SIM card use and mandatory SIM card registration laws are especially widespread in countries in Africa: these two factors can allow for a more pervasive system of mass surveillance of people who can access prepaid SIM cards, as well as exclusion from important civic spaces, social networks, and education and health care for people who cannot.
Mandatory SIM card registration laws require that people provide personal…
Content Type: Explainer
Recently the role of social media and search platforms in political campaigning and elections has come under scrutiny. Concerns range from the spread of disinformation, to profiling of users without their knowledge, to micro-targeting of users with tailored messages, to interference by foreign entities, and more. Significant attention has been paid to the transparency of political ads - what are companies doing to provide their users globally with meaningful transparency into how they…
Content Type: Advocacy
Privacy International provided comments to the UK Financial Conduct Authority on the Terms of Reference to its Credit Information Market Study.
We highlighted that:
Credit data (whether ‘traditional’ credit data, data from Open Banking sources, or other sources of data like social media) are hugely revealing of people’s lives far beyond the state of their financial affairs.
The effects of this use of data in the credit sector upon consumer behaviour extend beyond the choices they…
Content Type: Advocacy
Dear Chair and Committee colleagues,
Privacy International is an international NGO, based in London, which works with partners around the world to challenge state and corporate surveillance and data exploitation. As part of our work, we have a dedicated programme “Defending Democracy and Dissent” where we advocate for limits on data exploitation throughout the electoral cycle.
We have been closely following the important work of the Committee. Prompted by the additional evidence provided…
Content Type: News & Analysis
Image: The Great Hack publicity still, courtesy of Netflix.
This is a review of the documentary 'The Great Hack' originally published on IMDb.
This documentary is a fascinating account of The Facebook/Cambridge Analytica data scandal.
In early 2018, Cambridge Analytica became a household name. The company had exploited the personal data of millions of Facebook users, without their knowledge or consent, and used it for political propaganda.
At a running time of almost two hours, The Great…
Content Type: Examples
The Lumi by Pampers nappy tracks a child's urine (not bowel movements) and comes with an app that helps you "Track just about everything". The activity sensor placed on the nappy also tracks the baby's sleep.
Concerns over security and privacy have been raised, given baby monitors can be susceptible to hackers and any app that holds personal information could potentially expose that information.
Experts say the concept could be helpful to some parents but that there…
Content Type: Advocacy
On 26 July 2019, Privacy International sent the attached written evidence submission to the UK All Parliamentary Party Group on Electoral Campaigning Transparency.
In the UK, All-Party Groups (APPGs) are informal groups of Members of both the House of Commons and House of Lords with a common interest in particular issues.
Content Type: Advocacy
As an organisation that has been fighting to protect people's data and privacy since the 1990s, we were left speechless by your redemption of Cambridge Analytica’s former staff ("What Cambridge Analytica’s ex-staff can teach us about data defence”, Gillian Tett, July 24, 2019).
How can we prevent a repeat of the Cambridge Analytica scandal?, Tett asks, yet fails to even mention the core solution that is universally proposed by privacy professionals around the world: comprehensive…