Facebook study of "emotional contagion" turns users into subjects

In July 2014, a study conducted by Adam D. I. Kramer (Facebook) with Jamie E. Guillory and Jeffrey T. Hancock (both Cornell University), published in the Proceedings of the National Academy of Sciences, alerted Facebook users to the fact that for one week in 2012, 689,003 of them had been the subjects of research into "emotional contagion". The researchers altered randomly selected users' News Feeds to show more positive or more negative content, in order to test whether those users then displayed a correspondingly more positive or negative affect. The study found that they did. It also found that lowering the level of emotional content in either direction correlated with users posting less to the site. The experiment likewise demonstrated the power of Facebook's control over the News Feed and the algorithms that determine which of the roughly 1,500 candidate pieces of content appears at the top at any given moment.

Although the researchers did not change or read any of the postings, and the original content remained available elsewhere, the fact that users were being manipulated sparked public controversy. Under the terms of the prevailing privacy policy, the experiment was almost certainly legal; however, few thought it was ethical. In a purely academic setting, such a study would normally be reviewed by the university's Institutional Review Board. In this case, however, Cornell said that because Facebook was responsible for collecting and analyzing the data, and because the week-long experiment had already been conducted, no review from its Human Research Protection Program was required.

https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/
https://www.theguardian.com/science/head-quarters/2014/jul/01/facebook-cornell-study-emotional-contagion-ethics-breach
tags: Facebook, Cornell, research, emotional contagion, algorithms, engagement, experiments
Writers: Robinson Meyer, Chris Chambers
Publications: The Atlantic, The Guardian

What is Privacy International calling for?

People must know

People must be able to know what data is being generated by their devices, by the networks and platforms they use, and by the infrastructure within which those devices become embedded. People should also be able to know, and ultimately determine, how that data is processed.

Limit data analysis by design

As nearly every human interaction now generates some form of data, systems should be designed to limit the invasiveness of data analysis by all parties to a transaction or network.

Control over intelligence

Individuals should have control over the data generated about their activities, conduct, devices, and interactions, and be able to determine who is gaining this intelligence and how it is to be used.

Identities under our control

Individuals must be able to selectively disclose their identity, generate new identities or pseudonyms, and/or remain anonymous.

We should know all our data and profiles

Individuals need to have full insight into their profiles, including full access to data that is derived, inferred, or predicted about them.

We may challenge consequential decisions

Individuals should be able to know about, understand, question, and challenge consequential decisions that are made about them and their environment. This means that controllers, too, must have insight into and control over this processing.