Facebook and Instagram offer suicide and self-harm reporting tools

In 2016, Facebook and its photo-sharing subsidiary Instagram rolled out a reporting tool that lets users anonymously flag posts suggesting that friends are threatening self-harm or suicide. Flagging a post triggers a message from Instagram to the user in question offering support, including access to a help line and suggestions such as calling a friend. The same messages are triggered if someone searches the service for certain terms, such as "thinspo", which is associated with eating disorders. Facebook says it worked with a number of charities and academics to develop the tool. In 2017, the company added technology that automatically flags posts that may contain expressions of suicidal thoughts for human analysis. In 2018, the company said the enhanced programme was flagging 20 times more posts for reviewers to examine and that twice as many people were receiving Facebook's suicide prevention support materials.
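
Neither company has published implementation details, but the workflow described above combines three triggers: a user report, a risky search term, and an automated classifier feeding a human review queue. The Python sketch below is only an illustration of that flow; every function name, keyword, and threshold is an assumption made for the example, not Facebook's or Instagram's actual code.

# Illustrative sketch of the reporting workflow described above.
# All names, keywords, and thresholds are hypothetical assumptions.

RISK_SEARCH_TERMS = {"thinspo"}      # example search term cited in the article
CLASSIFIER_THRESHOLD = 0.8           # assumed cut-off for automated flagging

def on_user_report(post, reporter):
    """A friend anonymously flags a post as suggesting self-harm or suicide."""
    send_support_message(post.author)              # help line, "call a friend", etc.
    enqueue_for_human_review(post, source="user_report")

def on_search(user, query):
    """Searching for certain terms also triggers the support resources."""
    if query.lower() in RISK_SEARCH_TERMS:
        send_support_message(user)

def on_new_post(post, classifier):
    """Since 2017: a model scores posts and routes likely cases to reviewers."""
    score = classifier.score(post.text)            # hypothetical model interface
    if score >= CLASSIFIER_THRESHOLD:
        enqueue_for_human_review(post, source="automated")

def send_support_message(user):
    # Offer support materials: a help line and suggestions such as contacting a friend.
    ...

def enqueue_for_human_review(post, source):
    # Human reviewers examine flagged posts and decide on any follow-up.
    ...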

https://techcrunch.com/2016/10/19/instagram-tackles-self-harm-and-suicide-with-new-reporting-tools-support-options/
https://techcrunch.com/2016/06/14/facebook-suicide-prevention/
https://www.cnbc.com/2018/02/21/how-facebook-uses-ai-for-suicide-prevention.html
Writers: Sarah Perez, Catherine Shu, Jordan Novet
 

What is Privacy International calling for?

People must know

People must be able to know what data is being generated by the devices, networks, and platforms they use, and by the infrastructure within which those devices become embedded. People should be able to know and ultimately determine the manner of processing.

Limit data analysis by design

As nearly every human interaction now generates some form of data, systems should be designed to limit the invasiveness of data analysis by all parties to a transaction and across the network.

Control over intelligence

Individuals should have control over the data generated about their activities, conduct, devices, and interactions, and be able to determine who is gaining this intelligence and how it is to be used.

Identities under our control

Individuals must be able to selectively disclose their identity, generate new identities or pseudonyms, and/or remain anonymous.

We should know all our data and profiles

Individuals need to have full insight into their profiles. This includes full access to derived, inferred, and predicted data about them.

We may challenge consequential decisions

Individuals should be able to know about, understand, question, and challenge consequential decisions that are made about them and their environment. This means that controllers, too, must have insight into and control over this processing.