Investigating Ad Transparency Mechanisms in Social Media: A Case Study of Facebook's Explanations

In 2017, Facebook introduced two mechanisms intended to give users greater transparency into its data practices: a "Why am I seeing this?" button that users can click to get an explanation of why they are being shown a particular ad, and an Ad Preferences page that lists the attributes the system has inferred about them. To measure the transparency these ad explanations and data explanations actually provide, a group of researchers ran a series of controlled ad campaigns and examined the explanations Facebook generated. They found that the ad explanations are often incomplete and sometimes misleading, while the data explanations are often incomplete and vague.

These results have two consequences. First, users are likely to form an incorrect understanding of how they are being targeted. Second, malicious advertisers may be able to hide behind common attributes to mask the more sensitive, less common attributes they are actually targeting. Since Twitter has launched similar "explanations," the researchers are concerned that these problems will spread. They plan to follow up by studying how users react to different possible explanations, in order to explore the consequences of these design choices.

To better understand Facebook's advertising mechanisms and provide greater transparency, the researchers also built AdAnalyst, a tool that works on top of Facebook and provides more complete explanations.
https://people.mpi-sws.org/~gummadi/papers/fb_explanations.pdf
http://adanalyst.mpi-sws.org
Authors: Athanasios Andreou, Giridhari Venkatadri, Oana Goga, Krishna P. Gummadi, Patrick Loiseau, Alan Mislove