Op-ed: The Unequal Application of Advertising Transparency

This op-ed originally appeared on the Atlantic Council's Disinfo Portal.

The role of social media and search engine companies in political campaigning and elections is under scrutiny. Concerns range from the spread of disinformation and the profiling of users without their knowledge to the micro-targeting of users with tailored messages and foreign election interference. Significant attention has been paid to the transparency of political ads, and more broadly to the transparency of online ads.

While these concerns are held by societies globally, Privacy International’s (PI) recent analysis shows that in jurisdictions where companies have been under pressure to act—by governments and institutions such as the EU, or civil society—they have adopted self-regulatory practices. But they have failed to apply this heightened transparency elsewhere.

In our analysis, PI examined the steps Facebook, Google, and Twitter have taken to provide advertising transparency to their users globally. We found that the companies have taken a blatantly fragmented approach to providing users with such transparency. Most users around the world lack meaningful insight into how ads are being targeted through these platforms.

For example, we found that Facebook requires heightened transparency for political ads in just thirty-five countries, roughly 17 percent of the countries in the world. This means that in the remaining 83 percent of countries the company does not require political advertisers to become authorized, political ads to carry disclosures, or ads to be archived. However, even where Facebook does require political advertisers to become authorized, as it did ahead of the 2019 elections in Argentina and Ukraine, PI has heard from local organizations that the timing of the rollout was far too late to be meaningful, among other problems. PI's Indonesian partner ELSAM found that the problem was particularly acute in Indonesia, a country of 264 million people that recently held elections and where Facebook is the most-used social media platform. There the social network provides no political ad transparency at all.

Google, on the other hand, provides heightened transparency for political ads in just thirty countries—roughly 15 percent of the countries in the world. The transparency it provides users about who is being targeted with those ads is shockingly lax—it provides only ranges of impressions that an ad made, such as ten thousand to one hundred thousand, instead of exact impression data. This means that, in effect, it is not possible to understand political micro-targeting on Google.

Furthermore, Google has not defined what it considers to be political issues, so there is no insight into which issue ads have run or are running, or to whom they are being shown. It is telling that Google has in the past earned 32.6 billion US dollars in a single quarter from digital ads, yet is unwilling to provide users with meaningful transparency about how those ads target them. Google did not respond to PI's requests.

Twitter recently announced that it would ban political advertising. PI is concerned that such a ban will let the platform off the hook for its invasive and opaque ad-targeting capabilities. Before the announcement, Twitter provided heightened transparency only for ads tied to specific elections, rather than for political ads more generally, and only in thirty-two countries, roughly 16 percent of the countries in the world.

Outside of the United States, Twitter did not treat political ads or political issue ads differently from promoted tweets, meaning that these ads, which are political but not tied to an election, ran without transparency. For example, in its analysis PI showed how a United Kingdom Brexit campaign ad ran without being marked as political, and therefore no targeting information was provided. The ad has since been deleted from Twitter's archive. It is unclear whether and how Twitter's ban on political ads will also address promoted tweets that are political in nature but do not relate to an election.

Unfortunately, well-financed political actors will likely be able to work around the bans and find other ways to use the platforms’ advertising systems to reach their desired audiences, and smaller political actors—whose budgets depend on reaching people via these platforms—may be sidelined and silenced.

But simply banning political ads misses the larger picture: Twitter, Facebook, and Google's business models depend on problematic data collection and opaque targeting. PI believes that the 'back end' of content, meaning the design choices, algorithms, and data that ultimately drive and shape the content we see, deserves more attention and consideration for the knock-on effect it can have on political campaigns and democratic debate.

Facebook, Google, and Twitter's deliberate decision to provide some users with better transparency and others with nothing is unacceptable. At present, people using these platforms are not able to understand why they are targeted with any ad, much less a political one. How advertisers target users on social media and search engine platforms is incredibly complex and can involve targeting tools provided by the platforms, or data compiled by advertisers and their sources, such as data brokers. Given the granularity with which advertisers are able to target users on these platforms, the platforms ought to provide much more information about how advertisers are targeting users and why users are seeing a given ad.

Sara Nelson works on advertising transparency as part of Privacy International's Defending Democracy and Dissent portfolio. She headed up PI's analysis of the shortcomings of online platforms' commitments, as outlined in the European Commission's Code of Practice on Disinformation as well as globally, in relation to online political and issue-based advertising transparency.