Twitter, automation, and amplification spread false stories faster than the truth


A 2018 study found that Twitter bots played a disproportionate role in spreading the false claim, made by US President Donald Trump shortly after winning the election but losing the popular vote in November 2016, that 3 million illegal immigrants had voted for his Democratic opponent, Hillary Clinton. After examining 14 million messages shared on Twitter between May 2016 and May 2017, Indiana University researchers found that just 6% of Twitter accounts identified as bots spread 31% of the "low-credibility information" on the site — and, thanks in part to automated amplification, bots began pushing such articles within the first two to ten seconds of their appearance. An earlier MIT Laboratory for Social Media study of 126,000 stories tweeted by 3 million people more than 4.5 million times between 2007 and 2017 reached similar conclusions: false stories travel farther, faster, deeper, and more broadly than the truth. The Indiana study also found evidence that a class of bots deliberately targeted "influencers" to help false stories spread, a finding consistent with a University of Southern California review of 4 million Twitter posts on the Catalan independence referendum. The latter's conclusion: users cannot tell when they are being manipulated, and regulation is needed.

Writer: Jennifer Ouellette
Publication: Ars Technica
